Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

Navigating AI Alignment Challenges

This chapter explores the challenges of aligning artificial intelligence with human values, emphasizing the risks posed by superintelligent machines. It discusses the difficulty of verifying AI behavior and the shortcomings of existing alignment techniques, particularly in light of recent developments in large language models. The speakers reflect on how models of intelligence have evolved, highlighting concerns about the opacity of modern AI systems and its implications for future safety.
