Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

CHAPTER

Navigating AI Alignment Challenges

This chapter explores the difficulty of aligning artificial intelligence with human values, emphasizing the risks posed by superintelligent machines. It covers the problem of verifying AI behavior and the limitations of existing alignment techniques, particularly in light of recent developments in large language models. The speakers reflect on how models of intelligence have evolved, raising concerns about the opacity of modern AI systems and its implications for future safety.
