Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

CHAPTER

Shifting Priorities: The Evolution of AI Alignment Concerns

This chapter explores how views on AI alignment have shifted as artificial intelligence has advanced more rapidly than anticipated. It discusses the implications of this acceleration and emphasizes the importance of a Bayesian approach to forecasting future trends in AI development.
