Eliezer Yudkowsky - Why AI Will Kill Us, Aligning LLMs, Nature of Intelligence, SciFi, & Rationality

Dwarkesh Podcast

Shifting Priorities: The Evolution of AI Alignment Concerns

This chapter explores how concerns about AI alignment have shifted as artificial intelligence has advanced more rapidly than anticipated. It discusses the implications of this acceleration and emphasizes the importance of a Bayesian approach to forecasting trends in AI development.
