
The Valmy
Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment
Mar 28, 2023
47:41
Podcast: Dwarkesh Podcast
Episode: Ilya Sutskever (OpenAI Chief Scientist) - Building AGI, Alignment, Future Models, Spies, Microsoft, Taiwan, & Enlightenment
Release date: 2023-03-27

Get full access to Dwarkesh Podcast at www.dwarkesh.com/subscribe

I went over to the OpenAI offices in San Francisco to ask the Chief Scientist and cofounder of OpenAI, Ilya Sutskever, about:
* time to AGI
* leaks and spies
* what's after generative models
* post-AGI futures
* working with Microsoft and competing with Google
* difficulty of aligning superhuman AI
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Follow me on Twitter for updates on future episodes.
Timestamps
(00:00) - Time to AGI
(05:57) - What's after generative models?
(10:57) - Data, models, and research
(15:27) - Alignment
(20:53) - Post-AGI Future
(26:56) - New ideas are overrated
(36:22) - Is progress inevitable?
(41:27) - Future Breakthroughs