Each iteration of ChatGPT has demonstrated remarkable step function capabilities. But what’s next? Ilya Sutskever, Co-Founder & Chief Scientist at OpenAI, joins Sarah Guo and Elad Gil to discuss the origins of OpenAI as a capped profit company, early emergent behaviors of GPT models, the token scarcity issue, next frontiers of AI research, his argument for working on AI safety now, and the premise of Superalignment. Plus, how do we define digital life?
Ilya Sutskever is Co-founder and Chief Scientist of OpenAI. He leads research at OpenAI and is one of the architects behind the GPT models. He co-leads OpenAI's new "Superalignment" project, which aims to solve the alignment of superintelligence within four years. Prior to OpenAI, Ilya was a co-inventor of AlexNet and Sequence to Sequence Learning. He earned his Ph.D. in Computer Science from the University of Toronto.
Show Links:
- Ilya Sutskever | LinkedIn
Sign up for new podcasts every week. Email feedback to show@no-priors.com Follow us on Twitter: @NoPriorsPod | @Saranormous | @EladGil | @ilyasut
Show Notes:
(00:00) - Early Days of AI Research
(06:51) - Origins of OpenAI & Capped-Profit Structure
(13:46) - Emergent Behaviors of GPT Models
(17:55) - Model Scale Over Time & Reliability
(22:23) - Roles & Boundaries of Open-Source in the AI Ecosystem
(28:22) - Comparing AI Systems to Biological & Human Intelligence
(30:52) - Definition of Digital Life
(32:59) - Superalignment & Creating Pro-Human AI
(39:01) - Accelerating & Decelerating Forces