
Dwarkesh Podcast

John Schulman (OpenAI Cofounder) - Reasoning, RLHF, & Plan for 2027 AGI

May 15, 2024
Join John Schulman, OpenAI co-founder and ChatGPT architect, as he dives deep into the future of AI. He discusses how post-training enhances model capabilities and a possible roadmap to AGI by 2027. Schulman highlights the importance of reasoning in AI, the evolution of language models, and the delicate balance between human oversight and automation. He also shares insights on the role of memory in AI systems and how new training methods can reshape interactions, making AI assistants more proactive and effective.
01:36:30

Podcast summary created with Snipd AI

Quick takeaways

  • Post-training refines AI behavior for specific tasks such as chat assistance, improving versatility and content generation.
  • AI models are progressing toward handling complex tasks that span multiple files, improving task efficiency and error recovery.

Deep dives

The Shift Towards AI-Driven Coding Projects

In the near future, models could carry out whole coding projects, transitioning from search engines into collaborative project partners. Even if AI models become capable of managing businesses, Schulman cautions against immediately deploying them to run entire firms. The progression toward artificial general intelligence (AGI) raises questions about future strategies and the implications of its arrival. Schulman, co-founder of OpenAI, highlights the distinctions between pre-training and post-training of AI models, and how each contributes to creating versatile personas for generating content and assisting with specific tasks.
