
80,000 Hours Podcast

#176 – Nathan Labenz on the final push for AGI, understanding OpenAI's leadership drama, and red-teaming frontier models

Dec 22, 2023
Nathan Labenz, an AI researcher and host of The Cognitive Revolution podcast, dives into the complexities of AGI and the recent turmoil at OpenAI, including Sam Altman's leadership drama. He shares his experiences on the red team for GPT-4, revealing its powerful capabilities in areas like medical diagnostics. The conversation explores ethical concerns surrounding AI development, the delicate balance between innovation and safety, and the importance of responsible governance in navigating the future of artificial intelligence.
03:46:52


Podcast summary created with Snipd AI

Quick takeaways

  • OpenAI should strengthen control measures for advanced AI models.
  • Advanced AI technology needs to be better understood and governed.

Deep dives

Concerns over OpenAI's control measures for advanced AI

During the episode, concerns were raised about OpenAI's control measures for its advanced AI models. The speaker worried that the models' capabilities were improving rapidly while the control measures in place seemed inadequate. He drew on his experience as a red team member, noting that the model failed to refuse unsafe prompts even in its safety edition. He also described his attempts to raise these concerns with OpenAI's board members and his subsequent removal from the red team project. Overall, he emphasized the need for stronger control measures and a closer match between the power of AI models and the safety measures governing them.
