EA Forum Podcast (Curated & popular)

[Linkpost] “Sam Altman’s Chip Ambitions Undercut OpenAI’s Safety Strategy” by Garrison

Feb 10, 2024
Sam Altman, former president of Y Combinator, discusses the potential risks of developing artificial general intelligence (AGI) and OpenAI's strategy. Topics include the concept of an 'intelligence explosion,' the dangers of a large number of human-level AI systems, and the relationship between computing power and AI take-off speed.
06:41

Podcast summary created with Snipd AI

Quick takeaways

  • Sam Altman believes that a shorter timeline for developing AGI allows for more coordination and a slower takeoff to superintelligence, giving more time to solve the safety problem and adapt.
  • Despite his stated concerns about the dangers of a large compute overhang, Sam Altman is reportedly in talks to raise trillions of dollars to vastly expand the supply of compute through AI chip manufacturing, raising questions about whether his views have changed.

Deep dives

Rushing to AGI and the Role of Compute

Sam Altman believes that pursuing artificial general intelligence (AGI) sooner rather than later has potential benefits. He argues that a shorter timeline is more amenable to coordination and could lead to a slower takeoff to superintelligence, allowing more time to solve the safety problem and to adapt. However, accelerating AGI development requires greater availability of compute, which Altman acknowledges is a key input for training AI models: an abundance of compute would make training runs cheaper and more widely accessible.
