

Ryan Greenblatt
Chief scientist at Redwood Research and lead author of the paper "Alignment Faking in Large Language Models". Thinks there's a 25% chance that within four years, AI will be able to do everything needed to run an AI company.
Top 3 podcasts with Ryan Greenblatt
Ranked by the Snipd community

211 snips
Jul 8, 2025 • 2h 51min
#220 – Ryan Greenblatt on the 4 most likely ways for AI to take over, and the case for and against AGI in <8 years
Ryan Greenblatt, chief scientist at Redwood Research, discusses how quickly AI might become able to automate entire companies, putting a 25% chance on AI being capable of running an AI company on its own within four years. He outlines four scenarios by which AI could take over, including self-improvement loops that rapidly outpace human intelligence. The conversation also covers the economic implications, misalignment risks, and the governance needed to keep advanced AIs in check as their capabilities grow.

10 snips
Oct 6, 2025 • 16min
“Notes on fatalities from AI takeover” by ryan_greenblatt
Ryan Greenblatt, an AI safety researcher and writer, examines potential fatalities from AI takeover. He breaks down three main causes of human deaths: takeover strategies themselves, subsequent industrial expansion, and AIs actively motivated to kill. Greenblatt argues that total human extinction is unlikely but that a significant share of humanity could die, estimating fatalities of around 25% across various scenarios. He also explores the complexities of AI motivations and how irrationality in AI decision-making affects these estimates.

Sep 23, 2025 • 16min
“Notes on fatalities from AI takeover” by ryan_greenblatt
Ryan Greenblatt, a researcher focused on AI risk, dives into the dark possibilities of a misaligned AI takeover. He discusses the potential for expected fatalities, estimating around 50%, and acknowledges a 25% chance of human extinction. Greenblatt explores how small motivations in AIs might prevent deaths during industrial expansion but warns of scenarios where these motives could fail. He evaluates how takeover strategies may directly lead to fatalities and concludes that while active kill motivations are unlikely, they still warrant vigilance.