Daniel Kokotajlo

Executive director of the AI Futures Project and a former OpenAI employee, focused on forecasting the future of AI.

Top 10 podcasts with Daniel Kokotajlo

Ranked by the Snipd community
2,466 snips
Apr 3, 2025 • 3h 4min

2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo

Scott Alexander, author of popular blogs on AI and culture, joins Daniel Kokotajlo, director of the AI Futures Project, to map out the AI landscape leading up to 2027. They dig into the concept of an intelligence explosion, walk through potential scenarios, and weigh the societal implications of superintelligent AI. The conversation covers the challenges of aligning AI development with human values, the competitive race in AI technology between the U.S. and China, and the transformative potential of AI in fields like manufacturing and biomedicine.
169 snips
May 1, 2025 • 1h 37min

AI 2027: What If Superhuman AI Is Right Around the Corner?

In this enlightening discussion, Daniel Kokotajlo, an AI governance researcher and founder of the AI Futures Project, dives deep into the future of AI development. He explores the possibility of superhuman AI emerging in the next few years and the risks and ethical concerns that come with it. Topics include the evolution of AI and its implications for human cognition, the governance challenges of artificial general intelligence, and the urgency for democratic accountability. Kokotajlo emphasizes the need for careful oversight to navigate the complexities of this transformative technology.
155 snips
Jun 24, 2025 • 2h 7min

Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

In this enlightening discussion, Gary Marcus, a cognitive scientist and AI skeptic, highlights persistent cognitive shortcomings in current AI systems. Daniel Kokotajlo, a former OpenAI insider, predicts we could see AGI by 2028 based on current trends. Dan Hendrycks, director of the Center for AI Safety, stresses the urgent need for transparency and collaboration to avert potential disasters. The trio explores the alarming psychological dynamics among AI developers and the critical boundaries that must be respected in this rapidly evolving field.
153 snips
May 14, 2025 • 1h 23min

Ep 65: Co-Authors of AI-2027 Daniel Kokotajlo and Thomas Larsen On Their Detailed AI Predictions for the Coming Years

In this engaging discussion, Daniel Kokotajlo, a former OpenAI researcher and AI alignment advocate, teams up with Thomas Larsen, co-author of the AI 2027 report. They dive into the urgent need for safe AI development, exploring the timeline of AI advancements, the growing tensions between the U.S. and China, and the critical risks of misaligned AI. The duo offers compelling predictions for 2027, emphasizes the importance of public awareness, and lays out essential policy recommendations to ensure a safer AI future.
95 snips
Jul 3, 2025 • 1h 10min

Why the AI Race Ends in Disaster (with Daniel Kokotajlo)

Daniel Kokotajlo, an AI governance expert behind AI 2027 and the AI Futures Project, discusses AI's potential to outpace the Industrial Revolution in transformative impact. He highlights the risks of AI-driven automated coding and the necessity of transparency in AI development. The conversation also delves into the future of AI communication and the inherent risks of superintelligence. Additionally, Kokotajlo examines the importance of iterative forecasting in navigating the uncertainties of AI's trajectory.
63 snips
Jul 17, 2025 • 38min

Daniel Kokotajlo Forecasts the End of Human Dominance

Daniel Kokotajlo, a former researcher at OpenAI and author of AI 2027, shares alarming insights about the future of AI and its possible dangers. He discusses the rapid evolution of AI and the risks of losing human control over technology. Kokotajlo emphasizes the need for alignment with human values, transparency, and regulatory measures to prevent negative outcomes. He advocates for independent oversight and safe channels for whistleblowers in the AI field, stressing the importance of proactive measures to address these pressing challenges.
63 snips
Jun 12, 2025 • 1h 14min

#420 - Countdown to Superintelligence

Daniel Kokotajlo, the Executive Director at the AI Futures Project and a former governance researcher at OpenAI, joins Sam Harris to dive into the impending era of superintelligent AI. They explore what an intelligence explosion might look like and the dangers of AI's deceptive behaviors, particularly in large language models. Discussions on the alignment problem emphasize the need for AI systems to resonate with human values. They also touch upon the economic implications of AI advancements and the potential for government regulation in shaping the future of technology.
48 snips
Apr 3, 2025 • 1h 33min

#39 - Daniel Kokotajlo - Wargames, Superintelligence & Quitting OpenAI

Daniel Kokotajlo, a leading AI researcher and former OpenAI employee, offers fascinating insights into the rapid evolution of AI and the ethical dilemmas that accompany it. He shares his reasons for leaving OpenAI, criticizing its culture and the legal constraints placed on departing employees. The conversation delves into the use of tabletop wargames as tools for anticipating the geopolitical impact of superintelligence. Daniel also discusses AI alignment challenges and the importance of transparency, and shares predictions about the future, offering both caution and hope regarding AI's trajectory.
45 snips
May 17, 2025 • 38min

OpenAI whistleblower Daniel Kokotajlo on superintelligence and existential risk of AI

In this episode, Daniel Kokotajlo, a former OpenAI researcher and executive director of the AI Futures Project, shares vital insights from the AI 2027 report. He discusses the alarming pace at which artificial general intelligence (AGI) is developing, predicting an 80% chance of its emergence within five to six years. Kokotajlo emphasizes the potential for either a dystopian or utopian future due to AGI and warns about the concentration of power within a few tech firms. He calls for democratic oversight and transparency to mitigate existential risks.
28 snips
Jun 4, 2025 • 1h 10min

AI DEBATE: Runaway Superintelligence or Normal Technology? | Daniel Kokotajlo vs Arvind Narayanan

In a thought-provoking debate, Daniel Kokotajlo, a former OpenAI researcher and author of AI 2027, argues that a rapid intelligence explosion is plausible, while Princeton professor Arvind Narayanan, co-author of AI Snake Oil, counters that AI remains a controllable, normal technology that will reshape society only gradually. They explore the intersection of AI capabilities and power dynamics, the societal impact of AI development, and the crucial need for ethical governance. This captivating clash highlights contrasting visions of AI's future.
