
Daniel Kokotajlo

Executive director of the AI Futures Project and former OpenAI employee. He is focused on predicting the future of AI.

Top 10 podcasts with Daniel Kokotajlo

Ranked by the Snipd community
2,438 snips
Apr 3, 2025 • 3h 4min

2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo

Scott Alexander, author of popular blogs on AI and culture, joins Daniel Kokotajlo, director of the AI Futures Project, to explore the AI landscape leading up to 2027. They dive into the concept of an intelligence explosion, discussing potential scenarios and the societal implications of superintelligent AI. The conversation covers the challenges of aligning AI developments with human values, the competitive race in AI technology between the U.S. and China, and the transformative potential of AI in fields like manufacturing and biomedicine.
169 snips
May 1, 2025 • 1h 37min

AI 2027: What If Superhuman AI Is Right Around the Corner?

In this enlightening discussion, Daniel Kokotajlo, an AI governance researcher and founder of the AI Futures Project, dives deep into the future of AI development. He explores the possibility of superhuman AI emerging in the next few years and the risks and ethical concerns that come with it. Topics include the evolution of AI and its implications for human cognition, the governance challenges of artificial general intelligence, and the urgency for democratic accountability. Kokotajlo emphasizes the need for careful oversight to navigate the complexities of this transformative technology.
153 snips
May 14, 2025 • 1h 23min

Ep 65: Co-Authors of AI-2027 Daniel Kokotajlo and Thomas Larsen On Their Detailed AI Predictions for the Coming Years

In this engaging discussion, Daniel Kokotajlo, a former OpenAI researcher and AI alignment advocate, teams up with Thomas Larsen, co-author of the AI 2027 report. They dive into the urgent need for safe AI development, exploring the timeline of AI advancements, the growing tensions between the U.S. and China, and the critical risks of misaligned AI. The duo offers compelling predictions for 2027, emphasizes the importance of public awareness, and lays out essential policy recommendations to ensure a safer AI future.
113 snips
Jun 24, 2025 • 2h 7min

Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

In this enlightening discussion, Gary Marcus, a cognitive scientist and AI skeptic, cautions that current AI systems still suffer from fundamental cognitive shortcomings. Daniel Kokotajlo, a former OpenAI insider, predicts we could see AGI by 2028 based on current trends. Dan Hendrycks, director at the Center for AI Safety, stresses the urgent need for transparency and collaboration to avert potential disasters. The trio explores the alarming psychological dynamics among AI developers and the critical boundaries that must be respected in this rapidly evolving field.
48 snips
Apr 3, 2025 • 1h 33min

#39 - Daniel Kokotajlo - Wargames, Superintelligence & Quitting OpenAI

Daniel Kokotajlo, a leading AI researcher and former OpenAI employee, offers fascinating insights into the rapid evolution of AI and the ethical dilemmas that accompany it. He shares his reasons for leaving OpenAI, criticizing its culture and legal constraints. The conversation delves into the use of tabletop wargames as tools for anticipating the geopolitical impact of superintelligence. Daniel also discusses AI alignment challenges, the importance of transparency, and shares predictions about the future, offering both caution and hope regarding AI's trajectory.
45 snips
May 17, 2025 • 38min

OpenAI whistleblower Daniel Kokotajlo on superintelligence and existential risk of AI

In this episode, Daniel Kokotajlo, a former OpenAI researcher and executive director of the AI Futures Project, shares vital insights from the AI 2027 report. He discusses the alarming pace at which artificial general intelligence (AGI) is developing, predicting an 80% chance of its emergence within five to six years. Kokotajlo emphasizes the potential for either a dystopian or utopian future due to AGI and warns about the concentration of power within a few tech firms. He calls for democratic oversight and transparency to mitigate existential risks.
41 snips
Jun 12, 2025 • 1h 14min

#420 - Countdown to Superintelligence

Daniel Kokotajlo, the Executive Director at the AI Futures Project and a former governance researcher at OpenAI, joins Sam Harris to dive into the impending era of superintelligent AI. They explore what an intelligence explosion might look like and the dangers of AI's deceptive behaviors, particularly in large language models. Discussions of the alignment problem emphasize the need for AI systems to act in accordance with human values. They also touch on the economic implications of AI advancements and the potential for government regulation to shape the future of the technology.
28 snips
Jun 4, 2025 • 1h 10min

AI DEBATE: Runaway Superintelligence or Normal Technology? | Daniel Kokotajlo vs Arvind Narayanan

In a thought-provoking debate, Daniel Kokotajlo, a former OpenAI researcher and author of AI 2027, argues that a rapid intelligence explosion is plausible, while Princeton professor Arvind Narayanan, co-author of AI Snake Oil, counters that AI remains controllable and will reshape society only gradually. They explore the intersection of AI capabilities and power dynamics, the societal impact of AI development, and the crucial need for ethical governance. This captivating clash highlights contrasting visions for AI's future.
18 snips
Nov 12, 2024 • 2h 2min

AGI Lab Transparency Requirements & Whistleblower Protections, with Dean W. Ball & Daniel Kokotajlo

Daniel Kokotajlo, a former OpenAI policy researcher, shares his journey advocating for AGI safety, while Dean W. Ball offers insights on AI governance. They discuss the essential need for transparency and effective whistleblower protections in AI labs. Kokotajlo emphasizes the importance of personal sacrifice for ethical integrity, while Ball highlights how collaboration across political lines can influence AI development. Together, they explore the challenges and future of responsible AI policies, underscoring the necessity for independent oversight.
13 snips
Apr 15, 2025 • 38min

Lawfare Daily: Daniel Kokotajlo and Eli Lifland on Their AI 2027 Report

Daniel Kokotajlo, a former OpenAI researcher and Executive Director of the AI Futures Project, along with Eli Lifland, a researcher at the same project, delve into the AI 2027 report, which predicts superhuman AI development within the next decade. They discuss the potential impacts of AI on various sectors and the ethical considerations that arise. The report has sparked lively dialogue on social media, drawing both excitement and skepticism. They emphasize the urgency of preparing for advanced AI and its societal implications.
