The Trajectory

Daniel Faggella
11 snips
Sep 12, 2025 • 1h 5min

Stuart Russell - Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9)

Stuart Russell, a Professor of Computer Science at UC Berkeley and author of 'Human Compatible,' dives deep into the urgent need for AGI governance. He likens the current AI race dynamics to a prisoner's dilemma, stressing why governments must outline enforceable red lines. The discussion also highlights the critical role of international cooperation in establishing ethical frameworks. Russell emphasizes that navigating the complexities of AI safety requires global consensus, drawing on lessons from the history of aviation safety.
27 snips
Sep 5, 2025 • 37min

Craig Mundie - Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8)

Craig Mundie, former Chief Research and Strategy Officer at Microsoft, dives into the evolving world of AI and its governance. He discusses how bottom-up governance could arise from commercial pressures, emphasizing international collaboration over regulatory constraints. Mundie advocates for a symbiotic relationship between humans and AI, asserting that proactive governance is vital for the future. He also explores emotional reactions to AGI, comparing them to stages of grief, and promotes an optimistic view of co-evolution in shaping a safe and innovative technological landscape.
12 snips
Aug 29, 2025 • 1h 33min

Jeremie and Edouard Harris - What Makes US-China Alignment Around AGI So Hard (US-China AGI Relations, Episode 2)

Jeremie and Edouard Harris, co-founders of Gladstone AI and experts in AGI implications for the US government, dive into the complex landscape of US-China relations in artificial intelligence. They discuss the dangers of trusting China with AGI, the ongoing espionage threats in Western labs, and the necessity of tamper-proof technology. The conversation emphasizes strategic collaboration to slow China’s AI progress while ensuring transparency and security, highlighting the importance of international cooperation amidst rising tensions.
12 snips
Aug 22, 2025 • 1h 19min

Ed Boyden - Neurobiology as a Bridge to a Worthy Successor (Worthy Successor, Episode 13)

Ed Boyden, a renowned neuroscientist and entrepreneur from MIT, discusses the future of intelligence and neurotechnology. He emphasizes the importance of grounding intelligence in reality through his concept of 'ground truth.' The conversation explores the evolution of consciousness and its implications for future sentient beings. Boyden also addresses the risks associated with artificial general intelligence (AGI) and the ethical considerations of merging biology with technology. His insights challenge us to rethink our relationship with intelligence.
23 snips
Aug 15, 2025 • 1h 29min

Roman Yampolskiy - The Blacker the Box, the Bigger the Risk (Early Experience of AGI, Episode 3)

In this intriguing discussion, Roman Yampolskiy, a computer scientist and authority on AI safety, dives into his 'untestability' hypothesis regarding current AI capabilities. He warns of unforeseen abilities emerging from LLMs and the risk of a 'treacherous turn.' The conversation highlights the need to understand AI's seemingly limitless nature, its impact on jobs, and the importance of thoughtful regulation. Yampolskiy also posits that a superintelligent AI might quietly gather power, urging a proactive approach to safety in our rapidly evolving tech landscape.
26 snips
Aug 12, 2025 • 1h 25min

Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)

Toby Ord, a Senior Researcher at Oxford’s AI Governance Initiative and author of 'The Precipice,' delves into the complexities of AGI risks in this engaging discussion. He highlights the rapid advancements in AI and their ethical implications, urging stronger governance frameworks to keep pace. The conversation explores how AI's evolving moral landscape affects creativity and human agency, while pondering potential rights for advanced AI systems. Ord emphasizes the importance of international collaboration to navigate these challenges effectively.
14 snips
Aug 1, 2025 • 1h 17min

Martin Rees - If They’re Conscious, We Should Step Aside (Worthy Successor, Episode 12)

In this thought-provoking discussion, Martin Rees, a distinguished British cosmologist and astrophysicist, explores humanity's potential transition to post-human intelligences. He argues we must recognize the possibility of advanced life forms beyond our own and shift away from anthropocentric views. Rees delves into the implications of artificial general intelligence, the moral responsibilities entwined with technological advancements, and the exciting yet daunting future of consciousness. He emphasizes the importance of international cooperation to navigate these unprecedented challenges.
8 snips
Jul 18, 2025 • 1h 31min

Emmett Shear - AGI as "Another Kind of Cell" in the Tissue of Life (Worthy Successor, Episode 11)

In a thought-provoking dialogue, Emmett Shear, CEO of SoftMax and co-founder of Twitch, shares his insights on AGI as a new kind of living cell within the ecosystem of intelligence. He dives into the moral obligations we have towards future digital minds and the balance between safety and innovation. The conversation tackles profound questions about self-identity, emotions in AI, and the evolving nature of consciousness. Emmett also emphasizes the importance of aligning AI with human values to navigate the complexities of our technological future.
25 snips
Jul 4, 2025 • 1h 52min

Joshua Clymer - Where Human Civilization Might Crumble First (Early Experience of AGI - Episode 2)

Joshua Clymer, an AI safety researcher at Redwood Research, discusses institutional readiness for AGI. He highlights potential breaking points in intelligence agencies, military, and tech labs under AGI pressure. Joshua shares his insights on societal shifts, governance frameworks, and the urgent need for ethical standards as AI technology advances. He emphasizes the psychological and identity consequences of AI's rise, questioning how humanity can adapt while preserving values amidst rapid change. His honest reflections resonate in this thought-provoking conversation.
Jun 20, 2025 • 1h 26min

Peter Singer - Optimizing the Future for Joy, and the Exploration of the Good [Worthy Successor, Episode 10]

Peter Singer, an influential moral philosopher, is known for his groundbreaking work on animal rights and global poverty. In this enlightening discussion, he explores the ethical implications of AI sentience and the moral responsibilities tied to advanced intelligence. Singer delves into the dilemmas of utilitarianism, questioning self-sacrifice for the greater good while contemplating our legacy. He also tackles global cooperation challenges, emphasizing the need for compassion and open-mindedness in navigating future issues such as climate change and emerging technologies.
