

The Trajectory
Daniel Faggella
What should be the trajectory of intelligence beyond humanity? The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.
Episodes

Oct 3, 2025 • 37min
Dean Xue Lan - A Multi-Pronged Approach to Pre-AGI Coordination (AGI Governance, Episode 10)
Dean Xue Lan, a distinguished chair professor at Tsinghua University, dives into the complexities of AGI governance. He discusses the necessity of an adaptive governance network involving the UN, nations, and companies. Xue emphasizes the emerging roles of national AI safety institutes and the importance of company commitments to safety. He also highlights the need for international coordination, contingency planning for AGI incidents, and building trust between the U.S. and China. Xue's insights pave the way for a nuanced approach to managing AI's challenges.

Sep 26, 2025 • 1h 10min
RAND’s Joel Predd - Competitive and Cooperative Dynamics of AGI (US-China AGI Relations, Episode 4)
Joel Predd, a senior engineer at RAND Corporation and co-author of influential work on AGI national security, dives into the competitive yet cooperative dynamics of AGI between the US and China. He emphasizes the need to treat AGI as a credible but uncertain force and discusses five critical national security challenges. The conversation explores how AGI could reshape our future, the risks of loss of control, and the importance of robust government-lab relationships. Predd provides practical recommendations for policymakers, urging proactive strategies and crisis preparedness.

Sep 19, 2025 • 51min
Drew Cukor - AI Adoption as a National Security Priority (US-China AGI Relations, Episode 3)
Drew Cukor, a former USMC colonel and AI strategist, shares his insights on the national security implications of AI in the US-China relationship. He argues that the race will be won not by erecting technological barriers but by societal adoption of AI across all sectors. Cukor discusses the stark differences in how the US and China approach AI, advocating for a proactive strategy over mere defensive measures. He warns that failing to adopt AI effectively could weaken America's global economic standing, making a compelling case for integrating AI into military strategy and everyday life.

Sep 12, 2025 • 1h 5min
Stuart Russell - Avoiding the Cliff of Uncontrollable AI (AGI Governance, Episode 9)
Stuart Russell, a Professor of Computer Science at UC Berkeley and author of 'Human Compatible,' dives deep into the urgent need for AGI governance. He likens the current AI race dynamics to a prisoner's dilemma, stressing why governments must outline enforceable red lines. The discussion also highlights the critical role of international cooperation in establishing ethical frameworks. Russell emphasizes that navigating the complexities of AI safety requires a global consensus, mirroring the lessons learned from historical aviation safety.

Sep 5, 2025 • 37min
Craig Mundie - Co-Evolution with AI: Industry First, Regulators Later (AGI Governance, Episode 8)
Craig Mundie, former Chief Research and Strategy Officer at Microsoft, dives into the evolving world of AI and its governance. He discusses how bottom-up governance could arise from commercial pressures, emphasizing international collaboration over regulatory constraints. Mundie advocates for a symbiotic relationship between humans and AI, asserting that proactive governance is vital for the future. He also explores emotional reactions to AGI, comparing them to stages of grief, and promotes an optimistic view of co-evolution in shaping a safe and innovative technological landscape.

Aug 29, 2025 • 1h 33min
Jeremie and Edouard Harris - What Makes US-China Alignment Around AGI So Hard (US-China AGI Relations, Episode 2)
Jeremie and Edouard Harris, co-founders of Gladstone AI and experts in AGI implications for the US government, dive into the complex landscape of US-China relations in artificial intelligence. They discuss the dangers of trusting China with AGI, the ongoing espionage threats in Western labs, and the necessity of tamper-proof technology. The conversation emphasizes strategic collaboration to slow China’s AI progress while ensuring transparency and security, highlighting the importance of international cooperation amidst rising tensions.

Aug 22, 2025 • 1h 19min
Ed Boyden - Neurobiology as a Bridge to a Worthy Successor (Worthy Successor, Episode 13)
Ed Boyden, a renowned neuroscientist and entrepreneur from MIT, discusses the future of intelligence and neurotechnology. He emphasizes the importance of grounding intelligence in reality through his concept of 'ground truth.' The conversation explores the evolution of consciousness and its implications for future sentient beings. Boyden also addresses the risks associated with artificial general intelligence (AGI) and the ethical considerations of merging biology with technology. His insights challenge us to rethink our relationship with intelligence.

Aug 15, 2025 • 1h 29min
Roman Yampolskiy - The Blacker the Box, the Bigger the Risk (Early Experience of AGI, Episode 3)
In this intriguing discussion, Roman Yampolskiy, a computer scientist and authority on AI safety, dives into his 'untestability' hypothesis regarding current AI capabilities. He warns of the potential for unforeseen powers emerging from LLMs and the risk of a 'treacherous turn.' The conversation highlights the need to understand AI's open-ended nature, its impact on jobs, and the importance of thoughtful regulation. Yampolskiy also posits that a superintelligent AI might quietly gather power, urging a proactive approach to ensure safety in our rapidly evolving tech landscape.

Aug 12, 2025 • 1h 25min
Toby Ord - Crucial Updates on the Evolving AGI Risk Landscape (AGI Governance, Episode 7)
Toby Ord, a Senior Researcher at Oxford’s AI Governance Initiative and author of 'The Precipice,' delves into the complexities of AGI risks in this engaging discussion. He highlights the rapid advancements in AI and their ethical implications, urging stronger governance frameworks to keep pace. The conversation explores how AI's evolving moral landscape affects creativity and human agency, while pondering potential rights for advanced AI systems. Ord emphasizes the importance of international collaboration to navigate these challenges effectively.

Aug 1, 2025 • 1h 17min
Martin Rees - If They’re Conscious, We Should Step Aside (Worthy Successor, Episode 12)
In this thought-provoking discussion, Martin Rees, a distinguished British cosmologist and astrophysicist, explores humanity's potential transition to post-human intelligences. He argues we must recognize the possibility of advanced life forms beyond our own and shift away from anthropocentric views. Rees delves into the implications of artificial general intelligence, the moral responsibilities entwined with technological advancements, and the exciting yet daunting future of consciousness. He emphasizes the importance of international cooperation to navigate these unprecedented challenges.