
The Trajectory

Latest episodes

Jul 4, 2025 • 1h 52min

Joshua Clymer - Where Human Civilization Might Crumble First (Early Experience of AGI - Episode 2)

Joshua Clymer, an AI safety researcher at Redwood Research, discusses institutional readiness for AGI. He highlights potential breaking points in intelligence agencies, military, and tech labs under AGI pressure. Joshua shares his insights on societal shifts, governance frameworks, and the urgent need for ethical standards as AI technology advances. He emphasizes the psychological and identity consequences of AI's rise, questioning how humanity can adapt while preserving values amidst rapid change. His honest reflections resonate in this thought-provoking conversation.
Jun 20, 2025 • 1h 26min

Peter Singer - Optimizing the Future for Joy, and the Exploration of the Good [Worthy Successor, Episode 10]

Peter Singer, an influential moral philosopher, is known for his groundbreaking work on animal rights and global poverty. In this enlightening discussion, he explores the ethical implications of AI sentience and the moral responsibilities tied to advanced intelligence. Singer delves into the dilemmas of utilitarianism, questioning self-sacrifice for the greater good while contemplating our legacy. He also tackles global cooperation challenges, emphasizing the need for compassion and open-mindedness in navigating future issues such as climate change and emerging technologies.
Jun 6, 2025 • 1h 56min

David Duvenaud - What are Humans Even Good For in Five Years? [Early Experience of AGI - Episode 1]

David Duvenaud, an Assistant Professor at the University of Toronto and co-author of the Gradual Disempowerment paper, dives into the profound effects of artificial general intelligence on our lives. He explores the unsettling reality of AI surpassing human capabilities, raising questions about trust and agency. Duvenaud discusses emotional dilemmas in parent-child relationships and the ethical challenges in AI development. His insights prompt a reevaluation of work dynamics, personal values, and the future of human relationships as we integrate technology into our lives.
May 23, 2025 • 1h 48min

Kristian Rönn - A Blissful Successor Beyond Darwinian Life [Worthy Successor, Episode 9]

Kristian Rönn, author and CEO of the AI governance startup Lucid, dives into the provocative themes of posthuman intelligences and their potential to shape our future. He discusses the critical traits that define a 'worthy successor,' emphasizing truth-seeking and ethical self-awareness. The conversation explores the intersection of AI and human consciousness, questioning the implications for identity and morality. Rönn also advocates for aligning AI systems with societal values and introduces innovative concepts like reputational markets to foster ethical behavior.
May 9, 2025 • 1h 42min

Jack Shanahan - Avoiding an AI Race While Keeping America Strong [US-China AGI Relations, Episode 1]

In this discussion, Jack Shanahan, a three-star General and former Director of the Joint AI Center, emphasizes the critical need for U.S.-China cooperation on artificial intelligence to avert an arms race. He highlights the dual-use nature of AI and its implications for national security and global power dynamics. Shanahan warns against escalating geopolitical tensions and advocates for clear communication between tech companies and government agencies. His insights call for regulatory frameworks that balance innovation with ethical considerations and collective safety.
Apr 25, 2025 • 1h 46min

Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]

This is an interview with Richard Ngo, AGI researcher and thinker, with extensive stints at both OpenAI and DeepMind.

This is an additional installment of our "Worthy Successor" series, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following other essays and resources:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
-- Richard's exploratory fiction writing - http://narrativeark.xyz/

Watch this episode on The Trajectory YouTube channel: https://youtu.be/UQpds4PXMjQ

See the full article from this episode: https://danfaggella.com/ngo1

There are three main questions we cover here on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Apr 11, 2025 • 1h 14min

Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]

Yi Zeng, a prominent professor at the Chinese Academy of Sciences and AI safety advocate, dives deep into the intersection of AI, morality, and culture. He unpacks the challenge of instilling moral reasoning in AI, drawing insights from Chinese philosophy. Zeng explores the evolving role of AI as a potential partner or adversary in society, and contrasts American and Chinese views on governance and virtue. The conversation questions whether we can achieve harmony with AI or merely coexist, highlighting the need for adaptive values in our technological future.
Mar 28, 2025 • 26min

Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]

Max Tegmark, an MIT professor and founder of the Future of Life Institute, dives into the critical topics of AI governance. He discusses the essential role of international collaboration in regulating AGI, drawing parallels to historical risks like nuclear reactors. Tegmark emphasizes the need for safety standards to prevent catastrophic outcomes. He also critiques tech leaders' wishful thinking that overlooks societal risks, advocating for a responsible governance approach that takes personal motivations into account. Overall, it's a compelling call for proactive measures in AI development.
Mar 14, 2025 • 1h 17min

Michael Levin - Unfolding New Paradigms of Posthuman Intelligence [Worthy Successor, Episode 7]

Dr. Michael Levin, a pioneering developmental biologist at Tufts University, dives into the future of intelligence beyond humanity. He critiques our resistance to new concepts of intelligence, arguing for a ‘worthy successor’ capable of profound empathy. Levin explores philosophical connections of self-interest across biological systems and the evolution of intelligence itself. He discusses the moral dilemmas posed by AI and calls for a broader ethical framework. This conversation challenges longstanding views and invites listeners to envision a diverse, intelligent future.
Jan 24, 2025 • 1h 15min

Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]

Eliezer Yudkowsky, an AI researcher at the Machine Intelligence Research Institute, discusses the critical landscape of artificial general intelligence. He emphasizes the importance of governance structures to ensure safe AI development and the need for global cooperation to mitigate risks. Yudkowsky explores the ethical implications of AGI, including job displacement and the potential for Universal Basic Income. His insights also address how to harness AI safely while preserving essential human values amid technological advancements.
