
The Trajectory

Latest episodes

May 23, 2025 • 1h 48min

Kristian Rönn - A Blissful Successor Beyond Darwinian Life [Worthy Successor, Episode 9]

Kristian Rönn, author and CEO of the AI governance startup Lucid, dives into the provocative themes of posthuman intelligences and their potential to shape our future. He discusses the critical traits that define a 'worthy successor,' emphasizing truth-seeking and ethical self-awareness. The conversation explores the intersection of AI and human consciousness, questioning the implications for identity and morality. Rönn also advocates for aligning AI systems with societal values and introduces innovative concepts like reputational markets to foster ethical behavior.
May 9, 2025 • 1h 42min

Jack Shanahan - Avoiding an AI Race While Keeping America Strong [US-China AGI Relations, Episode 1]

In this discussion, Jack Shanahan, a three-star General and former Director of the Joint AI Center, emphasizes the critical need for U.S.-China cooperation on artificial intelligence to avert an arms race. He highlights the dual-use nature of AI and its implications for national security and global power dynamics. Shanahan warns against escalating geopolitical tensions and advocates for clear communication between tech companies and government agencies. His insights call for regulatory frameworks that balance innovation with ethical considerations and collective safety.
Apr 25, 2025 • 1h 46min

Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]

This is an interview with Richard Ngo, an AGI researcher and thinker with extensive stints at both OpenAI and DeepMind.

This is an additional installment of our "Worthy Successor" series, where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following essays and resources:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
-- Richard's exploratory fiction writing: http://narrativeark.xyz/

Watch this episode on The Trajectory YouTube channel: https://youtu.be/UQpds4PXMjQ

See the full article from this episode: https://danfaggella.com/ngo1

The three main questions we cover here on The Trajectory are:
1. Who are the power players in AGI, and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Apr 11, 2025 • 1h 14min

Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]

Yi Zeng, a prominent professor at the Chinese Academy of Sciences and AI safety advocate, dives deep into the intersection of AI, morality, and culture. He unpacks the challenge of instilling moral reasoning in AI, drawing insights from Chinese philosophy. Zeng explores the evolving role of AI as a potential partner or adversary in society, and contrasts American and Chinese views on governance and virtue. The conversation questions whether we can achieve harmony with AI or merely coexist, highlighting the need for adaptive values in our technological future.
Mar 28, 2025 • 26min

Max Tegmark - The Lynchpin Factors to Achieving AGI Governance [AI Safety Connect, Episode 1]

Max Tegmark, an MIT professor and co-founder of the Future of Life Institute, dives into the critical topic of AI governance. He discusses the essential role of international collaboration in regulating AGI, drawing parallels to historical risks like nuclear reactors. Tegmark emphasizes the need for safety standards to prevent catastrophic outcomes. He also critiques tech leaders' wishful thinking that overlooks societal risks, advocating for a responsible governance approach that takes personal motivations into account. Overall, it's a compelling call for proactive measures in AI development.
Mar 14, 2025 • 1h 17min

Michael Levin - Unfolding New Paradigms of Posthuman Intelligence [Worthy Successor, Episode 7]

Dr. Michael Levin, a pioneering developmental biologist at Tufts University, dives into the future of intelligence beyond humanity. He critiques our resistance to new concepts of intelligence, arguing for a ‘worthy successor’ capable of profound empathy. Levin explores philosophical connections of self-interest across biological systems and the evolution of intelligence itself. He discusses the moral dilemmas posed by AI and calls for a broader ethical framework. This conversation challenges longstanding views and invites listeners to envision a diverse, intelligent future.
Jan 24, 2025 • 1h 15min

Eliezer Yudkowsky - Human Augmentation as a Safer AGI Pathway [AGI Governance, Episode 6]

Eliezer Yudkowsky, an AI researcher at the Machine Intelligence Research Institute, discusses the critical landscape of artificial general intelligence. He emphasizes the importance of governance structures to ensure safe AI development and the need for global cooperation to mitigate risks. Yudkowsky explores the ethical implications of AGI, including job displacement and the potential for Universal Basic Income. His insights also address how to harness AI safely while preserving essential human values amid technological advancements.
Jan 10, 2025 • 1h 45min

Connor Leahy - Slamming the Brakes on the AGI Arms Race [AGI Governance, Episode 5]

This is an interview with Connor Leahy, the Founder and CEO of Conjecture.

This is the fifth installment of our "AGI Governance" series, where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.

Watch this episode on The Trajectory YouTube channel: https://youtu.be/1j--6JYRLVk

See the full article from this episode: https://danfaggella.com/leahy1

The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Dec 27, 2024 • 1h 41min

Andrea Miotti - A Human-First AI Future [AGI Governance, Episode 4]

This is an interview with Andrea Miotti, the Founder and Executive Director of ControlAI.

This is the fourth installment of our "AGI Governance" series, where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.

Watch this episode on The Trajectory YouTube channel: https://youtu.be/LNUl0_v7wzE

See the full article from this episode: https://danfaggella.com/miotti1

The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Dec 13, 2024 • 50min

Stephen Ibaraki - The Beginning of AGI Global Coordination [AGI Governance, Episode 3]

Stephen Ibaraki, Founder of the ITU's AI for Good initiative and Chairman of REDDS Capital, delves into the future of AGI and its ethical implications. He predicts the rise of AGI in the next six to ten years, highlighting potential conflicts among emerging intelligences. The conversation navigates the intricate dynamics of global governance, urging collaboration to balance innovation and ethical standards. Ibaraki underscores the importance of international cooperation, especially between the US and China, in shaping effective AGI regulations.
