
The Trajectory

Latest episodes

Jan 10, 2025 • 1h 45min

Connor Leahy - Slamming the Brakes on the AGI Arms Race [AGI Governance, Episode 5]

This is an interview with Connor Leahy, the Founder and CEO of Conjecture.

This is the fifth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.

Watch this episode on The Trajectory YouTube channel: https://youtu.be/1j--6JYRLVk

See the full article from this episode: https://danfaggella.com/leahy1...

The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Dec 27, 2024 • 1h 41min

Andrea Miotti - A Human-First AI Future [AGI Governance, Episode 4]

This is an interview with Andrea Miotti, the Founder and Executive Director of ControlAI.

This is the fourth installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.

Watch this episode on The Trajectory YouTube channel: https://youtu.be/LNUl0_v7wzE

See the full article from this episode: https://danfaggella.com/miotti1...

The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Dec 13, 2024 • 50min

Stephen Ibaraki - The Beginning of AGI Global Coordination [AGI Governance, Episode 3]

Stephen Ibaraki, Founder of the ITU's AI for Good initiative and Chairman of REDDS Capital, delves into the future of AGI and its ethical implications. He predicts the rise of AGI in the next six to ten years, highlighting potential conflicts among emerging intelligences. The conversation navigates the intricate dynamics of global governance, urging collaboration to balance innovation and ethical standards. Ibaraki underscores the importance of international cooperation, especially between the US and China, in shaping effective AGI regulations.
Nov 29, 2024 • 1h 10min

Mike Brown - AI Cooperation and Competition Between the US and China [AGI Governance, Episode 2]

This is an interview with Mike Brown, Partner at Shield Capital and the former Director of the Defense Innovation Unit at the U.S. Department of Defense.

This is the second installment of our "AGI Governance" series - where we explore how important AGI governance is, what it should achieve, and how it should be implemented.

Watch this episode on The Trajectory YouTube channel: https://youtu.be/yUA4voA97kE

This episode referred to the following other essays and resources:
-- The International Governance of AI: https://emerj.com/international-governance-ai/

See the full article from this episode: https://danfaggella.com/brown1...

The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Nov 15, 2024 • 1h 34min

Sébastien Krier - Keeping a Pulse on AGI's Takeoff [AGI Governance, Episode 1]

This is an interview with Sébastien Krier, who works in Policy Development and Strategy at Google DeepMind.

This is the first installment of our "AGI Governance" series - where we explore the means, objectives, and implementation of governance structures for artificial general intelligence.

Watch this episode on The Trajectory YouTube channel: https://youtu.be/SKl7kcZt57A

This episode referred to the following other essays and resources:
-- The International Governance of AI: https://emerj.com/international-governance-ai/

Read Sébastien's episode highlight: danfaggella.com/krier1...

The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Oct 25, 2024 • 2h 1min

Joscha Bach - Building an AGI to Play the Longest Games [Worthy Successor, Episode 6]

In a captivating discussion, Joscha Bach, a cognitive scientist and AI strategist at Liquid AI, dives deep into the intricacies of artificial general intelligence. He explores who the key players in AGI are, what motivates them, and the potential futures posthuman intelligence might shape. Bach raises ethical questions about AGI consciousness and the implications for humanity. He emphasizes the need for thoughtful regulation while debating the balance of altruism and self-interest in leadership. With eye-opening insights, he envisions a harmonious coexistence between advanced AI and mankind.
Oct 11, 2024 • 1h 14min

Jeff Hawkins - Building a Knowledge-Preserving AGI to Live Beyond Us (Worthy Successor, Episode 5)

Join Jeff Hawkins, founder of Numenta and author of "A Thousand Brains," as he dives into the intricacies of artificial general intelligence. He challenges conventional AI ideas by juxtaposing them with neuroscience insights. The discussion explores the quest for knowledge-preserving AI, emphasizing its role in safeguarding our legacy beyond humanity. Hawkins also critiques current AI limitations, debates the philosophical implications of consciousness, and stresses the need for regulatory frameworks in the evolving tech landscape.
Sep 13, 2024 • 1h 18min

Scott Aaronson - AGI That Evolves Our Values Without Replacing Them (Worthy Successor, Episode 4)

Scott Aaronson, a theoretical computer scientist and Schlumberger Centennial Chair at the University of Texas at Austin, explores the future of artificial general intelligence. He discusses the moral implications of creating successor AIs and questions what kind of posthuman future we should be aiming for. The conversation dives into the evolving relationship between consciousness and ethics, the complexities of aligning AI with human values, and the philosophical inquiries surrounding morality and intelligence in diverse life forms.
Aug 23, 2024 • 1h 30min

Anders Sandberg - Blooming the Space of Intelligence and Value (Worthy Successor Series, Episode 3)

Anders Sandberg, a Computational Neuroscience PhD and researcher at the Mimir Center for Long-Term Futures Research, dives into the exciting realms of artificial general intelligence and future value. He explores who holds power in AGI development and the potential directions for humanity's posthuman future. The conversation also navigates ethical dilemmas surrounding AI, societal evolution, and the intricate relationship between complex ecosystems and moral responsibilities. It's a thought-provoking journey into the future of intelligence and governance.
Aug 9, 2024 • 1h 19min

Richard Sutton - Humanity Never Had Control in the First Place (Worthy Successor Series, Episode 2)

Join Richard Sutton, a renowned Professor from the University of Alberta and a Research Scientist at Keen Technologies, as he delves into the complexities of artificial general intelligence. He discusses who controls AGI and the moral dilemmas it brings. Sutton envisions a decentralized, cooperative future while questioning what true prosperity means in a tech-driven reality. He unpacks humanity's fragile relationship with AI and the necessity for collaboration amid geopolitical challenges, emphasizing the unpredictable journey that lies ahead.

