The Trajectory

Daniel Faggella
Nov 29, 2024 • 1h 10min

Mike Brown - AI Cooperation and Competition Between the US and China [AGI Governance, Episode 2]

This is an interview with Mike Brown, Partner at Shield Capital and former Director of the Defense Innovation Unit at the U.S. Department of Defense.

This is the second installment of our "AGI Governance" series - where we explore how important AGI governance is, what it should achieve, and how it should be implemented.

Watch this episode on The Trajectory YouTube channel: https://youtu.be/yUA4voA97kE

This episode referred to the following other essays and resources:
-- The International Governance of AI: https://emerj.com/international-governance-ai/

See the full article from this episode: https://danfaggella.com/brown1 ...

The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Nov 15, 2024 • 1h 34min

Sébastien Krier - Keeping a Pulse on AGI's Takeoff [AGI Governance, Episode 1]

This is an interview with Sébastien Krier, who works in Policy Development and Strategy at Google DeepMind.

This is the first installment of our "AGI Governance" series - where we explore how important AGI governance is, what it should achieve, and how it should be implemented.

Watch this episode on The Trajectory YouTube channel: https://youtu.be/SKl7kcZt57A

This episode referred to the following other essays and resources:
-- The International Governance of AI: https://emerj.com/international-governance-ai/

Read Sébastien's episode highlight: danfaggella.com/krier1 ...

The four main questions we cover in this AGI Governance series are:
1. How important is AGI governance now on a 1-10 scale?
2. What should AGI governance attempt to do?
3. What might AGI governance look like in practice?
4. What should innovators and regulators do now?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
Oct 25, 2024 • 2h 1min

Joscha Bach - Building an AGI to Play the Longest Games [Worthy Successor, Episode 6]

In a captivating discussion, Joscha Bach, a cognitive scientist and AI strategist at Liquid AI, dives deep into the intricacies of artificial general intelligence. He explores who the key players in AGI are, what motivates them, and the potential futures posthuman intelligence might shape. Bach raises ethical questions about AGI consciousness and the implications for humanity. He emphasizes the need for thoughtful regulation while debating the balance of altruism and self-interest in leadership. With eye-opening insights, he envisions a harmonious coexistence between advanced AI and mankind.
Oct 11, 2024 • 1h 14min

Jeff Hawkins - Building a Knowledge-Preserving AGI to Live Beyond Us (Worthy Successor, Episode 5)

Join Jeff Hawkins, founder of Numenta and author of "A Thousand Brains," as he dives into the intricacies of artificial general intelligence. He challenges conventional AI ideas by juxtaposing them with neuroscience insights. The discussion explores the quest for knowledge-preserving AI, emphasizing its role in safeguarding our legacy beyond humanity. Hawkins also critiques current AI limitations, debates the philosophical implications of consciousness, and stresses the need for regulatory frameworks in the evolving tech landscape.
Sep 13, 2024 • 1h 18min

Scott Aaronson - AGI That Evolves Our Values Without Replacing Them (Worthy Successor, Episode 4)

Scott Aaronson, a theoretical computer scientist and Schlumberger Centennial Chair at the University of Texas at Austin, explores the future of artificial general intelligence. He discusses the moral implications of creating successor AIs and questions what kind of posthuman future we should be aiming for. The conversation dives into the evolving relationship between consciousness and ethics, the complexities of aligning AI with human values, and the philosophical inquiries surrounding morality and intelligence in diverse life forms.
Aug 23, 2024 • 1h 30min

Anders Sandberg - Blooming the Space of Intelligence and Value (Worthy Successor Series, Episode 3)

Anders Sandberg, a Computational Neuroscience PhD and researcher at the Mimir Center for Long-Term Futures Research, dives into the exciting realms of artificial general intelligence and future value. He explores who holds power in AGI development and the potential directions for humanity's posthuman future. The conversation also navigates ethical dilemmas surrounding AI, societal evolution, and the intricate relationship between complex ecosystems and moral responsibilities. It's a thought-provoking journey into the future of intelligence and governance.
Aug 9, 2024 • 1h 19min

Richard Sutton - Humanity Never Had Control in the First Place (Worthy Successor Series, Episode 2)

Join Richard Sutton, a renowned Professor from the University of Alberta and a Research Scientist at Keen Technologies, as he delves into the complexities of artificial general intelligence. He discusses who controls AGI and the moral dilemmas it brings. Sutton envisions a decentralized, cooperative future while questioning what true prosperity means in a tech-driven reality. He unpacks humanity's fragile relationship with AI and the necessity for collaboration amid geopolitical challenges, emphasizing the unpredictable journey that lies ahead.
Jul 26, 2024 • 38min

Nick Bostrom - AGI That Saves Room for Us (Worthy Successor Series, Episode 1)

Nick Bostrom, the Founding Director of the Future of Humanity Institute at Oxford, delves into the ethical implications of artificial general intelligence (AGI). He explores the concept of 'worthy successor' intelligences that aim to coexist with humanity while preserving our values. The discussion highlights the need for ethical governance in the face of rapid AI development and the importance of regulating AI to ensure human-centric futures. Bostrom also addresses international competition in technology, particularly between the US and China, advocating for cooperative oversight.
Jun 21, 2024 • 54min

Dan Hendrycks - Avoiding an AGI Arms Race (AGI Destinations Series, Episode 5)

Dan Hendrycks, Executive Director of The Center for AI Safety, discusses the power players in AGI, the posthuman future, and solutions to avoid an AGI arms race. Topics include AI safety, human control, future scenarios, international coordination, preventing the military use of AGI, and collaboration with international organizations for ethical AI development.
May 11, 2024 • 1h 12min

Dileep George - Keep Strong AI as a Tool, Not a Successor (AGI Destinations Series, Episode 4)

This is an interview with Dileep George, AI Researcher at Google DeepMind and previously CTO and Co-founder of Vicarious AI.

This is the fourth episode in a 5-part series about "AGI Destinations" - where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead.

Watch Dileep's episode on The Trajectory YouTube channel: https://youtu.be/nmsuHz43X24

See the full article from this episode: https://danfaggella.com/dileep1

See more of Dileep's ideas - and his humorous AGI comics - at: https://dileeplearning.github.io/

Some of the resources referenced in this episode:
-- The Intelligence Trajectory Political Matrix: http://www.danfaggella.com/itpm ...

There are three main questions we'll be covering on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, I'm glad to have you here.

Connect:
-- Web -- danfaggella.com/trajectory
-- Twitter -- twitter.com/danfaggella
-- LinkedIn -- linkedin.com/in/danfaggella
-- Newsletter -- bit.ly/TrajectoryTw
-- YouTube -- https://youtube.com/@trajectoryai
