
The Trajectory

Latest episodes

Oct 11, 2024 • 1h 14min

Jeff Hawkins - Building a Knowledge-Preserving AGI to Live Beyond Us (Worthy Successor, Episode 5)

Join Jeff Hawkins, founder of Numenta and author of "A Thousand Brains," as he dives into the intricacies of artificial general intelligence. He challenges conventional AI ideas by juxtaposing them with neuroscience insights. The discussion explores the quest for knowledge-preserving AI, emphasizing its role in safeguarding our legacy beyond humanity. Hawkins also critiques current AI limitations, debates the philosophical implications of consciousness, and stresses the need for regulatory frameworks in the evolving tech landscape.
Sep 13, 2024 • 1h 18min

Scott Aaronson - AGI That Evolves Our Values Without Replacing Them (Worthy Successor, Episode 4)

Scott Aaronson, a theoretical computer scientist and Schlumberger Centennial Chair at the University of Texas at Austin, explores the future of artificial general intelligence. He discusses the moral implications of creating successor AIs and questions what kind of posthuman future we should be aiming for. The conversation dives into the evolving relationship between consciousness and ethics, the complexities of aligning AI with human values, and the philosophical inquiries surrounding morality and intelligence in diverse life forms.
Aug 23, 2024 • 1h 30min

Anders Sandberg - Blooming the Space of Intelligence and Value (Worthy Successor Series, Episode 3)

Anders Sandberg, a Computational Neuroscience PhD and researcher at the Mimir Center for Long-Term Futures Research, dives into the exciting realms of artificial general intelligence and future value. He explores who holds power in AGI development and the potential directions for humanity's posthuman future. The conversation also navigates ethical dilemmas surrounding AI, societal evolution, and the intricate relationship between complex ecosystems and moral responsibilities. It's a thought-provoking journey into the future of intelligence and governance.
Aug 9, 2024 • 1h 19min

Richard Sutton - Humanity Never Had Control in the First Place (Worthy Successor Series, Episode 2)

Join Richard Sutton, a renowned Professor from the University of Alberta and a Research Scientist at Keen Technologies, as he delves into the complexities of artificial general intelligence. He discusses who controls AGI and the moral dilemmas it brings. Sutton envisions a decentralized, cooperative future while questioning what true prosperity means in a tech-driven reality. He unpacks humanity's fragile relationship with AI and the necessity for collaboration amid geopolitical challenges, emphasizing the unpredictable journey that lies ahead.
Jul 26, 2024 • 38min

Nick Bostrom - AGI That Saves Room for Us (Worthy Successor Series, Episode 1)

Nick Bostrom, the Founding Director of the Future of Humanity Institute at Oxford, delves into the ethical implications of artificial general intelligence (AGI). He explores the concept of 'worthy successor' intelligences that aim to coexist with humanity while preserving our values. The discussion highlights the need for ethical governance in the face of rapid AI development and the importance of regulating AI to ensure human-centric futures. Bostrom also addresses international competition in technology, particularly between the US and China, advocating for cooperative oversight.
Jun 21, 2024 • 54min

Dan Hendrycks - Avoiding an AGI Arms Race (AGI Destinations Series, Episode 5)

Dan Hendrycks, Executive Director of The Center for AI Safety, discusses the power players in AGI, the posthuman future, and solutions to avoid an AGI arms race. Topics include AI safety, human control, future scenarios, international coordination, preventing the military use of AGI, and collaboration with international organizations for ethical AI development.
May 11, 2024 • 1h 12min

Dileep George - Keep Strong AI as a Tool, Not a Successor (AGI Destinations Series, Episode 4)

This is an interview with Dileep George, AI Researcher at Google DeepMind, previously CTO and Co-founder of Vicarious AI. This is the fourth episode in a 5-part series about "AGI Destinations" - where we unpack the preferable and non-preferable futures humanity might strive towards in the years ahead.

Watch Dileep's episode on The Trajectory YouTube channel: https://youtu.be/nmsuHz43X24

See the full article from this episode: https://danfaggella.com/dileep1

See more of Dileep's ideas - and his humorous AGI comics - at: https://dileeplearning.github.io/

Some of the resources referenced in this episode:
-- The Intelligence Trajectory Political Matrix: http://www.danfaggella.com/itpm...

There are three main questions we'll be covering on The Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, I'm glad to have you here.

Connect:
-- Web -- danfaggella.com/trajectory
-- Twitter -- twitter.com/danfaggella
-- LinkedIn -- linkedin.com/in/danfaggella
-- Newsletter -- bit.ly/TrajectoryTw
-- YouTube -- https://youtube.com/@trajectoryai
Apr 26, 2024 • 1h 6min

Ben Goertzel - Regulating AGI May Do More Harm Than Good (AGI Destinations Series, Episode 3)

In this engaging discussion, Ben Goertzel, CEO of SingularityNET and seasoned AGI researcher, delves into the complex landscape of artificial intelligence. He analyzes the generational divide in attitudes towards AGI regulation and the contrasting global perspectives on its advancement. Goertzel emphasizes the potential harms of regulatory overreach and the need for balancing ambition with altruism. Additionally, he explores the philosophical implications of humanity's relationship with AI, advocating for decentralized approaches to ensure ethical development.
Apr 9, 2024 • 21min

Jaan Tallinn - The Case for a Pause Before We Birth AGI (AGI Destinations Series, Episode 2)

Billionaire tech mogul Jaan Tallinn discusses the preferable and non-preferable futures of AGI, the power players shaping its development, and the importance of aligning actions with evolving values. The conversation explores ethics, the future of humanity, global collaboration, and preparing for the implications of AGI on society.
Mar 9, 2024 • 1h 3min

Yoshua Bengio - Why We Shouldn't Blast Off to AGI Just Yet (AGI Destinations Series, Episode 1)

In this engaging conversation, Dr. Yoshua Bengio, a Turing Award winner and AI pioneer, discusses the complex landscape of artificial intelligence and its implications for humanity. He emphasizes the importance of open dialogue about AI risks and advocates for a cautious, well-regulated approach to advancements. Bengio also highlights historical lessons that illustrate the need for proactive governance to navigate future challenges. His insights urge a balance between innovation and safety while fostering empathy in AI development.
