Machine Learning Street Talk (MLST)

The Compendium - Connor Leahy and Gabriel Alfour

Mar 30, 2025
Connor Leahy and Gabriel Alfour, AI researchers from Conjecture, dive deep into the critical issues of Artificial Superintelligence (ASI) safety. They discuss the existential risks of uncontrolled AI advancement, warning that a superintelligent AI could dominate humanity much as humans dominate less intelligent species. The conversation also covers the need for robust institutions and ethical governance to align AI with human values, and critiques prevailing ideologies such as techno-feudalism.
01:37:10

Podcast summary created with Snipd AI

Quick takeaways

  • Connor Leahy and Gabriel Alfour warn that uncontrolled AI development poses existential risks, necessitating a serious focus on safety governance.
  • Because AI intelligence develops through algorithms rather than biological processes, its trajectory is hard to predict and effective safety measures are hard to design.

Deep dives

The Urgency of AI Safety Research

The rapid development of artificial intelligence raises significant concerns about safety and the existential risks it poses to humanity. Current advances are driven more by scaling up existing models than by a deeper understanding of intelligence and its implications. There appears to be a race to build ever more powerful AI without sufficient consideration of the safety measures that should be in place. Establishing a focused research initiative, akin to a Manhattan Project for AI safety, is crucial for addressing these challenges and ensuring that AI development remains safe and controlled.
