Machine Learning Street Talk (MLST)

The Compendium - Connor Leahy and Gabriel Alfour

Mar 30, 2025
Connor Leahy and Gabriel Alfour, AI researchers from Conjecture, dive deep into the critical issues of Artificial Superintelligence (ASI) safety. They discuss the existential risks of uncontrolled AI advancement, warning that a superintelligent AI could dominate humanity the way humans dominate less intelligent species. The conversation also covers the need for robust institutions and ethical governance to align AI with human values, and critiques prevailing ideologies such as techno-feudalism.
INSIGHT

AI Safety Concerns

  • Current AI development prioritizes scaling over safety, increasing existential risks.
  • This approach is akin to "alchemy"—experimenting without deep understanding.
INSIGHT

Emergence of Intelligence

  • Intelligence is surprisingly emergent, arising from complex systems much as structure arises from shaking particles.
  • Predicting its limits is difficult, as the gap between chimps and humans shows.
ANECDOTE

Tobacco and Regulation

  • Regulating dangerous activities doesn't require perfect understanding, as tobacco regulation showed before the link to cancer was fully explained.
  • Focus on drawing the smallest possible circle that contains the risk, then refine it as understanding grows.