Machine Learning Street Talk (MLST)

Three Red Lines We're About to Cross Toward AGI (Daniel Kokotajlo, Gary Marcus, Dan Hendrycks)

Jun 24, 2025
In this discussion, Gary Marcus, a cognitive scientist and AI skeptic, warns about the persistent cognitive shortcomings of current AI systems. Daniel Kokotajlo, a former OpenAI insider, predicts we could see AGI by 2028 based on current trends. Dan Hendrycks, director of the Center for AI Safety, stresses the urgent need for transparency and collaboration to avert potential disasters. The trio explores the alarming psychological dynamics among AI developers and the critical boundaries that must be respected in this rapidly evolving field.
INSIGHT

Superintelligence Economic Transformation

  • Superintelligent systems that work faster and cheaper than humans could transform the economy.
  • This may enable abundant wealth, disease cures, and even space settlements, depending on who controls the technology.
INSIGHT

Risks of Automated AI Recursion

  • Fully automated AI research and recursive self-improvement pose existential risks.
  • Red lines, such as a ban on fully automated recursive self-improvement, could help avoid destabilization and arms races.
INSIGHT

Rationalization Behind AI Race

  • AI labs rationalize risky development by trusting themselves more than others.
  • This "if we don't do it, someone else will" mindset drives a dangerous AI race.