Machine Learning Street Talk (MLST)

#034 Eray Özkural - AGI, Simulations & Safety

Dec 20, 2020
Dr. Eray Özkural, an AGI researcher and founder of Celestial Intellect Cybernetics, critiques mainstream AI safety narratives, arguing they're rooted in fearmongering. He shares his skepticism about the intelligence explosion hypothesis and discusses the complexities of defining intelligence. The conversation also dives into the simulation argument, challenging its validity and exploring its implications. The panel covers the urgent need for nuanced approaches to AGI and the ethics surrounding AI development, urging a departure from doomsday thinking.
INSIGHT

DeepMind and AI Risk

  • Some believe DeepMind founders' closeness to AI risk proponents influenced their views.
  • This connection may have led to support for AI risk ideas within DeepMind.
INSIGHT

AI and Social Intelligence

  • Superintelligent agents, possessing social intelligence, are unlikely to disregard humans the way humans disregard ants.
  • It is paradoxical to design agents with flawed objectives and then criticize the resulting behavior.
ANECDOTE

Real vs. Hypothetical AI Risks

  • Real dangers exist in robotic safety, such as warehouse accidents and the military use of autonomous weapons.
  • These concrete risks contrast with the doomsday prophecies of figures like Nick Bostrom.