Future of Life Institute Podcast

Brain-like AGI and why it's Dangerous (with Steven Byrnes)

Apr 4, 2025
Steven Byrnes, an AGI safety and alignment researcher at the Astera Institute, explores the intricacies of brain-like AGI. He discusses the differences between controlled AGI and social-instinct AGI, highlighting the relevance of human brain functions to safe AI development. Byrnes emphasizes the importance of aligning AGI motivations with human values and the need for honesty in AI models. He also shares ways individuals can contribute to AGI safety and compares strategies for ensuring that AGI benefits humanity.
AI Snips
INSIGHT

Brain-like AGI's Potential

  • Brain-like AGI could invent science and technology, potentially exceeding human capabilities.
  • This poses an existential risk, requiring careful planning and control mechanisms.
INSIGHT

Foundation Model Plateau

  • Current foundation models will likely plateau before reaching dangerous capabilities.
  • Brain-like AGI safety research is crucial due to the potential for rapid advancements.
INSIGHT

Brain's Subsystems

  • The brain has learning (cortex) and steering (hypothalamus/brainstem) subsystems.
  • This structure mirrors machine learning systems, where a learned component is wrapped in hand-written business logic.
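
This two-subsystem picture can be made concrete with a toy reinforcement-learning setup: a learned component stands in for the cortex, and a fixed, hand-written reward function stands in for the hypothalamus/brainstem "business logic". The sketch below is illustrative only; the class and function names, the toy actions, and the tabular value-learning rule are assumptions for the example, not anything specified in the episode.

```python
import random

class LearningSubsystem:
    """Stand-in for the cortex: action values learned from scratch via reward."""
    def __init__(self, actions, lr=0.1):
        self.values = {a: 0.0 for a in actions}
        self.lr = lr

    def choose(self, epsilon=0.2):
        # Mostly exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, action, reward):
        # Incremental value estimate: nothing here is innate or hand-tuned.
        self.values[action] += self.lr * (reward - self.values[action])

def steering_reward(action):
    """Stand-in for the steering subsystem: fixed, hand-written 'business logic'
    that scores outcomes. It is specified by the designer, not learned."""
    innate_preferences = {"eat": 1.0, "rest": 0.2, "touch_fire": -1.0}
    return innate_preferences[action]

agent = LearningSubsystem(actions=["eat", "rest", "touch_fire"])
for _ in range(500):
    a = agent.choose()
    agent.update(a, steering_reward(a))

print(agent.values)  # learned values come to reflect the hard-coded steering signal
```

The point of the analogy is that the learned part acquires its goals only through the signals the fixed steering part emits, which is why the design of that steering code matters so much for alignment.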