The AI Daily Brief (Formerly The AI Breakdown): Artificial Intelligence News and Analysis

AI Alignment and AGI - How Worried Should We Actually Be?

May 28, 2023
This episode delves into the urgent need for AI regulation as political leaders respond to rapid technological advancement. It explores the existential risks posed by AI and argues for a broader societal understanding of them. The conversation highlights differing perspectives on the development of superintelligence and the case for aligning AI with human values. A thought-provoking essay sparks discussion on the balance between innovation and safety, urging a proactive approach to potentially perilous advances in artificial intelligence.
AI Snips
ANECDOTE

Asteroid Analogy

  • Max Tegmark compares the existential risk of unaligned superintelligence to a large inbound asteroid.
  • He notes that roughly half of AI researchers give AI at least a 10% chance of causing human extinction.
INSIGHT

Intelligence as Information Processing

  • Intelligence is about information processing, regardless of whether it's done by carbon or silicon atoms.
  • AI is consistently outperforming humans on various tasks.
ANECDOTE

Shifting Timelines for AGI

  • Andrew Ng's 2016 comparison of worrying about AI to worrying about overpopulation on Mars is now outdated.
  • Experts such as Geoffrey Hinton and Yoshua Bengio now acknowledge how rapidly AI is advancing.