Machine Learning Street Talk (MLST)

AI Alignment & AGI Fire Alarm - Connor Leahy

Nov 1, 2020
Connor Leahy, a machine learning engineer at Aleph Alpha and founder of EleutherAI, dives into the urgent complexities of AI alignment and AGI. He argues that AI alignment is "philosophy with a deadline," likening AGI's challenges to climate change but with even more catastrophic potential. The discussion covers decision theories such as Newcomb's paradox and the prisoner's dilemma, as well as the dangers of poorly specified utility functions. Leahy and the hosts unravel the philosophical implications of AI, the nature of intelligence, and the pressing need for responsible action in AI development.
INSIGHT

AI Alignment Schools of Thought

  • AI alignment research encompasses diverse approaches, reflecting its colorful history: the field originated in transhumanist newsletters.
  • It has only recently gained mainstream recognition, so researchers vary widely in their familiarity with the older, more radical views.
INSIGHT

AI Alignment Approaches

  • AI alignment approaches differ in their assumptions about problem difficulty, timelines, and what a solution requires, ranging from prosaic to radical.
  • Prosaic AI alignment assumes future AI will resemble current systems and focuses on aligning neural networks, while MIRI's approach emphasizes building fundamental understanding first.
INSIGHT

Defining Intelligence

  • Connor Leahy defines intelligence pragmatically as the ability to solve problems, prioritizing usefulness over philosophical precision.
  • He emphasizes thinking in terms of optimization processes and optimization pressure, measuring a system's power by how effectively it can steer outcomes toward its values.