Clearer Thinking with Spencer Greenberg

AI Safety and Solutions (with Robert Miles)

May 22, 2021
Robert Miles, a science communicator focused on AI safety and alignment, shares insights on the pressing need for AI safety as we advance towards artificial general intelligence (AGI). He discusses the complexity of defining utility functions and the potential existential risks involved. The conversation explores instrumental convergence, the unilateralist's curse, and the challenges of creating AI that aligns with human values. Miles emphasizes the importance of community support in science communication and the necessity for responsible management of AI technology.
ADVICE

Prioritizing Quality Content

  • Focus on quality content over maximizing views.
  • Fund your work through grants and patrons to maintain creative freedom.
INSIGHT

AI Safety's Importance

  • AI safety is the most interesting topic because it encompasses fundamental philosophical questions.
  • It combines intelligence, values, and high stakes with a deadline.
ANECDOTE

Information Consumption

  • People often cite high-status sources like books, even when they only consumed summaries.
  • This overrates those sources and neglects the impact of simpler media.