"Upstream" with Erik Torenberg

E48: Liron Shapira on the Case for Pausing AI

Mar 2, 2024
A discussion of the risks of AI development, AI agency, and superintelligence; a debate over AI optimization versus human values; AI surpassing human intelligence and the resulting power dynamics; skepticism about AI safety and grassroots advocacy; and the existential risks posed by future AI optimization.
INSIGHT

PauseAI's Mission

  • PauseAI, a grassroots organization, advocates for pausing AI development.
  • They believe uncontrolled superhuman AI is imminent and poses an existential threat.
INSIGHT

Non-Ideological Concern

  • Liron Shapira's concern about AI isn't ideological.
  • He simply doesn't want AI to cause human extinction before his children grow up.
INSIGHT

Source of AI Danger

  • Liron Shapira's concern about AI risk is not tied to any specific algorithm, such as transformers.
  • The source of his worry is the nature of intelligence itself as a powerful tool.