Machine Learning Street Talk (MLST)

#111 - AI moratorium, Eliezer Yudkowsky, AGI risk etc

Apr 1, 2023
Gary Marcus, a renowned scientist and AI author, dives deep into the heated debate over a proposed six-month pause on advanced AI development. He discusses the implications of an indefinite moratorium on AGI, as advocated by Eliezer Yudkowsky, highlighting the urgency of addressing current ethical dilemmas rather than focusing solely on speculative risks. The conversation covers critiques of long-termism in AI, the complexities of managing its rapid advancements, and the pressing need for regulations that prioritize human interests amidst ongoing debates about safety and governance.
INSIGHT

Intelligence Without Understanding

  • Eliezer Yudkowsky did not doubt that evolutionary computation would eventually work.
  • He believed that enough computing power, combined with gradient descent and related techniques, could produce intelligence.
ADVICE

Managing AI Risks

  • Account for global competition and the near-unstoppable pace of AI progress when weighing moratoriums.
  • Focus on managing AI risks and developing safety measures rather than halting progress.
ADVICE

Open Sourcing AI: A Catastrophe?

  • Avoid open-sourcing powerful AI models that are difficult to control or align.
  • Build only systems you understand and can manage responsibly, to avoid catastrophic outcomes.