LessWrong (Curated & Popular)

“The best simple argument for Pausing AI?” by Gary Marcus

Jul 3, 2025
The discussion highlights a critical challenge for AI: adhering reliably to rules and guidelines. It argues that without reliable rule-following, efforts to align AI with ethical standards are futile. Notably, even sophisticated models struggle with fundamental rule-based tasks like chess and the Tower of Hanoi, despite being able to explain the rules. This raises urgent questions about deploying generative AI in safety-critical areas, and suggests pausing such deployment until these issues are addressed.
INSIGHT

Rule-Following Is Key For Alignment

  • AI alignment is impossible without the ability to follow rules reliably.
  • Large language models (LLMs) struggle even with simple rule compliance, such as the rules of chess (see the legality-check sketch below).
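To make the chess claim concrete, here is a minimal sketch of how one might check a model's proposed moves for legality. It uses the open-source python-chess library; the check_moves helper and the sample move list are hypothetical illustrations, not anything from the episode.

```python
# pip install chess  (the python-chess library)
import chess

def check_moves(moves_san: list[str]) -> str | None:
    """Play moves in standard algebraic notation on a fresh board;
    return a description of the first illegal move, or None if all are legal."""
    board = chess.Board()
    for san in moves_san:
        try:
            board.push_san(san)  # raises a ValueError subclass on illegal moves
        except ValueError:
            return f"illegal move {san!r} in position {board.fen()}"
    return None

# Hypothetical model output: after 1. e4 e5, White's pawn on e4 cannot
# advance into the occupied e5 square, so the checker flags the third move.
print(check_moves(["e4", "e5", "e5"]))
```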
ANECDOTE

LLMs Fail Basic Rule Tasks

  • LLM reasoning models empirically fail to abide by the rules of chess, even though they can explain those rules.
  • The Tower of Hanoi example shows that LLMs cannot reliably execute moderately complex rule-based tasks (see the sketch below).
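For context, the Tower of Hanoi task cited above is fully specified by three rules and has a textbook recursive solution. A minimal sketch of that standard solution (assuming the usual three-peg formulation; this is not code from the episode) solves any instance in 2^n - 1 moves, which is what makes the reported LLM failures striking:

```python
def hanoi(n: int, source: str, target: str, spare: str) -> list[tuple[str, str]]:
    """Return the optimal move list for n disks: move n-1 disks to the
    spare peg, move the largest disk to the target, then move the n-1
    disks from the spare peg onto it."""
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

# Three disks take 2^3 - 1 = 7 moves.
print(hanoi(3, "A", "C", "B"))
```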
ADVICE

Pause AI Until Rule Compliance

  • Pause the use of generative AI in safety-critical areas until it can reliably follow rules.
  • Avoid hyping LLMs: hype accelerates unsafe global deployment of systems that lack basic alignment.