LessWrong (Curated & Popular)

“Safety researchers should take a public stance” by Ishual, Mateusz Bagiński

Sep 20, 2025
A group of safety researchers discusses the existential risks posed by current AI development. They argue for the necessity of a public stance against current practices and advocate for a coordinated ban on AGI until it's safer to proceed. The conversation highlights why working within existing labs often fails, emphasizing the need for solidarity among researchers to prevent dangerous developments. They explore moral dilemmas and the importance of collective action in prioritizing humanity's future.
INSIGHT

Public Stance For An AGI Ban

  • Many X-risk-concerned people would prefer a ban on AGI over the current lab trajectories, which risk human disempowerment or extinction.
  • The authors argue these people should say so publicly and advocate coordinated pauses on dangerous AI development.
ADVICE

Insist On Free Speech And Praise Whistleblowers

  • Safety-concerned staff should push labs to let employees speak out without retaliation, and should state publicly when their lab does not allow this.
  • Praise those who join labs while publicly opposing the race, and be skeptical of the motives of those who join silently.
INSIGHT

The Monster's Belly Effect

  • Working inside frontier labs to marginally improve outcomes risks gradual moral and policy capture by the lab.
  • The "monster's belly" metaphor describes how an initially noble intent can erode into passive complicity.