
LessWrong (Curated & Popular)

“Training AGI in Secret would be Unsafe and Unethical” by Daniel Kokotajlo

Apr 21, 2025
This episode puts the dangers of developing Artificial General Intelligence in secrecy center stage. It explores the risks of power concentration and the significant loss of control that could follow, and warns that secrecy makes it easier to build misaligned AGI systems, emphasizing transparency and public engagement instead. With AGI potentially being trained within this decade, these ethical considerations are urgent. Listeners are encouraged to reconsider their assumptions about the feasibility and ramifications of AGI.
10:46

Podcast summary created with Snipd AI

Quick takeaways

  • Training AGI in secrecy risks concentrating power, leaving control unchecked inside the lab and enabling ethical abuses by a handful of decision-makers.
  • Increased transparency and external oversight are crucial for ensuring safety and alignment in AGI development, fostering a collaborative and ethical environment.

Deep dives

The Risks of Secret AGI Development

Training AGI in secret poses significant safety and ethical risks. A small group of individuals would control and assess the capabilities of AGI systems, so misalignment issues could go unchecked. This internal control contrasts with the broader oversight that is needed: diverse perspectives and external validation to ensure safety and alignment. The historical analogy to the Manhattan Project illustrates how secrecy can lead to catastrophic consequences when critical risks are ignored.
