“Third-wave AI safety needs sociopolitical thinking” by Richard_Ngo
Mar 27, 2025
Richard Ngo, a speaker at EA Global Boston, discusses pressing themes in AI safety and effective altruism with a focus on sociopolitical thinking. He outlines the three waves of EA/AI safety, argues for the critical need for high-quality sociopolitical engagement, and critiques environmentalism's unintended consequences. Ngo also analyzes cultural dynamics, exploring contrasting views on talent distribution, and examines the regulatory factors shaping economic growth and energy policy. He emphasizes a collaborative approach to AI governance in an ever-evolving landscape.
The third wave of AI safety emphasizes the integration of sociopolitical thinking to navigate the complexities of large-scale interventions responsibly.
Past failures in initiatives like environmentalism highlight the necessity for ethical foresight and critical evaluation in the AI safety movement.
Deep dives
Three Waves of Effective Altruism and AI Safety
The evolution of Effective Altruism (EA) and AI safety is introduced through three distinct waves that illustrate the movement's progression. The first wave, from 2005 to 2013, focused on orientation: key ideas were developed in venues like the Future of Humanity Institute and LessWrong, shaping community perspectives. The second wave, spanning 2014 to 2022, marked increased mobilization, highlighted by impactful initiatives such as the founding of OpenAI and significant publications like 'Superintelligence'. The current third wave emphasizes executing these ideas at a larger scale, addressing real-world implications through events like the FTX collapse and advancements in AI technologies such as ChatGPT.
Transitioning Skills for the Third Wave
As the movement progresses into the third wave, the required skill set shifts from purely impact-focused strategies like 'black swan farming' toward adaptability and virtue ethics. Ensuring positive impact becomes paramount, because interactions with large-scale AI applications can have detrimental outcomes if mishandled. This phase calls for individuals to cultivate a mindset that prioritizes ethical considerations while navigating complex political landscapes, diverging from the execution-focused skills emphasized in the second wave. Being effective entails not only achieving results but also ensuring those results are morally sound.
Lessons from Environmentalism
The podcast critiques the environmental movement as an example of how well-intentioned initiatives can produce unforeseen negative consequences when executed carelessly. It argues that actions like blocking nuclear energy have worsened climate change, highlighting the dangers of operating at scale without critical evaluation. Environmentalism thus serves as a cautionary tale: the AI safety movement must be similarly vigilant about regulating and guiding advancements responsibly. This reflection underscores the importance of building rigorous ethical standards and foresight into any large-scale initiative to avoid repeating past mistakes.
High-Quality Sociopolitical Thinking
Sociopolitical thinking is posited as essential for effective interventions concerning AI and broader societal movements. The discussion advocates for a deep understanding of the structural levers that drive change, urging the community to move beyond purely quantitative measures toward principle-driven reasoning grounded in historical context. Influential thinkers like Dominic Cummings are cited for their ability to identify less obvious drivers of change, suggesting that similar depth is necessary to understand the coming evolution in AI safety. This structured approach aims to inform actionable strategies that account for complex sociopolitical realities while addressing existential risks in the AI realm.
At EA Global Boston last year I gave a talk on how we're in the third wave of EA/AI safety, and how we should approach that. This post contains a (lightly-edited) transcript and slides. Note that in the talk I mention a set of bounties I'll put up for answers to various questions—I still plan to do that, but am taking some more time to get the questions right.
Hey everyone. Thanks for coming. You're probably looking at this slide and thinking, "How to change a changing world"? That's a weird title for a talk. Probably it's a kind of weird talk. And the answer is, yes, it is a kind of weird talk.
So what's up with this talk? Firstly, I am trying to do something a little unusual here. It's quite abstract. It's quite disagreeable as a talk. I'm trying to poke at the things that maybe [...]