
#64 – Michael Aird on Strategies for Reducing AI Existential Risk
Hear This Idea
The Importance of Polarization in AI
Ellie Azidkowski: I feel like we've kind of moved beyond just attention hazards. Number three is advancing some risky R&D areas via things other than info hazards; speeding that work up is bad because it means we have less time to do safety and alignment research. Number four is polarizing or making partisan some important policies, ideas, and communities.


