
“Safety isn’t safety without a social model (or: dispelling the myth of per se technical safety)” by Andrew_Critch
LessWrong (Curated & Popular)
Dispelling Myths About Technical AI Safety and Alignment
This chapter challenges the misconception that progress on technical AI safety and alignment is beneficial in itself, arguing that its effects depend on the human and institutional environment in which it is deployed. It stresses that understanding the social context of AI deployment is essential to ensuring that such progress actually benefits humanity.