
"Lessons learned from talking to >100 academics about AI safety" by Marius Hobbhahn
LessWrong (Curated & Popular)
Don't start with x-risk or alignment. Start with technical problem statements, such as uncontrollability, and work from there. Be open to questions and don't dismiss criticism, even if it has obvious counterarguments. Academic incentives matter to academics: they know that if they want to stay in academia, they have to publish. And the final point: explain, don't convince. The goal is to lay out the problem clearly; people shouldn't feel pressured to agree.


