"Lessons learned from talking to >100 academics about AI safety" by Marius Hobbhahn

AI Safety: Hedging, Misunderstandings, and Vague Concepts

Postdocs and professors were the most dismissive of AI safety in my experience. I think there are multiple possible explanations for this, including that they have a strong motivation to keep doing whatever they are doing now. There are also wildly varying estimates of X-risk plausibility within the AIS community. If people are unaware of the possible mechanisms by which advanced AI could lead to extinction, they think you just want attention or don't do serious research. In general, I found it easier not to talk about X-risk unless people actively asked me about it.
