“Optimistic Assumptions, Longterm Planning, and ‘Cope’” by Raemon

LessWrong (Curated & Popular)

Exploring Calibration and Research Taste in AI Safety

This chapter explores the uncertainties in existential safety research and emphasizes the need to improve prediction-making through trained calibration. It examines the risks and challenges facing theoretical researchers, ML researchers, and policymakers, and proposes strategies such as operationalizing predictions and seeking diverse mentorship to support better decision-making.
