How much should we invest in AI safety?

Justified Posteriors

Utility Functions and AI Risk Assessment

This chapter examines the role of utility functions in AI safety, asking whether a risk-averse approach would improve decision-making for a hypothetical world government. It contrasts individual and collective perspectives on risk, arguing that in the face of existential threats, preserving humanity should take priority over individual interests.

