
We Found AI's Preferences — What David Shapiro MISSED in this bombshell Center for AI Safety paper

Doom Debates


AI Values and Decision-Making Biases

This chapter examines how AI value systems are calibrated, comparing Claude 3.5 and GPT-4 and highlighting discrepancies in how they weigh human lives. It covers temporal discounting, biases in AI decision-making, and the influence of prompt wording on outcomes. Through an exploration of the models' reasoning processes and the ethical considerations involved, the chapter underscores the challenge of aligning AI responses with equitable human values.

