4min chapter


AI Alignment & AGI Fire Alarm - Connor Leahy

Machine Learning Street Talk (MLST)

CHAPTER

Is There a Coherent Utility Function?

Daniel Kahneman won the Nobel Prize for showing that humans do not maximize economic utility, and yet coherent utility functions seem to be a central principle of the AI alignment problem. Is there a tension there, or is this just an outsider looking in? There are links here to Dutch book arguments in probability theory, and a very hard question to be asked: what is rationality? Actually, can I diverge for just a second here? I'd like to introduce a thought experiment, a bit of a Newcomb's paradox.
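The Dutch book argument mentioned here can be made concrete with a small, hypothetical sketch (the function name and numbers are illustrative, not from the episode): an agent whose credences in an event and its complement sum to more than 1 can be sold a pair of bets that loses money in every outcome.

```python
def dutch_book_profit(p_rain: float, p_no_rain: float, stake: float = 1.0) -> float:
    """Agent buys a bet of size `stake` on each outcome, priced at its
    stated credence. Exactly one bet pays out, so the agent's profit is
    payout minus total cost (negative means a guaranteed loss)."""
    cost = stake * (p_rain + p_no_rain)  # price paid for both bets
    payout = stake                       # exactly one outcome occurs
    return payout - cost

# Incoherent credences (0.6 + 0.6 = 1.2): a sure loss, whatever the weather.
incoherent = dutch_book_profit(0.6, 0.6)
# Coherent credences (0.6 + 0.4 = 1.0): no guaranteed loss is possible.
coherent = dutch_book_profit(0.6, 0.4)
```

This is the sense in which violating the probability axioms is "irrational": the loss follows from the prices alone, with no forecasting skill required on the bookie's part.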

00:00
