4 min chapter

12 - AI Existential Risk with Paul Christiano

AXRP - the AI X-risk Research Podcast

CHAPTER

Is It the Right Thing to Do in Computer Science?

I am wondering what you think of these approaches for things that look more like outer alignment, more like trying to specify what a good objective is. I don't really know that much about the history of science, though, so I'm just guessing that that might be a good approach sometimes. So perhaps it's like, the human wants - there's some human who, like, has some desires, and they act a certain way because of those desires, and we use that to do some kind of inference. This might look like inverse reinforcement learning; a simple version of it might look like imitation learning. Anyway, there's then this further thing, an…
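
Since the excerpt points to imitation learning as the "simple version" of inferring what the human wants from how they act, here is a minimal behavioral-cloning sketch. Everything in it (the synthetic demonstrations, the linear softmax policy, the training loop) is an illustrative assumption, not anything described in the episode; the point is only that the learner fits a policy to reproduce the demonstrator's actions rather than modelling their desires explicitly.

```python
# Minimal behavioral-cloning sketch (one simple form of imitation learning).
# Hypothetical setup: states are 4-d feature vectors, the "human" picks one of
# 3 actions; we fit a linear softmax policy to predict the human's action.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstrations standing in for human (state, action) data.
states = rng.normal(size=(500, 4))
true_weights = rng.normal(size=(4, 3))
actions = np.argmax(states @ true_weights + rng.normal(scale=0.1, size=(500, 3)), axis=1)

# Train a linear softmax policy by gradient descent on cross-entropy loss.
W = np.zeros((4, 3))
for _ in range(200):
    logits = states @ W
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    grad = states.T @ (probs - np.eye(3)[actions]) / len(states)
    W -= 0.5 * grad

def policy(state):
    # The cloned policy just imitates: pick the action the demonstrator
    # would most likely have taken in this state.
    return int(np.argmax(state @ W))

print("training accuracy:", np.mean([policy(s) == a for s, a in zip(states, actions)]))
```

Inverse reinforcement learning, by contrast, would treat the demonstrations as evidence about an underlying reward function and then optimize that inferred reward, rather than copying the actions directly.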
