8 - Assistance Games with Dylan Hadfield-Menell

AXRP - the AI X-risk Research Podcast

How Does the Uncertainty Over the Human Reward Function Resolve?

We'll try to link at least the blog posts in the episode description and in the transcript. The paper has a section where it talks about manually modifying the robot's uncertainty over the human's utility function. We had it set so that, I believe, beta equals zero corresponded to a rational person, and as beta increased, the person would make more and more errors. In all of our results where we analyzed this, there was effectively a trade-off between beta and the robot's uncertainty about the human's utility, and that trade-off determines whether or not the robot chooses to interact with you.
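To make that trade-off concrete, here is a minimal sketch in the spirit of the off-switch game, not the paper's exact formulation: the human is modeled as Boltzmann-noisy with parameter beta (beta = 0 is perfectly rational, larger beta means more errors), the robot holds a Gaussian belief over the human's utility U for its proposed action, and it compares deferring to the human against acting unilaterally or switching itself off. The Gaussian parameters and the sigmoid noise model are illustrative assumptions.

```python
import numpy as np

def value_of_deferring(belief_samples, beta):
    """E[U * P(human allows | U)]: the robot gets U only when the noisy
    human approves its action, and 0 when the human switches it off."""
    u = np.asarray(belief_samples)
    if beta == 0:
        p_allow = (u > 0).astype(float)            # perfectly rational human
    else:
        p_allow = 1.0 / (1.0 + np.exp(-u / beta))  # Boltzmann-noisy human
    return float(np.mean(u * p_allow))

rng = np.random.default_rng(0)
# Robot's belief over the human's utility U for its proposed action.
# These Gaussian parameters are illustrative, not from the paper.
belief = rng.normal(loc=0.5, scale=1.0, size=100_000)

act_now = float(belief.mean())   # act without consulting the human
switch_off = 0.0                 # shut down unilaterally
for beta in [0.0, 0.5, 2.0, 10.0]:
    defer = value_of_deferring(belief, beta)
    options = {"defer": defer, "act": act_now, "off": switch_off}
    choice = max(options, key=options.get)
    print(f"beta={beta:5.1f}  E[defer]={defer:+.3f}  E[act]={act_now:+.3f}  -> {choice}")
```

Under these assumptions, a small beta makes deferring dominate, because the human reliably vetoes bad actions, while a large beta makes the robot prefer acting on its own estimate, which is the trade-off described above.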
