3min chapter

Data Skeptic cover image

AGI Can Be Safe

Data Skeptic

CHAPTER

The Fear of Stopping Machine Learning Systems

I'm not in search of the magical medical safe reward function I agree to almost everybody else with everybody else that yeah you will not find it humans are fadable they will not know what they want. The only approach you can take is to specify one which is reasonably safe then when you see a mistake stop the computer and adjust it. When you see another thing that you don't like stop the computer again and adjust it as part of your AI safety critters. Even a simple queue learner, even the simplest possible reinforcement learner will actually consider that it might not obey you if you change it.

00:00

Get the Snipd
podcast app

Unlock the knowledge in podcasts with the podcast player of the future.
App store bannerPlay store banner

AI-powered
podcast player

Listen to all your favourite podcasts with AI-powered features

Discover
highlights

Listen to the best highlights from the podcasts you love and dive into the full episode

Save any
moment

Hear something you like? Tap your headphones to save it with AI-generated key takeaways

Share
& Export

Send highlights to Twitter, WhatsApp or export them to Notion, Readwise & more

AI-powered
podcast player

Listen to all your favourite podcasts with AI-powered features

Discover
highlights

Listen to the best highlights from the podcasts you love and dive into the full episode