Rohin Shah

TalkRL: The Reinforcement Learning Podcast

Is There a Canonical Definition of AI Alignment?

There is definitely not a canonical taxonomy of topics, so I think I will just give you maybe my taxonomy of alignment topics, in terms of how alignment relates to AI safety. Accidents are exactly what they sound like: accidents happen when an AI system does something bad and nobody intended for the AI system to do that thing. Then there's the process by which designers decide what the AI system is intended to do. That's obviously still an important problem, just not part of this definition as I gave it, but other people would say, no, that's a bad definition, you should include that problem. There's not even a canonical…
