
Rohin Shah

TalkRL: The Reinforcement Learning Podcast

CHAPTER

Is There a Canonical Definition of AI Alignment?

There is definitely not a canonical taxonomy of topics, so I think I will just give you maybe my taxonomy of alignment topics, in terms of how alignment relates to AI safety. Accidents are exactly what they sound like: accidents happen when an AI system does something bad and nobody intended for the AI system to do that thing. The process by which designers decide what the AI system intends to do is obviously still an important problem, just, like, not part of this definition as I gave it. But other people would say, no, that's a bad definition, you should include that problem. There's not even a canonical

