
Irina Rish – AGI, Scaling and Alignment

The Inside View


Is It Better to Be Moral?

I just believe that there might be some good analogies with human alignment, which is very much an interactive dynamic and happens over time. Some very basic values and rules of behavior, kids learn quite quickly. Plus, they have quite pre-trained, large-scale systems; they are not just learning from scratch. So hopefully, if you apply these ideas to the development of AI systems, a smart enough system may figure out that it's beneficial to be moral. Instrumentally, right? Yeah.
