
Shahar Avin–Intelligence Rising, AI Governance

The Inside View


Building Safe AGI and Aligning AI

One consideration that comes up when thinking about those 20 to 50 AI systems is that humans could be out of the loop, as if nothing is going on. If the systems end up being agentic, they might take decisions that humans don't really want. That brings us to the concept of building safe AGI, or safe AI in general, and also aligning AI. What do we mean by safe AI? I think in your game you mostly talk about safe AGI because it's maybe an easier concept: just preventing the system from collapsing in on itself.

