Episode 49: AGI Alignment and Safety

The Theory of Anything

The AGI Safety Problem

Some people argue that if we invent an AGI program, it will quickly become a superintelligence, and that we therefore shouldn't pursue AGI research. But the only scenario you would really have to worry about is one where the AGI is far faster than us, and even then there is good reason not to worry too much.
