
Ep 185 | Why Experts Are Suddenly Freaking OUT About AI | Tristan Harris | The Glenn Beck Podcast


The Alignment Problem in AI

In the field of AI risk, people call this the alignment problem, or containment: How do we make sure that when we create an AI that's smarter than us, it's actually aligned with our values and only wants to do things that are good for us? But think about this hypothetical situation. Let's say you have a bunch of Neanderthals, and they start doing gain-of-function research and testing on how to invent a new, smarter version of themselves. They create human Homo sapiens. Now imagine the Neanderthals say, "But don't worry, because when we create these humans that are 100 times smarter than the Neanderthals, we'll make sure

