Ep 185 | Why Experts Are Suddenly Freaking OUT About AI | Tristan Harris | The Glenn Beck Podcast

The Glenn Beck Program

Chapter: The Alignment Problem in AI

In the field of AI risk, people call this the alignment problem, or containment. How do we make sure that when we create AI that's smarter than us, it's actually aligned with our values, that it only wants to do things that would be good for us? But think about this hypothetical situation. Let's say you have a bunch of Neanderthals, and they start doing gain-of-function research, testing how to invent a new, smarter version of themselves. They create human Homo sapiens. Now imagine the Neanderthals say, but don't worry, because when we create these humans that are 100 times smarter than the Neanderthals, don't worry, we'll make sure...

