
Schmidt Happens

GoodFellows: Conversations from the Hoover Institution


The Human Decision Loop in the Military

In AI, what can happen? You can come up with a signal that you don't have time to really contemplate. In military doctrine, as you know, General, humans must be in charge. But when the computer is telling the human to do something, the human does not have time to think about this question. That's the problem. And it gets worse when you have launch on warning. This is the Strangelove example, where the system will launch based on an indication, as opposed to an actual outcome. These technologies are imprecise and learning. They could both be wrong, learning the wrong thing. We're playing with fire since we're…

Transcript
