4min chapter

Schmidt Happens

GoodFellows: Conversations from the Hoover Institution

CHAPTER

The Human Decision Loop in the Military

In AI, what can happen? You can end up with a signal that you don't have time to really contemplate. In military doctrine, as you know, General, humans must be in charge. But when the computer is telling the human to do something, the human does not have time to think about the question. That's the problem. And it gets worse when you have launch on warning. This is the Strangelove example, where the system will launch based on an indication, as opposed to an actual outcome. These technologies are imprecise and learning. They could both be wrong and learn the wrong thing. We're playing with fire since we're

