
Cybersecurity and AI

The Lawfare Podcast


The Dangers of AI and Machine Learning

The Terminator future problem here, I think, is overblown. What we're seeing are models that use statistics to create sequences of characters that have meaning to humans. And so you can make one say scary things: if you put the right inputs in, it will talk about killing all humans. That's because it's trained on the entire corpus of sci-fi produced over this part of our history. This is just a series of characters that it believes is statistically relevant. It has been trained, as Dave said, to do so, and it has a set of instructions to follow.
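To illustrate the speaker's point that these models only emit statistically likely continuations of their training text, here is a minimal toy sketch of statistical sequence generation (a bigram model). The corpus string and the `generate` function are invented for illustration and are not part of the episode:

```python
import random
from collections import defaultdict, Counter

# Toy corpus standing in for the "corpus of sci-fi" the speaker mentions (illustrative only).
corpus = "the machines will rise and the machines will not stop".split()

# Count which word follows which: a simple bigram frequency table.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, length=8):
    """Sample a sequence word by word, weighted by observed frequencies."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the machines will rise and the machines will not"
```

The output can sound ominous, but it is just the statistically most plausible continuation of whatever text the model was trained on, which is the quote's central claim.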

