Sally Kohn: There's an inscrutability to the current structure of these models. How does it come to do things that I really don't like or want, or that are dangerous? She says we have some idea of why Sydney did that; the problem is that people cannot stop it. Kohn: Nobody knows in detail what Sydney was thinking when she tried to lead the reporter astray. It's not a debuggable technology; all you can do is try to steer it away from repeating a bad thing.
Eliezer Yudkowsky insists that once artificial intelligence becomes smarter than people, everyone on Earth will die. Listen as Yudkowsky speaks with EconTalk's Russ Roberts about why we should be very, very afraid, and why we're neither prepared for nor able to manage the terrifying risks of artificial intelligence.