Theorems that we were able to prove suggest that this is the core of how we retain control. For example, we can prove that the machine will allow you to switch it off. But each person has their own preferences about how the future should be, and so trying to infer our underlying preferences from our behaviour and our utterances is a difficult problem. It means sort of inverting how human cognition works in the first place. So there are a lot of difficulties, but I think these all seem like, yeah, there are difficulties, and we've got to make this theory more elaborate.
The Sunday Times’ tech correspondent Danny Fortson brings on Stuart Russell, professor at UC Berkeley and one of the world’s leading experts on artificial intelligence (AI), to talk about working in the field for decades (4:00), AI’s Sputnik moment (7:45), why these programmes aren’t very good at learning (13:00), trying to inoculate ourselves against the idea that software is sentient (15:00), why superintelligence will require more breakthroughs (17:20), autonomous weapons (26:15), getting politicians to regulate AI in warfare (30:30), building systems to control intelligent machines (36:20), the self-driving car example (39:45), how he figured out how to beat AlphaGo (43:45), the paper clip example (49:50), and the first AI programme he wrote as a 13-year-old (55:45).
Hosted on Acast. See acast.com/privacy for more information.