"AI systems that know they don't know what the objective is, even though the objective is human benefit. That may sound counterintuitive or even impossible, but it turns out you can formulate it as a mathematical problem, and it's solvable. So there is a rational way for the machine to behave, which initially is going to be very, very cautious, because there's lots of stuff it doesn't know about humans' future plans."
The Sunday Times’ tech correspondent Danny Fortson brings on Stuart Russell, professor at UC Berkeley and one of the world’s leading experts on artificial intelligence (AI), to talk about working in the field for decades (4:00), AI’s Sputnik moment (7:45), why these programmes aren’t very good at learning (13:00), trying to inoculate ourselves against the idea that software is sentient (15:00), why superintelligence will require more breakthroughs (17:20), autonomous weapons (26:15), getting politicians to regulate AI in warfare (30:30), building systems to control intelligent machines (36:20), the self-driving car example (39:45), how he figured out how to beat AlphaGo (43:45), the paper clip example (49:50), and the first AI programme he wrote as a 13-year-old (55:45).
Hosted on Acast. See acast.com/privacy for more information.