

Stuart Russell
Professor of computer science at UC Berkeley, leading researcher in artificial intelligence. Author of the textbook "Artificial Intelligence: A Modern Approach" and the book "Human Compatible: Artificial Intelligence and the Problem of Control".
Top 10 podcasts with Stuart Russell
Ranked by the Snipd community

333 snips
Mar 7, 2023 • 1h 27min
#312 — The Trouble with AI
Stuart Russell, a UC Berkeley professor and author of 'Human Compatible,' and Gary Marcus, a renowned scientist and author, delve into the complexities of artificial intelligence. They explore the limitations of current AI technologies, especially ChatGPT, and the ethical dilemmas surrounding artificial general intelligence. The duo discusses the risks of misinformation, the need for human values in AI systems, and the urgent call for regulations to protect democracy and public safety amid evolving tech. They reveal how business models can exacerbate misinformation crises.

77 snips
Nov 22, 2022 • 1h 8min
Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris
In this insightful discussion, guests include Jay Shapiro, a filmmaker behind an engaging audio documentary series, Eliezer Yudkowsky, a computer scientist renowned for his AI safety work, physicist Max Tegmark, and computer science professor Stuart Russell. They delve into the complexities of AI, revealing the dangers of misaligned objectives and the critical issues of value alignment and control. The conversation touches on the transformative potential of AI juxtaposed with ethical dilemmas, consciousness, and geopolitical concerns surrounding AI weaponization.

39 snips
Dec 9, 2018 • 1h 26min
Stuart Russell: Long-Term Future of AI
Stuart Russell is a professor of computer science at UC Berkeley and a co-author of the book that introduced me and millions of other people to AI, called Artificial Intelligence: A Modern Approach. A video version is available on YouTube. If you would like more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube, where you can watch the video versions of these conversations.

38 snips
Dec 22, 2021 • 58min
AI: A Future for Humans
Stuart Russell suggests a way forward for human control over super-powerful artificial intelligence. He argues for the abandonment of the current “standard model” of AI, proposing instead a new model based on three principles, chief among them the idea that machines should know that they don’t know what humans’ true objectives are. Echoes of the new model are already found in phenomena as diverse as menus, market research, and democracy. Machines designed according to the new model would be, Russell suggests, deferential to humans, cautious and minimally invasive in their behaviour and, crucially, willing to be switched off. He will conclude by exploring further the consequences of success in AI for our future as a species.
Stuart Russell is Professor of Computer Science and founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley.
The programme and question-and-answer session was recorded at the National Innovation Centre for Data in Newcastle upon Tyne.
Presenter: Anita Anand
Producer: Jim Frank
Production Coordinator: Brenda Brown
Editor: Hugh Levinson.

21 snips
Dec 1, 2021 • 58min
The Biggest Event in Human History
Stuart Russell explores the future of Artificial Intelligence and asks: how can we get our relationship with it right? Professor Russell is founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. In this lecture he reflects on the birth of AI, tracing our thinking about it back to Aristotle. He outlines the definition of AI, its successes and failures, and the risks it poses for the future. Referencing the representation of AI systems in film and popular culture, Professor Russell will examine whether our fears are well founded. He will explain what led him, alongside previous Reith Lecturer Professor Stephen Hawking, to say that “success would be the biggest event in human history … and perhaps the last event in human history.” Stuart will ask how this risk arises and whether it can be avoided, allowing humanity and AI to coexist successfully.
This lecture and question-and-answer session was recorded at the Alan Turing Institute at the British Library in London.
Presenter: Anita Anand
Producer: Jim Frank
Editor: Hugh Levinson
Production Coordinator: Brenda Brown
Sound: Neil Churchill and Hal Haines

14 snips
Sep 28, 2023 • 1h 12min
HUMAN COMPATIBLE: Can We Control Artificial Intelligence?
Stuart Russell, an expert in artificial intelligence, discusses the rapid development of AI, the potential extinction of the human species, AGI, language models, specialized processes in the human brain, the potential of superintelligent AI, AI in education and science, and regulating dangerous technologies.

14 snips
Mar 3, 2023 • 58min
UC Berkeley’s Stuart Russell: “ChatGPT is a wake-up call”
The Sunday Times’ tech correspondent Danny Fortson brings on Stuart Russell, professor at UC Berkeley and one of the world’s leading experts on artificial intelligence (AI), to talk about working in the field for decades (4:00), AI’s Sputnik moment (7:45), why these programmes aren’t very good at learning (13:00), trying to inoculate ourselves against the idea that software is sentient (15:00), why superintelligence will require more breakthroughs (17:20), autonomous weapons (26:15), getting politicians to regulate AI in warfare (30:30), building systems to control intelligent machines (36:20), the self-driving car example (39:45), how he figured out how to beat AlphaGo (43:45), the paper clip example (49:50), and the first AI programme he wrote as a 13-year-old (55:45).

7 snips
Apr 27, 2020 • 1h 27min
94 | Stuart Russell on Making Artificial Intelligence Compatible with Humans
In this thought-provoking conversation, Stuart Russell, a distinguished professor of computer science at UC Berkeley and co-founder of the Center for Human-Compatible Artificial Intelligence, discusses the complexities of artificial intelligence and its alignment with human values. He explores the need for AI to learn from human behavior rather than imposing rigid goals. Russell also addresses the existential risks of superintelligent AI, the challenges of decision-making, and the transformative potential of AI in enhancing civilization, calling for a flexible approach to programming these systems.

5 snips
Mar 10, 2023 • 8min
How will AI change the world? | George Zaidan and Stuart Russell
In the coming years, artificial intelligence is probably going to change your life -- and likely the entire world. But people have a hard time agreeing on exactly how AI will affect our society. Can we build AI systems that help us fix the world? Or are we doomed to a robotic takeover? Explore the limitations of artificial intelligence and the possibility of creating human-compatible technology. This TED-Ed lesson was directed by Christoph Sarow, AIM Creative Studios, and narrated by George Zaidan and Stuart Russell, with music by André Aires.

Aug 28, 2021 • 1h 49min
#364 - Stuart Russell - The Terrifying Problem Of AI Control
Stuart Russell, a leading Professor of Computer Science at UC Berkeley and author of 'Human Compatible,' delves into the intricate challenges of AI control. He discusses the unsettling consequences of superhuman AI and the manipulation of user behavior by social media algorithms. The conversation highlights the urgent moral implications of AI technology, the necessity for human oversight, and a reevaluation of AI design to align with human values. Stuart emphasizes the importance of understanding the risks of misaligned objectives, drawing parallels with historical figures like Alan Turing.