
Stuart Russell

Professor of Computer Science at UC Berkeley and author of the textbook "Artificial Intelligence: A Modern Approach". Expert in artificial intelligence and its implications.

Top 10 podcasts with Stuart Russell

Ranked by the Snipd community
327 snips
Mar 7, 2023 • 1h 27min

#312 — The Trouble with AI

Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence and the long-term risks of producing artificial general intelligence (AGI). They discuss the limitations of Deep Learning, the surprising power of narrow AI, ChatGPT, a possible misinformation apocalypse, the problem of instantiating human values, the business model of the Internet, the metaverse, digital provenance, using AI to control AI, the control problem, emergent goals, locking down core values, programming uncertainty about human values into AGI, the prospects of slowing or stopping AI progress, and other topics. If the Making Sense podcast logo in your player is BLACK, you can SUBSCRIBE to gain access to all full-length episodes at samharris.org/subscribe. Learning how to train your mind is the single greatest investment you can make in life. That’s why Sam Harris created the Waking Up app. From rational mindfulness practice to lessons on some of life’s most important topics, join Sam as he demystifies the practice of meditation and explores the theory behind it.
77 snips
Nov 22, 2022 • 1h 8min

Making Sense of Artificial Intelligence | Episode 1 of The Essential Sam Harris

Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating. In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI – including the control problem and the value-alignment problem – as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance. We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we’re building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
34 snips
Dec 22, 2021 • 58min

AI: A Future for Humans

Stuart Russell suggests a way forward for human control over super-powerful artificial intelligence. He argues for the abandonment of the current “standard model” of AI, proposing instead a new model based on three principles – chief among them the idea that machines should know that they don’t know what humans’ true objectives are. Echoes of the new model are already found in phenomena as diverse as menus, market research, and democracy. Machines designed according to the new model would be, Russell suggests, deferential to humans, cautious and minimally invasive in their behaviour and, crucially, willing to be switched off. He concludes by exploring further the consequences of success in AI for our future as a species. Stuart Russell is Professor of Computer Science and founder of the Center for Human-Compatible Artificial Intelligence at the University of California, Berkeley. The programme and question-and-answer session was recorded at the National Innovation Centre for Data in Newcastle Upon Tyne. Presenter: Anita Anand Producer: Jim Frank Production Coordinator: Brenda Brown Editor: Hugh Levinson.
25 snips
Dec 9, 2018 • 1h 26min

Stuart Russell: Long-Term Future of AI

Stuart Russell is a professor of computer science at UC Berkeley and a co-author of the book that introduced me and millions of other people to AI, called Artificial Intelligence: A Modern Approach.  Video version is available on YouTube. If you would like to get more information about this podcast go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, or YouTube where you can watch the video versions of these conversations.
15 snips
Nov 22, 2022 • 2h 12min

Making Sense of Artificial Intelligence

Filmmaker Jay Shapiro has produced a new series of audio documentaries, exploring the major topics that Sam has focused on over the course of his career. Each episode weaves together original analysis, critical perspective, and novel thought experiments with some of the most compelling exchanges from the Making Sense archive. Whether you are new to a particular topic, or think you have your mind made up about it, we think you’ll find this series fascinating. And make sure to stick around for the end of each episode, where we provide our list of recommendations from the worlds of film, television, literature, music, and art. In this episode, we explore the landscape of Artificial Intelligence. We’ll listen in on Sam’s conversation with decision theorist and artificial-intelligence researcher Eliezer Yudkowsky, as we consider the potential dangers of AI — including the control problem and the value-alignment problem — as well as the concepts of Artificial General Intelligence, Narrow Artificial Intelligence, and Artificial Super Intelligence. We’ll then be introduced to philosopher Nick Bostrom’s “Genies, Sovereigns, Oracles, and Tools,” as physicist Max Tegmark outlines just how careful we need to be as we travel down the AI path. Computer scientist Stuart Russell will then dig deeper into the value-alignment problem and explain its importance. We’ll hear from former Google CEO Eric Schmidt about the geopolitical realities of AI terrorism and weaponization. We’ll then touch the topic of consciousness as Sam and psychologist Paul Bloom turn the conversation to the ethical and psychological complexities of living alongside humanlike AI. Psychologist Alison Gopnik then reframes the general concept of intelligence to help us wonder if the kinds of systems we’re building using “Deep Learning” are really marching us towards our super-intelligent overlords. Finally, physicist David Deutsch will argue that many value-alignment fears about AI are based on a fundamental misunderstanding about how knowledge actually grows in this universe.
14 snips
Sep 28, 2023 • 1h 12min

HUMAN COMPATIBLE: Can We Control Artificial Intelligence?

Stuart Russell, an expert in artificial intelligence, discusses the rapid development of AI, the potential extinction of the human species, AGI, language models, specialized processes in the human brain, the potential of superintelligent AI, AI in education and science, and regulating dangerous technologies.
14 snips
Mar 3, 2023 • 58min

UC Berkeley’s Stuart Russell: “ChatGPT is a wake-up call”

The Sunday Times’ tech correspondent Danny Fortson brings on Stuart Russell, professor at UC Berkeley and one of the world’s leading experts on artificial intelligence (AI), to talk about working in the field for decades (4:00), AI’s Sputnik moment (7:45), why these programmes aren’t very good at learning (13:00), trying to inoculate ourselves against the idea that software is sentient (15:00), why super intelligence will require more breakthroughs (17:20), autonomous weapons (26:15), getting politicians to regulate AI in warfare (30:30), building systems to control intelligent machines (36:20), the self-driving car example (39:45), how he figured out how to beat AlphaGo (43:45), the paper clip example (49:50), and the first AI programme he wrote as a 13-year-old (55:45).
8 snips
Mar 7, 2023 • 2h 27min

#312 - The Trouble with AI

Sam Harris speaks with Stuart Russell and Gary Marcus about recent developments in artificial intelligence and the long-term risks of producing artificial general intelligence (AGI). They discuss the limitations of Deep Learning, the surprising power of narrow AI, ChatGPT, a possible misinformation apocalypse, the problem of instantiating human values, the business model of the Internet, the metaverse, digital provenance, using AI to control AI, the control problem, emergent goals, locking down core values, programming uncertainty about human values into AGI, the prospects of slowing or stopping AI progress, and other topics. Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book, Artificial Intelligence: A Modern Approach, co-authored with Peter Norvig, is the standard text in AI, used in 1500 universities in 135 countries. Russell is also the author of Human Compatible: Artificial Intelligence and the Problem of Control. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons. Website: https://people.eecs.berkeley.edu/~russell/ LinkedIn: www.linkedin.com/in/stuartjonathanrussell/
Gary Marcus is a scientist, best-selling author, and entrepreneur. He is well-known for his challenges to contemporary AI, anticipating many of the current limitations decades in advance, and for his research in human language development and cognitive neuroscience. He was Founder and CEO of Geometric Intelligence, a machine-learning company acquired by Uber in 2016. His most recent book, Rebooting AI, co-authored with Ernest Davis, is one of Forbes’s 7 Must Read Books in AI. His podcast, Humans versus Machines, will come later this spring. Website: garymarcus.com Twitter: @GaryMarcus
8 snips
Dec 1, 2021 • 58min

The Biggest Event in Human History

Stuart Russell explores the future of Artificial Intelligence and asks: how can we get our relationship with it right? Professor Russell is founder of the Centre for Human-Compatible Artificial Intelligence at the University of California, Berkeley. In this lecture he reflects on the birth of AI, tracing our thinking about it back to Aristotle. He outlines the definition of AI, its successes and failures, and the risks it poses for the future. Referencing the representation of AI systems in film and popular culture, Professor Russell examines whether our fears are well founded. He explains what led him, alongside previous Reith Lecturer Professor Stephen Hawking, to say that “success would be the biggest event in human history … and perhaps the last event in human history.” Stuart asks how this risk arises and whether it can be avoided, allowing humanity and AI to coexist successfully. This lecture and question-and-answer session was recorded at the Alan Turing Institute at the British Library in London. Presenter: Anita Anand Producer: Jim Frank Editor: Hugh Levinson Production Coordinator: Brenda Brown Sound: Neil Churchill and Hal Haines
7 snips
Apr 27, 2020 • 1h 27min

94 | Stuart Russell on Making Artificial Intelligence Compatible with Humans

Stuart Russell, an AI expert, proposes programming AI to learn human goals by observing human behavior. The conversation covers the challenges of implementing rational decision-making in AI, the prospect of artificial superintelligence, the potential risks of superintelligent AI, and epistemic uncertainty in AI systems.