
Murray Shanahan

Professor of Cognitive Robotics at Imperial College London, known for his work on consciousness and AI, and for consulting on the movie Ex Machina.

Top 5 podcasts with Murray Shanahan

Ranked by the Snipd community
42 snips
Jul 14, 2024 • 2h 15min

Prof. Murray Shanahan - Machines Don't Think Like Us

Prof. Murray Shanahan, from Imperial College London and DeepMind, challenges assumptions about AI consciousness. Topics include dangers of anthropomorphizing AI, limitations of describing AI capabilities, and the intersection of philosophy and artificial intelligence.
38 snips
Oct 23, 2024 • 45min

Nature of Intelligence, Ep. 3: What kind of intelligence is an LLM?

Murray Shanahan, a cognitive robotics expert at Google DeepMind, teams up with Harvard's Tomer Ullman, who focuses on cognition and development. They dive into what distinguishes human intelligence from that of large language models. The discussion unpacks the misconceptions of LLMs as intelligent beings, addressing their 'hallucinations' and inability to genuinely discern truth. They also ponder the alignment problem in AI and question whether LLMs embody real consciousness or merely simulate human-like behavior.
14 snips
Aug 20, 2019 • 33min

AI, Robot

Forget what sci-fi has told you about superintelligent robots that are uncannily human-like; the reality is more prosaic. Inside DeepMind's robotics laboratory, Hannah explores what researchers call 'embodied AI': robot arms that are learning tasks like picking up plastic bricks, which humans find comparatively easy. Discover the cutting-edge challenges of bringing AI and robotics together, and of learning from scratch how to perform tasks. She also explores some of the key questions about using AI safely in the real world.

If you have a question or feedback on the series, message us on Twitter (@DeepMind using the hashtag #DMpodcast) or email us at podcast@deepmind.com.

Further reading:
- Blogs on AI safety and further resources from Victoria Krakovna
- The Future of Life Institute: The risks and benefits of AI
- The Wall Street Journal: Protecting Against AI's Existential Threat
- TED Talks: Max Tegmark, How to get empowered, not overpowered, by AI
- Royal Society lecture series sponsored by DeepMind: You & AI
- Nick Bostrom: Superintelligence: Paths, Dangers and Strategies (book)
- OpenAI: Learning from Human Preferences
- DeepMind blog: Learning from human preferences
- DeepMind blog: Learning by playing, how robots can tidy up after themselves
- DeepMind blog: AI safety

Interviewees: Software engineer Jackie Kay and research scientists Murray Shanahan, Victoria Krakovna, Raia Hadsell and Jan Leike.

Credits:
- Presenter: Hannah Fry
- Editor: David Prest
- Senior Producer: Louisa Field
- Producers: Amy Racs, Dan Hardoon
- Binaural Sound: Lucinda Mason-Brown
- Music composition: Eleni Shaw (with help from Sander Dieleman and WaveNet)
- Commissioned by DeepMind
12 snips
Sep 10, 2024 • 1h 20min

Murray Shanahan: What are Conscious Exotica? Consciousness, AI & the Space of Possible Minds

Murray Shanahan, a Professor of Cognitive Robotics at Imperial College London and a principal research scientist at Google DeepMind, dives deep into the intersection of consciousness and artificial intelligence. He explores whether the robot Ava from 'Ex Machina' was truly conscious, critiques the Turing Test, and discusses the concept of 'Conscious Exotica.' The conversation also touches on the Attention Schema Theory and the implications of embodied cognition for AI, rounding off with thoughts on the future of consciousness in a technological age.
10 snips
Mar 13, 2023 • 1h 44min

#107 - Dr. RAPHAËL MILLIÈRE - Linguistics, Theory of Mind, Grounding

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5

Dr. Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. His research draws on his expertise in philosophy and cognitive science to explore the implications of recent progress in deep learning for models of human cognition, as well as various issues in ethics and aesthetics. He is also investigating what underlies the capacity to represent oneself as oneself at a fundamental level, in humans and non-human animals, as well as the role that self-representation plays in perception, action, and memory. In a world where technology is rapidly advancing, Dr. Millière is striving to gain a better understanding of how artificial neural networks work, and to establish fair and meaningful comparisons between humans and machines in various domains in order to shed light on the implications of artificial intelligence for our lives.

https://www.raphaelmilliere.com/
https://twitter.com/raphaelmilliere

Here is a version with hesitation sounds like "um" removed, if you prefer: https://share.descript.com/view/aGelyTl2xpN
YT: https://www.youtube.com/watch?v=fhn6ZtD6XeE

TOC:
- Intro to Raphael [00:00:00]
- Intro: Moving Beyond Mimicry in Artificial Intelligence (Raphael Millière) [00:01:18]
- Show kick-off [00:07:10]
- LLMs [00:08:37]
- Semantic Competence/Understanding [00:18:28]
- Forming Analogies/JPG Compression Article [00:30:17]
- Compositional Generalisation [00:37:28]
- Systematicity [00:47:08]
- Language of Thought [00:51:28]
- BIG-bench (Conceptual Combinations) [00:57:37]
- Symbol Grounding [01:11:13]
- World Models [01:26:43]
- Theory of Mind [01:30:57]

Refs (truncated; full list in the YT video description):
- Moving Beyond Mimicry in Artificial Intelligence (Raphael Millière) https://nautil.us/moving-beyond-mimicry-in-artificial-intelligence-238504/
- On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (Bender et al.) https://dl.acm.org/doi/10.1145/3442188.3445922
- ChatGPT Is a Blurry JPEG of the Web (Ted Chiang) https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web
- The Debate Over Understanding in AI's Large Language Models (Melanie Mitchell) https://arxiv.org/abs/2210.13966
- Talking About Large Language Models (Murray Shanahan) https://arxiv.org/abs/2212.03551
- Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender) https://aclanthology.org/2020.acl-main.463/
- The symbol grounding problem (Stevan Harnad) https://arxiv.org/html/cs/9906002
- Why the Abstraction and Reasoning Corpus is interesting and important for AI (Mitchell) https://aiguide.substack.com/p/why-the-abstraction-and-reasoning
- Linguistic relativity (Sapir–Whorf hypothesis) https://en.wikipedia.org/wiki/Linguistic_relativity
- Cooperative principle (Grice's four maxims of conversation: quantity, quality, relation, and manner) https://en.wikipedia.org/wiki/Cooperative_principle