
Melanie Mitchell

Davis Professor at the Santa Fe Institute, known for her research on analogy-making and conceptual abstraction in AI. Co-host of the Complexity podcast.

Top 5 podcasts with Melanie Mitchell

Ranked by the Snipd community
69 snips
Dec 15, 2022 • 55min

Melanie Mitchell: Abstraction and Analogy in AI

Have suggestions for future podcast guests (or other feedback)? Let us know here!

In episode 53 of The Gradient Podcast, Daniel Bashir speaks to Professor Melanie Mitchell. Professor Mitchell is the Davis Professor at the Santa Fe Institute. Her research focuses on conceptual abstraction, analogy-making, and visual recognition in AI systems. She is the author or editor of six books, and her work spans the fields of AI, cognitive science, and complex systems. Her latest book is Artificial Intelligence: A Guide for Thinking Humans.

Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS. Follow The Gradient on Twitter.

Outline:

* (00:00) Intro
* (02:20) Melanie's intro to AI
* (04:35) Melanie's intellectual influences, AI debates over time
* (10:50) We don't have the right metrics for empirical study in AI
* (15:00) Why AI is Harder Than We Think: the four fallacies
* (20:50) Difficulties in understanding what's difficult for machines vs. humans
* (23:30) Roles for humanlike and non-humanlike intelligence
* (27:25) Whether "intelligence" is a useful word
* (31:55) Melanie's thoughts on modern deep learning advances, brittleness
* (35:35) Abstraction, analogies, and their role in AI
* (38:40) Concepts as analogical and what that means for cognition
* (41:25) Where analogy bottoms out
* (44:50) Cognitive science approaches to concepts
* (45:20) Understanding how to form and use concepts is one of the key problems in AI
* (46:10) Approaching abstraction and analogy; Melanie's work and the Copycat architecture
* (49:50) Probabilistic program induction as a promising approach to intelligence
* (52:25) Melanie's advice for aspiring AI researchers
* (54:40) Outro

Links:

* Melanie's homepage and Twitter
* Papers
  * Difficulties in AI, hype cycles
    * Why AI is Harder Than We Think
    * The Debate Over Understanding in AI's Large Language Models
    * What Does It Mean for AI to Understand?
  * Abstraction, analogies, and reasoning
    * Abstraction and Analogy-Making in Artificial Intelligence
    * Evaluating understanding on conceptual abstraction benchmarks

Get full access to The Gradient at thegradientpub.substack.com/subscribe
20 snips
Dec 28, 2019 • 1h 53min

Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI

Melanie Mitchell is a professor of computer science at Portland State University and an external professor at the Santa Fe Institute. She has worked on and written about artificial intelligence from fascinating perspectives, including adaptive complex systems, genetic algorithms, and the Copycat cognitive architecture, which places the process of analogy-making at the core of human cognition. From her doctoral work with her advisors Douglas Hofstadter and John Holland to today, she has contributed many important ideas to the field of AI, including her recent book, simply called Artificial Intelligence: A Guide for Thinking Humans.

This conversation is part of the Artificial Intelligence podcast. For more information about this podcast, go to https://lexfridman.com/ai or connect with @lexfridman on Twitter, LinkedIn, Facebook, Medium, or YouTube, where you can watch the video versions of these conversations. If you enjoy the podcast, please rate it 5 stars on Apple Podcasts, follow on Spotify, or support it on Patreon.

This episode is presented by Cash App. Download it (App Store, Google Play), use code "LexPodcast".

Episode Links:
AI: A Guide for Thinking Humans (book)

Here's the outline of the episode. On some podcast players you should be able to click the timestamp to jump to that time.

00:00 – Introduction
02:33 – The term "artificial intelligence"
06:30 – Line between weak and strong AI
12:46 – Why have people dreamed of creating AI?
15:24 – Complex systems and intelligence
18:38 – Why are we bad at predicting the future with regard to AI?
22:05 – Are fundamental breakthroughs in AI needed?
25:13 – Different AI communities
31:28 – Copycat cognitive architecture
36:51 – Concepts and analogies
55:33 – Deep learning and the formation of concepts
1:09:07 – Autonomous vehicles
1:20:21 – Embodied AI and emotion
1:25:01 – Fear of superintelligent AI
1:36:14 – Good test for intelligence
1:38:09 – What is complexity?
1:43:09 – Santa Fe Institute
1:47:34 – Douglas Hofstadter
1:49:42 – Proudest moment
18 snips
Jul 25, 2021 • 2h 31min

#57 - Prof. Melanie Mitchell - Why AI is harder than we think

Since its beginning in the 1950s, the field of artificial intelligence has vacillated between periods of optimistic predictions and massive investment and periods of disappointment, loss of confidence, and reduced funding. Even with today's seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. Professor Melanie Mitchell thinks one reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.

YouTube video: https://www.youtube.com/watch?v=A8m1Oqz2HKc
Main show kick-off: [00:26:51]
Panel: Dr. Tim Scarfe, Dr. Keith Duggar, Letitia Parcalabescu (https://www.youtube.com/c/AICoffeeBreak/)
15 snips
Sep 10, 2023 • 1h 2min

Prof. Melanie Mitchell 2.0 - AI Benchmarks are Broken!

Prof. Melanie Mitchell argues that claims about AI systems' capabilities should be tested with rigorous experimental methods, and that popular benchmarks need to evolve accordingly. Large language models still lack common sense and fail at simple tasks, so we need more granular testing focused on generalization. She contends that intelligence is situated, domain-specific, and grounded in physical experience, and that attempts to extract "pure" intelligence may not work; AI research needs a stronger focus on proper experimental methods.
10 snips
Mar 13, 2023 • 1h 44min

#107 - Dr. RAPHAËL MILLIÈRE - Linguistics, Theory of Mind, Grounding

Support us! https://www.patreon.com/mlst
MLST Discord: https://discord.gg/aNPkGUQtc5

Dr. Raphaël Millière is the 2020 Robert A. Burt Presidential Scholar in Society and Neuroscience in the Center for Science and Society, and a Lecturer in the Philosophy Department at Columbia University. His research draws on his expertise in philosophy and cognitive science to explore the implications of recent progress in deep learning for models of human cognition, as well as various issues in ethics and aesthetics. He is also investigating what underlies the capacity to represent oneself as oneself at a fundamental level, in humans and non-human animals, and the role that self-representation plays in perception, action, and memory. In a world where technology is rapidly advancing, Dr. Millière strives to better understand how artificial neural networks work, and to establish fair and meaningful comparisons between humans and machines in various domains, in order to shed light on the implications of artificial intelligence for our lives.

https://www.raphaelmilliere.com/
https://twitter.com/raphaelmilliere

Here is a version with hesitation sounds like "um" removed, if you prefer (I didn't notice them personally): https://share.descript.com/view/aGelyTl2xpN

YT: https://www.youtube.com/watch?v=fhn6ZtD6XeE

TOC:
Intro to Raphael [00:00:00]
Intro: Moving Beyond Mimicry in Artificial Intelligence (Raphaël Millière) [00:01:18]
Show kick-off [00:07:10]
LLMs [00:08:37]
Semantic Competence/Understanding [00:18:28]
Forming Analogies / JPEG Compression Article [00:30:17]
Compositional Generalisation [00:37:28]
Systematicity [00:47:08]
Language of Thought [00:51:28]
BIG-bench (Conceptual Combinations) [00:57:37]
Symbol Grounding [01:11:13]
World Models [01:26:43]
Theory of Mind [01:30:57]

Refs (truncated; full list in the YT video description):

Moving Beyond Mimicry in Artificial Intelligence (Raphaël Millière)
https://nautil.us/moving-beyond-mimicry-in-artificial-intelligence-238504/

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (Bender et al.)
https://dl.acm.org/doi/10.1145/3442188.3445922

ChatGPT Is a Blurry JPEG of the Web (Ted Chiang)
https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web

The Debate Over Understanding in AI's Large Language Models (Melanie Mitchell)
https://arxiv.org/abs/2210.13966

Talking About Large Language Models (Murray Shanahan)
https://arxiv.org/abs/2212.03551

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data (Bender & Koller)
https://aclanthology.org/2020.acl-main.463/

The Symbol Grounding Problem (Stevan Harnad)
https://arxiv.org/html/cs/9906002

Why the Abstraction and Reasoning Corpus is interesting and important for AI (Mitchell)
https://aiguide.substack.com/p/why-the-abstraction-and-reasoning

Linguistic relativity (Sapir–Whorf hypothesis)
https://en.wikipedia.org/wiki/Linguistic_relativity

Cooperative principle (Grice's four maxims of conversation: quantity, quality, relation, and manner)
https://en.wikipedia.org/wiki/Cooperative_principle