Prof. Melanie Mitchell 2.0 - AI Benchmarks are Broken!
Sep 10, 2023
Prof. Melanie Mitchell argues for rigorously testing AI systems' capabilities using proper experimental methods, and for evolving popular benchmarks as capabilities improve. Large language models lack common sense and fail at simple tasks, so more granular testing focused on generalization is needed. Intelligence is situated, domain-specific, and grounded in physical experience, which suggests that extracting "pure" intelligence may not work.
AI understanding is ill-defined and multidimensional, requiring proper experimental methods for testing capabilities.
Large language models lack common sense and do not possess human-like conceptual knowledge.
Benchmarking AI systems should focus on granular testing and instance-level failure modes to gain a deeper understanding.
Deep dives
The importance of refining our notions of understanding in AI
In this podcast episode, the speaker discusses the need to refine our notions of key concepts in AI, such as understanding and intelligence. AI systems often exhibit specific skills and capabilities, but their understanding is typically limited and context-dependent. This challenges traditional benchmarks and calls for a more nuanced approach to evaluating AI systems. The importance of experimental method and rigorous testing is emphasized, particularly for determining the genuine capabilities and limitations of AI systems.
The limitations of large language models
The podcast explores the limitations of large language models, such as GPT-4, in understanding language and the world. It is argued that these models can demonstrate impressive performance on specific tasks, but their knowledge is often shallow and falls short of genuine understanding. Their reliance on statistical relationships between words hampers their ability to form a grounded causal model of reality. The discussion highlights the need to reassess how AI systems are evaluated, moving towards more focused and granular testing that emphasizes abstract generalization.
The challenges of benchmarking AI systems
The podcast delves into the challenges of benchmarking AI systems, including the issue of information leakage and the limitations of current benchmarks. It is argued that benchmarks often fail to provide comprehensive insights into the capabilities and failures of AI systems. The speaker emphasizes the need to look beyond aggregate benchmark scores and to report instance-level failure modes, in order to understand how and why things go wrong. This level of analysis and reporting will improve our understanding of AI systems.
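As a loose illustration of that kind of reporting (not anything shown in the episode), here is a minimal sketch, with hypothetical task categories and field names, of keeping per-instance records alongside the usual aggregate accuracy:

```python
# Minimal sketch of instance-level benchmark reporting (hypothetical data and fields).
# Instead of publishing only aggregate accuracy, keep a per-instance record so
# failure modes can be grouped and inspected later.
from collections import Counter

results = [
    # each entry: (instance_id, category, model_answer, gold_answer)
    ("q001", "spatial_reasoning", "left", "left"),
    ("q002", "spatial_reasoning", "left", "right"),
    ("q003", "negation", "yes", "no"),
    ("q004", "counting", "4", "4"),
]

aggregate_accuracy = sum(pred == gold for _, _, pred, gold in results) / len(results)
print(f"aggregate accuracy: {aggregate_accuracy:.2f}")

# Instance-level view: which categories fail, and on which items.
failures = [(iid, cat) for iid, cat, pred, gold in results if pred != gold]
print("failures by category:", Counter(cat for _, cat in failures))
print("failed instance ids:", [iid for iid, _ in failures])
```

The point is simply that the per-instance records, grouped by category, are what make failure modes visible; the aggregate number alone hides them.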
Examining scaling in complex systems
The podcast discusses the concept of scaling in complex systems, particularly in the context of cities. The scaling of cities is explored in terms of factors such as innovation, energy usage, and happiness. The challenges of measuring and interpreting these scaling phenomena are highlighted, but new opportunities arise with the availability of massive data sets and tracking technologies. The discussion reveals the potential for the emerging field of the science of cities to provide insights into social systems and collective intelligence.
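For context, the urban scaling results alluded to here are usually summarized as a power law Y ≈ Y0 · N^β, where N is city population and β is reported in the urban scaling literature (e.g. Bettencourt and West) as slightly above 1 for socioeconomic outputs such as innovation and wages. A minimal sketch of how such an exponent is estimated, using synthetic numbers purely for illustration:

```python
# Minimal sketch of estimating an urban scaling exponent beta in Y ~ Y0 * N**beta.
# Data are synthetic and illustrative only; real analyses use city-level datasets.
import numpy as np

rng = np.random.default_rng(0)
population = np.array([5e4, 2e5, 8e5, 3e6, 9e6])   # city populations N
true_beta, y0 = 1.15, 0.02                          # assumed superlinear exponent for illustration
output = y0 * population**true_beta * rng.lognormal(0, 0.1, population.size)  # e.g. patents

# Fit a straight line in log-log space: log Y = log Y0 + beta * log N
beta_hat, log_y0_hat = np.polyfit(np.log(population), np.log(output), 1)
print(f"estimated beta ~ {beta_hat:.2f}, Y0 ~ {np.exp(log_y0_hat):.3f}  (beta > 1 means superlinear scaling)")
```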
Exploring the complexity of intelligence
The podcast touches upon the complexity of intelligence and the challenges in understanding and replicating it in AI systems. The speaker acknowledges that while there are computational aspects to intelligence and the brain can be seen as a special kind of computer, intelligence is deeply intertwined with an organism's body, environment, and specific adaptations. The discussion emphasizes the need to go beyond brute force approaches and focus on a deeper understanding of cognitive processes and abstractions.
Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB
Prof. Melanie Mitchell argues that the concept of "understanding" in AI is ill-defined and multidimensional - we can't simply say an AI system does or doesn't understand. She advocates for rigorously testing AI systems' capabilities using proper experimental methods from cognitive science. Popular benchmarks for intelligence often rely on the assumption that if a human can perform a task, an AI that performs the task must have human-like general intelligence. But benchmarks should evolve as capabilities improve.
Large language models show surprising skill on many human tasks but lack common sense and fail at simple things young children can do. Their knowledge comes from statistical relationships in text, not grounded concepts about the world. We don't know if their internal representations actually align with human-like concepts. More granular testing focused on generalization is needed.
There are open questions around whether large models' abilities constitute a fundamentally different non-human form of intelligence based on vast statistical correlations across text. Mitchell argues intelligence is situated, domain-specific and grounded in physical experience and evolution. The brain computes but in a specialized way honed by evolution for controlling the body. Extracting "pure" intelligence may not work.
Other key points:
- Need more focus on proper experimental method in AI research. Developmental psychology offers examples for rigorous testing of cognition.
- Reporting instance-level failures rather than just aggregate accuracy can provide insights.
- Scaling laws are an interesting area of complex systems science, with applications to understanding cities.
- Concepts like "understanding" and "intelligence" in AI force refinement of fuzzy definitions.
- Human intelligence may be more collective and social than we realize. AI forces us to rethink concepts we apply anthropomorphically.
The overall emphasis is on rigorously building the science of machine cognition through proper experimentation and benchmarking as we assess emerging capabilities.
TOC:
[00:00:00] Introduction and Munk AI Risk Debate Highlights
[00:05:00] Douglas Hofstadter on AI Risk
[00:06:56] The Complexity of Defining Intelligence
[00:11:20] Examining Understanding in AI Models
[00:16:48] Melanie's Insights on AI Understanding Debate
[00:22:23] Unveiling ConceptARC
[00:27:57] AI Goals: A Human vs Machine Perspective
[00:31:10] Addressing the Extrapolation Challenge in AI
[00:36:05] Brain Computation: The Human-AI Parallel
[00:38:20] The ARC Challenge: Implications and Insights
[00:43:20] The Need for Detailed AI Performance Reporting
[00:44:31] Exploring Scaling in Complexity Theory
Errata:
Note: around the 39-minute mark, Tim said that a recent Stanford/DM paper modelling ARC "on GPT-4 got around 60%". This is not correct; he misremembered. It was actually davinci3, and around 10%, which is still extremely good for a blank-slate approach with an LLM and no ARC-specific knowledge. Folks on our forum couldn't reproduce the result. See the paper linked below.
Books (MUST READ):
Artificial Intelligence: A Guide for Thinking Humans (Melanie Mitchell)
https://www.amazon.co.uk/Artificial-Intelligence-Guide-Thinking-Humans/dp/B07YBHNM1C/?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=44ccac78973f47e59d745e94967c0f30&camp=1634&creative=6738
Complexity: A Guided Tour (Melanie Mitchell)
https://www.amazon.co.uk/Audible-Complexity-A-Guided-Tour?&_encoding=UTF8&tag=mlst00-21&linkCode=ur2&linkId=3f8bd505d86865c50c02dd7f10b27c05&camp=1634&creative=6738