
The Gradient: Perspectives on AI
Thomas Dietterich: From the Foundations
Podcast summary created with Snipd AI
Quick takeaways
- Professor Thomas Dietterich discusses the philosophy of science in relation to AI and the importance of grounding and observable evidence.
- He highlights the limits of statistical learning without causal reasoning and advocates for systematic understanding.
- Error-correcting output codes and ensemble methods in machine learning are explored, raising questions about why they work and about the relationship between statistical bias and variance.
- There is a need to develop alternative approaches to large language models, such as neurosymbolic architectures and competence models, to enhance AI systems' reasoning and planning abilities.
Deep dives
Introduction and Background
The podcast episode introduces The Gradient Podcast and its mission of bringing interesting voices in AI to listeners. It emphasizes a focus on stories, people, and ideas rather than trends. Listeners are encouraged to support the show through reviews and subscriptions, which help improve the show and potentially compensate the editorial staff.
Thomas Dietterich's Contributions
The episode features an interview with Professor Thomas Dietterich, who is regarded as a prominent researcher in AI. His work covers various areas, including knowledge level learning, developing high-reliability AI, and creating modular intelligent systems. Dietterich's insights shed light on the evolution of machine learning and the scientific paradigms that should guide future AI research.
Understanding the Knowledge Level
Dietterich, inspired by Allen Newell's concept of the knowledge level, discusses how attributing goals and knowledge to systems can help predict their behavior. He explores the instrumentalist perspective that focuses on a system's ability to achieve desired results rather than on whether it genuinely understands. Dietterich emphasizes the importance of building systems with deep, systematic understanding, rather than treating understanding as a binary concept.
Ensemble Methods and Error Correcting Output Codes
Dietterich's work on ensemble methods and error-correcting output codes is discussed. The idea behind error-correcting output codes is to recast a multi-class learning problem as a set of binary problems: each class is assigned a binary codeword, one binary classifier is trained per bit, and predictions are decoded to the nearest codeword. Because the codewords are well separated, the decoding step can correct mistakes made by individual classifiers and improve classification accuracy. The paper raises questions about why error-correcting codes work and about the relationship between statistical bias and variance in decision tree algorithms.
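As a rough illustration of that encode/train/decode loop, here is a minimal sketch; the function names, the choice of decision trees as base learners, and the shape conventions are assumptions made for the example, not details from the episode or the paper:

```python
# Minimal sketch of error-correcting output codes (ECOC) for multi-class
# classification. The code matrix and base learner are illustrative choices.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_ecoc(X, y, code_matrix):
    """Train one binary classifier per column (bit) of the code matrix.

    code_matrix has shape (n_classes, n_bits) with entries in {0, 1};
    row k is the codeword assigned to class k.
    """
    learners = []
    for bit in range(code_matrix.shape[1]):
        # Relabel every example with the bit that its class's codeword assigns.
        y_bit = code_matrix[y, bit]
        learners.append(DecisionTreeClassifier().fit(X, y_bit))
    return learners

def predict_ecoc(X, learners, code_matrix):
    """Decode by choosing the class whose codeword is nearest in Hamming distance."""
    bits = np.column_stack([clf.predict(X) for clf in learners])
    # Hamming distance from each predicted bit vector to each class codeword.
    dists = (bits[:, None, :] != code_matrix[None, :, :]).sum(axis=2)
    return dists.argmin(axis=1)
```

The design point is that the rows of the code matrix are chosen to be far apart in Hamming distance, so several individual bit errors can occur before the decoder picks the wrong class; this is what lets partially independent binary learners correct one another.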
The Limitations of Large Language Models
Large language models (LLMs) have several limitations. They are expensive to update and can produce unacceptable outputs, including hallucinations. LLMs are statistical models of knowledge bases rather than knowledge bases themselves: they lack the ability to reason and plan, and they have difficulty with self-awareness and with understanding social and ethical situations. While LLMs encode a vast amount of knowledge, they struggle to provide accurate and reliable answers. The goal is to build an AI system that is knowledgeable, self-aware, and capable of reasoning and planning.
Alternatives to LLMs
Instead of relying solely on large language models, there is a need to explore alternatives. One possibility is to develop neurosymbolic architectures where the knowledge base is an explicit data structure like a knowledge graph. LLMs can then serve as repositories of linguistic knowledge and common sense. By separating linguistic knowledge from factual knowledge, it becomes easier to update and ensure accuracy. Additionally, competence models can be built for LLMs to improve their performance. It is also crucial to integrate inference and learning processes to enhance the overall capabilities of these systems.
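A toy sketch of that separation might look like the following, where `facts` stands in for an explicit, editable knowledge graph and `verbalize` stands in for whatever LLM call renders a retrieved triple as prose; both names are hypothetical and introduced only for illustration:

```python
# Hypothetical sketch: factual knowledge lives in an explicit, editable store
# (a dict of (subject, relation) pairs standing in for a knowledge graph),
# while the language model is used only to phrase the retrieved fact.
facts = {
    ("Oregon State University", "located_in"): "Corvallis, Oregon",
}

def answer(subject, relation, verbalize):
    obj = facts.get((subject, relation))
    if obj is None:
        # The system can decline rather than confabulate when the fact is absent.
        return "I don't know."
    # `verbalize` stands in for any LLM call that turns a triple into a sentence.
    return verbalize(subject, relation, obj)

# Updating or correcting a fact is a data edit, not a retraining run.
facts[("Oregon State University", "founded_in")] = "1868"
```

The point of the sketch is only that accuracy and updates become operations on the explicit knowledge base, while the LLM supplies linguistic knowledge and common sense around it.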
Challenges in Building AI Systems
Building AI systems that encompass planning, metacognition, reasoning, and language understanding is a complex task. Each component presents its own challenges and requires intensive research and development. The ability to disentangle language from other cognitive processes and understand the boundary between linguistic knowledge and world knowledge is a key area of exploration. Additionally, the development of cognitive architectures that facilitate learning, retrieval, and integration of new information is crucial. Overall, the aim is to create AI systems that possess knowledge, reasoning abilities, self-awareness, and the capability to navigate complex social and ethical situations.
In episode 100 of The Gradient Podcast, Daniel Bashir speaks to Professor Thomas Dietterich.
Professor Dietterich is Distinguished Professor Emeritus in the School of Electrical Engineering and Computer Science at Oregon State University. He is a pioneer in the field of machine learning, and has authored more than 225 refereed publications and two books. His current research topics include robust artificial intelligence, robust human-AI systems, and applications in sustainability. He is a former President of the Association for the Advancement of Artificial Intelligence, and the founding President of the International Machine Learning Society. Other major roles include Executive Editor of the journal Machine Learning, co-founder of the Journal for Machine Learning Research, and program chair of AAAI 1990 and NIPS 2000. He currently serves as one of the moderators for the cs.LG category on arXiv.
Have suggestions for future podcast guests (or other feedback)? Let us know here or reach us at editor@thegradient.pub
Subscribe to The Gradient Podcast: Apple Podcasts | Spotify | Pocket Casts | RSS
Follow The Gradient on Twitter
Outline:
* (00:00) Episode 100 Note
* (02:03) Intro
* (04:23) Prof. Dietterich’s background
* (14:20) Kuhn and theory development in AI, how Prof Dietterich thinks about the philosophy of science and AI
* (20:10) Scales of understanding and sentience, grounding, observable evidence
* (23:58) Limits of statistical learning without causal reasoning, systematic understanding
* (25:48) A challenge for the ML community: testing for systematicity
* (26:13) Forming causal understandings of the world
* (28:18) Learning at the Knowledge Level
* (29:18) Background and definitions
* (32:18) Knowledge and goals, a note on LLMs
* (33:03) What it means to learn
* (41:05) LLMs as learning results of inference without learning first principles
* (43:25) System I/II thinking in humans and LLMs
* (47:23) “Routine Science”
* (47:38) Solving multiclass learning problems via error-correcting output codes
* (52:53) Error-correcting codes and redundancy
* (54:48) Why error-correcting codes work, contra intuition
* (59:18) Bias in ML
* (1:06:23) MAXQ for hierarchical RL
* (1:15:48) Computational sustainability
* (1:19:53) Project TAHMO’s moonshot
* (1:23:28) Anomaly detection for weather stations
* (1:25:33) Robustness
* (1:27:23) Motivating The Familiarity Hypothesis
* (1:27:23) Anomaly detection and self-models of competence
* (1:29:25) Measuring the health of freshwater streams
* (1:31:55) An open set problem in species detection
* (1:33:40) Issues in anomaly detection for deep learning
* (1:37:45) The Familiarity Hypothesis
* (1:40:15) Mathematical intuitions and the Familiarity Hypothesis
* (1:44:12) What’s Wrong with LLMs and What We Should Be Building Instead
* (1:46:20) Flaws in LLMs
* (1:47:25) The systems Prof Dietterich wants to develop
* (1:49:25) Hallucination/confabulation and LLMs vs knowledge bases
* (1:54:00) World knowledge and linguistic knowledge
* (1:55:07) End-to-end learning and knowledge bases
* (1:57:42) Components of an intelligent system and separability
* (1:59:06) Thinking through external memory
* (2:01:10) Outro
Links:
* Research — Fundamentals (Philosophy of AI)
* Learning at the Knowledge Level
* What Does it Mean for a Machine to Understand?
* Research – “Routine science”
* Ensemble methods in ML and error-correcting output codes
* Solving multiclass learning problems via error-correcting output codes
* An experimental comparison of bagging, boosting, and randomization
* ML Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms
* The definitive treatment of these questions, by Gareth James
* Discovering/Exploiting structure in MDPs:
* Exogenous State MDPs (paper with George Trimponias, slides)
* Research — Ecosystem Informatics and Computational Sustainability
* Challenges for ML in Computational Sustainability
* Research — Robustness
* Steps towards robust AI (AAAI President’s Address)
* Benchmarking NN Robustness to Common Corruptions and Perturbations with Dan Hendrycks
* The familiarity hypothesis: Explaining the behavior of deep open set methods
* Recent commentary
* What's Wrong with Large Language Models and What We Should Be Building Instead
Get full access to The Gradient at thegradientpub.substack.com/subscribe