

Explainability, Reasoning, Priors and GPT-3
Sep 16, 2020
Dr. Keith Duggar, MIT PhD and AI expert, joins for a captivating discussion on explainability in machine learning. They dive into Christoph Molnar's insights on interpretability and the intricacies of neural networks' reasoning. Duggar contrasts priors with experience, touches on core knowledge, and revisits critiques of deep learning from notable figures such as Gary Marcus. The conversation culminates in the ethical implications and challenges of GPT-3's reasoning, highlighting broader questions about machine intelligence and the future of AI.
AI Snips
Explainability Languages
- Explaining machine learning models requires a language, but current methods are too complex.
- They don't effectively communicate model behavior to the public.
Deceptive Decision Trees
- Decision trees offer deceptively good explanations by focusing on single nodes.
- People fixate on a single cause, much as someone replaying a car accident dwells on one regret, without understanding the full model.
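A minimal sketch of this point, assuming scikit-learn and a purely hypothetical toy dataset (not from the episode): the handful of splits a single sample follows reads like a tidy explanation, while the fitted tree as a whole is far larger than that one path suggests.

```python
# Sketch: a single decision path looks like a clean story,
# but it is only a sliver of the full fitted tree.
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Hypothetical toy data, standing in for any tabular problem.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
clf = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)

# The "explanation" people latch onto: the few nodes one sample passes through.
path = clf.decision_path(X[:1])
print("nodes on this sample's path:", path.indices.tolist())

# The model they believe they understood: every node in the tree.
print("total nodes in the tree:   ", clf.tree_.node_count)
```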
Confirmation Bias and Explanations
- Good explanations should align with prior beliefs, even if those beliefs are illogical.
- Confirmation bias in humans is analogous to a conservative learning rate in machine learning models.
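A toy illustration of the analogy, my own and not from the episode: with a very small (conservative) learning rate, a parameter barely moves even when repeated evidence contradicts its current value, which is the numeric counterpart of clinging to a prior belief.

```python
# Sketch: confirmation bias as a tiny learning rate.
def update(belief, evidence, lr):
    """One gradient-style step pulling `belief` toward `evidence`."""
    return belief + lr * (evidence - belief)

belief = 0.9    # strong prior, e.g. "feature X drives the outcome"
evidence = 0.1  # new data points the other way

for lr, label in [(0.5, "open-minded"), (0.01, "confirmation-biased")]:
    b = belief
    for _ in range(10):  # ten rounds of contradictory evidence
        b = update(b, evidence, lr)
    print(f"{label:>20} (lr={lr}): belief after 10 updates = {b:.3f}")
```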