
Brain Inspired BI 110 Catherine Stinson and Jessica Thompson: Neuro-AI Explanation
Jul 6, 2021
Catherine Stinson is a philosopher focused on AI and neuroscience; Jessica Thompson is a postdoc in cognitive neuroscience who studies explanation across both fields. They dive into how explanations in neuroscience and AI can be unified. Jessica advocates shifting focus from individual brain areas or models to phenomena shared across the two domains. They also discuss the balance between intelligibility and empirical fit in models, the role of philosophy in shaping scientific inquiry, and the importance of interdisciplinary collaboration for innovative research.
AI Snips
Focus On The Target Phenomenon
- Both papers converge on the importance of the explanation's target: the phenomenon or aspect that the model and the brain system both instantiate.
- Explanations work by relating model and target through a shared, specific kind or aspect.
Lab Work Crushes Armchair Certainty
- Catherine learned how hard lab science is after trying it herself and failing for years.
- That experience shifted her from armchair criticism to informed critique grounded in practice.
Understanding Requires Intelligibility
- Understanding differs from explanation: intelligibility combined with explanation produces scientific understanding.
- Intelligibility depends on a community's skills and background knowledge, not just on a formal description.

