

#83 Dr. ANDREW LAMPINEN (DeepMind) - Natural Language, Symbols and Grounding [NEURIPS2022 UNPLUGGED]
Dec 4, 2022
Dr. Andrew Lampinen, a DeepMind researcher specializing in natural language understanding and reinforcement learning, dives deep into the complexities of AI language models. He explores the grounding problem and critiques commonly drawn distinctions between AI and human cognitive abilities. The discussion covers philosophical debates on human agency, the nuances of syntax versus semantics, and shifting perspectives on deep learning's role in language comprehension. Lampinen also highlights the intricacies of compositionality and the significance of embodied learning in AI.
AI Snips
Grounding Language Models
- Language models can learn anything testable through language alone.
- Grounding language models in real-world contexts remains a challenge.
Agency and Free Will
- The question of agency in language models is linked to philosophical debates on free will.
- Andrew Lampinen questions whether a language model following instructions differs meaningfully from a human acting deterministically.
Language in Reinforcement Learning
- Andrew Lampinen highlights examples such as HELM, ReAct, and SayCan.
- These examples illustrate varied approaches to integrating language models with reinforcement learning (see the sketch below).
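
To make the grounding idea concrete, here is a minimal SayCan-style sketch: the next skill is chosen by combining a language model's plausibility score for each candidate with an affordance estimate of whether that skill can actually succeed in the current state. This is illustrative only, not code from the episode or from any of the papers mentioned; `lm_logprob` and `affordance_value` are hypothetical stand-ins implemented as toy heuristics so the example runs without a model or robot.

```python
import math

# Candidate low-level skills the agent could execute next (toy set).
CANDIDATE_SKILLS = [
    "pick up the sponge",
    "go to the sink",
    "pick up the apple",
]

def lm_logprob(instruction: str, skill: str) -> float:
    """Hypothetical stand-in for a language model scoring log P(skill | instruction).
    Here: a toy word-overlap heuristic so the example is self-contained."""
    inst_words = set(instruction.lower().split())
    skill_words = set(skill.lower().split())
    overlap = len(inst_words & skill_words)
    return math.log(overlap + 1e-3)

def affordance_value(skill: str, state: dict) -> float:
    """Hypothetical stand-in for a value function estimating whether the
    skill can succeed from the current state (this is what grounds the LM)."""
    target_object = skill.split()[-1]
    return 1.0 if target_object in state["visible_objects"] else 0.1

def select_skill(instruction: str, state: dict) -> str:
    # SayCan-style combination: LM plausibility multiplied by affordance.
    scores = {
        skill: math.exp(lm_logprob(instruction, skill)) * affordance_value(skill, state)
        for skill in CANDIDATE_SKILLS
    }
    return max(scores, key=scores.get)

state = {"visible_objects": {"sponge", "sink"}}
print(select_skill("clean the table with the sponge", state))
# -> "pick up the sponge"
```

The product of the two scores is the grounding step: a skill the language model finds linguistically plausible is still rejected if the affordance estimate says it cannot succeed from the current state.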