
Machine Learning Street Talk (MLST) #103 - Prof. Edward Grefenstette - Language, Semantics, Philosophy
Feb 11, 2023
Edward Grefenstette, Head of Machine Learning at Cohere and Honorary Professor at UCL, delves into the fascinating intersection of language, semantics, and philosophy. He discusses the complexities of understanding semantics in AI, particularly in moral contexts, and highlights the significance of Reinforcement Learning from Human Feedback (RLHF) for enhancing model performance. Grefenstette also tackles deep learning's 'Swiss cheese problem' and explores philosophical insights on intelligence, agency, and the nature of creativity in relation to AI.
AI Snips
Differential Semantics in LLMs
- Large language models capture the semantic nuances of concepts like "evil" by reflecting the diversity of human language use.
- This raises questions about whether LLMs merely mimic or truly understand these concepts.
Conceptual Abstraction in LLMs
- Language models appear to reuse human concepts, enabling meaningful conversation.
- For an LLM to communicate a genuinely novel concept, its contribution would need to be both surprising and pragmatically valuable.
LLMs and Human Conversation
- Large language models, like humans, are products of data and experiences.
- This similarity blurs the line between talking to an LLM and talking to another human, raising questions about scale and free will.