
Machine Learning Street Talk (MLST)
MLST #78 - Prof. NOAM CHOMSKY (Special Edition)
Podcast summary created with Snipd AI
Quick takeaways
- Language acquisition is genetically pre-programmed, challenging behaviorist views on learning.
- Symbolic manipulation plays a crucial role in human cognition, distinguishing human intelligence.
- Humans face limitations in understanding the universe at certain scales, requiring pragmatic modeling.
- Mathematics acts as a bridge between physical reality and cognitive abstractions.
- Human creativity and free will raise philosophical questions about determinism and cognition.
- Focus should be on fundamental questions of language and cognition rather than engineering applications.
Deep dives
The Pre-Programmed Nature of Language Acquisition
Noam Chomsky argued that language acquisition is genetically pre-programmed and that all human languages share a basic structure. This challenges the behaviorist view that language learning is solely based on empirical stimulus and response.
Chomsky's Critique of Behaviorist Psychology
Chomsky criticized behaviorist psychologists for their belief that humans are blank slates and that learning is based solely on reinforcement. He argued that this cannot explain the rapid and universal acquisition of language at a young age.
The Importance of Symbolic Manipulation
Chomsky and Walid Saba emphasize the role of symbolic manipulation in human cognition. They argue that this ability to manipulate symbols and create abstractions sets human intelligence apart and cannot be explained by empirical learning alone.
The Limitations of Intelligibility in Understanding the Universe
The podcast explores the limits of human understanding of the universe, noting that there are scales or regimes of physics where our common-sense mechanical intuitions do not apply. The speaker references concepts like action at a distance and curved spacetime to emphasize that these levels are not intelligible to our intuitions. Despite these limitations, humans can understand and model the world to a pragmatic degree, as evidenced by technological advancement. The podcast raises the question of whether there are inherent limits to human cognition, drawing a parallel to rats' inability to solve prime-number mazes. It concludes by suggesting that embracing the mystery and expanding our creativity and holistic approaches may lead to a broader understanding of the universe.
The Role of Mathematics as a Bridge between Reality and Abstractions
The podcast discusses the role of mathematics as a bridge between the mysterious and unintelligible aspects of physical reality and the abstractions and concepts that exist within our cognition. It highlights that mathematics allows us to create models and theories that can represent elements of reality, even if those models themselves may not be fully intelligible. This reflective process of constructing mathematical representations enables human beings to better understand complex phenomena and engage in scientific exploration. However, the podcast also acknowledges that there are diverse levels of abstraction and multiple interpretations within the scientific community that challenge the notion of a unified intelligibility across all theories and models.
The Mysterious Nature of Human Creativity and Free Will
The podcast delves into the enigmatic aspects of human creativity and free will. It acknowledges the ability of humans to generate novel ideas and linguistic expressions that are not directly caused by external stimuli. The podcast explores the philosophical implications of a deterministic or indeterministic worldview and its impact on the concept of free will. It also emphasizes the role of language in human cognition and the extraordinary capabilities of human minds to comprehend and communicate abstract concepts. The podcast raises questions about the origin of these creative abilities and the limits of human cognition in grasping the metaphysical underpinnings of reality.
The Limitations of Large Language Models
Large language models like GPT-3 are hyped and heavily invested in, but they have achieved zero in terms of understanding language. They may have engineering applications, such as transcription, but they do not contribute to scientific understanding. The focus should be on the fundamental questions about the nature of language and cognition.
The Critique of Connectionism and Deep Learning
The critique of connectionism, as presented by Fodor and Pylyshyn, remains valid. Deep learning and connectionist approaches are engineering techniques that have useful applications, but they do not provide insights into the nature of intelligence or language. The field should move beyond these approaches to address fundamental questions.
The Limitations of Replicating Human Cognition in Silico
The possibility of replicating human cognition in digital circuits alone or through hybrid approaches is uncertain. While there may be interesting work in the intersection of neuroscience and AI, such as quantum properties and neurosymbolic models, the focus should remain on understanding the basic elements of cognition and investigating the nature of language.
Language and Thought: Understanding the Nature of Science
The podcast episode explores the shift in science from seeking an intelligible universe to focusing on intelligible theories. It highlights how scientists no longer prioritize finding an intelligible universe but rather aim to develop intelligible theories about the universe. The podcast also discusses the mysteries that remain unsolved, such as motion and the construction and communication of thoughts. Furthermore, it delves into the connection between universal grammar, the language of thought, and the study of semantics. The episode concludes by emphasizing the need for deep explanations and the pursuit of genuine understanding in scientific research.
Challenges in Language Science and Philosophy
The podcast episode touches upon the misconceptions and misunderstandings in language science and philosophy. It explores Plato's problem, Darwin's problem, and Descartes' problem as examples of commonly misunderstood concepts. Additionally, it highlights the limitations in studying the neural basis of language and the difficulty of answering questions about consciousness. The episode concludes by discussing the potential for future research in understanding language structure, the nature of particles, and the connection between matter and consciousness.
Patreon: https://www.patreon.com/mlst
Discord: https://discord.gg/ESrGqhf5CB
In this special edition episode, we have a conversation with Prof. Noam Chomsky, the father of modern linguistics and the most important intellectual of the 20th century.
With a career spanning the better part of a century, we took the chance to ask Prof. Chomsky his thoughts not only on the progress of linguistics and cognitive science but also the deepest enduring mysteries of science and philosophy as a whole - exploring what may lie beyond our limits of understanding. We also discuss the rise of connectionism and large language models, our quest to discover an intelligible world, and the boundaries between silicon and biology.
We explore some of the profound misunderstandings of linguistics in general, and of Chomsky's own work specifically, which have persisted at the highest levels of academia for over sixty years.
We have produced a significant introduction section in which we discuss in detail Yann LeCun's recent position paper on AGI, a recent paper on emergent abilities in LLMs, empiricism as it relates to cognitive science, cognitive templates, "the ghost in the machine", and language.
Panel:
Dr. Tim Scarfe
Dr. Keith Duggar
Dr. Walid Saba
YT version: https://youtu.be/-9I4SgkHpcA
00:00:00 Kick off
00:02:24 C1: LeCun's recent position paper on AI, JEPA, Schmidhuber, EBMs
00:48:38 C2: Emergent abilities in LLMs paper
00:51:32 C3: Empiricism
01:25:33 C4: Cognitive Templates
01:35:47 C5: The Ghost in the Machine
01:59:21 C6: Connectionism and Cognitive Architecture: A Critical Analysis by Fodor and Pylyshyn
02:19:25 C7: We deep-faked Chomsky
02:29:11 C8: Language
02:34:41 C9: Chomsky interview kick-off!
02:35:39 Large Language Models such as GPT-3
02:39:14 Connectionism and radical empiricism
02:44:44 Hybrid systems such as neurosymbolic
02:48:47 Computationalism silicon vs biological
02:53:28 Limits of human understanding
03:00:46 Semantics state-of-the-art
03:06:43 Universal grammar, I-Language, and language of thought
03:16:27 Profound and enduring misunderstandings
03:25:41 Greatest remaining mysteries science and philosophy
03:33:10 Debrief and 'Chuckles' from Chomsky