
EdSurge Podcast
What Can AI Chatbots Teach Us About How Humans Learn?
Oct 27, 2024
Terrence Sejnowski, a biology professor at UC San Diego and a pioneer of neural network research, dives into the intriguing parallels between AI chatbots and the human brain. He discusses how AI reveals how limited our understanding of human cognition still is, and emphasizes the role of sleep in learning and memory. The episode also highlights chatbots' potential in education, the challenges they pose for traditional assessments, the complexities of human-AI interaction, and the importance of creating effective regulations as these technologies evolve.
 AI Snips 
AI and Neuroscience Convergence
- Modern AI and neuroscience are converging, revealing insights into brain function.
- Deep learning models, built from highly simplified versions of brain circuits, are improving our understanding of the cortex (see the sketch below).
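As a rough illustration of what "simplified brain structures" means here (a minimal Python sketch with toy numbers, not anything presented in the episode): a deep-learning unit reduces a biological neuron to a weighted sum of inputs passed through a nonlinearity.

```python
import numpy as np

def unit(inputs, weights, bias):
    """One artificial 'neuron': a weighted sum of inputs through a ReLU nonlinearity."""
    return np.maximum(0.0, inputs @ weights + bias)

x = np.array([0.2, -1.0, 0.5])   # incoming signals (loosely, dendritic inputs)
w = np.array([0.7, 0.1, 0.4])    # connection strengths (loosely, synaptic weights)
print(unit(x, w, bias=0.05))     # firing-rate-like output; deep nets stack millions of these units
```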
LLMs and Transformers
- Large language models (LLMs) like ChatGPT use transformers, a deep learning architecture with self-attention.
- Self-attention lets LLMs capture context and meaning while being trained simply to predict the next word in a sentence (see the sketch below).
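For readers unfamiliar with the terms in this snip, here is a minimal, self-contained sketch of the two ideas mentioned, scaled dot-product self-attention and next-word prediction over a vocabulary. The weights are random toy values, not ChatGPT's actual parameters or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Each token's vector becomes a weighted mix of every token's vector."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # how strongly each token attends to the others
    return softmax(scores, axis=-1) @ V

# Toy setup: a 4-token sentence already embedded into 8-dimensional vectors.
d_model, vocab_size, seq_len = 8, 50, 4
X = rng.normal(size=(seq_len, d_model))                    # stand-in token embeddings
Wq, Wk, Wv = [rng.normal(size=(d_model, d_model)) for _ in range(3)]
W_vocab = rng.normal(size=(d_model, vocab_size))           # projection to vocabulary logits

H = self_attention(X, Wq, Wk, Wv)                          # context-aware token vectors
next_word_probs = softmax(H[-1] @ W_vocab)                 # distribution over the next word
print(next_word_probs.argmax(), round(float(next_word_probs.max()), 3))
```

In a real transformer these weights are learned from vast text corpora and the attention and feed-forward blocks are stacked many times; the sketch only shows the data flow the snip refers to.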
LLMs for Creative Writing
- Jeff Young discusses his experience using LLMs for creative writing, such as generating weather reports in the style of Joyce Carol Oates.
- He questions how these models can move beyond parlor tricks to become deeper tools for learning.
