Closer To Truth

Terry Sejnowski on ChatGPT and the Future of AI

Apr 22, 2025
Terry Sejnowski, Francis Crick Chair at The Salk Institute, dives into the complexities of large language models like ChatGPT. He questions whether these models truly understand language or merely mimic human intelligence. The discussion ranges from the evolution of AI and the pursuit of artificial general intelligence (AGI) to the intersection of neurobiology and AI. Sejnowski also addresses ethical considerations surrounding AI consciousness and its implications for the future, challenging us to rethink what it means to be intelligent.
AI Snips
INSIGHT

LLMs' Unexpected Generality

  • Large language models shocked engineers because a single system handles many different tasks, unlike prior AI, which solved only specific problems.
  • This generality extends AI capabilities far beyond expectations and fundamentally changes how humans interact with computers.
INSIGHT

LLM Learning Via Next-Word Prediction

  • Large language models learn by predicting the next word using self-supervised learning, which requires no labeled data (see the sketch after this list).
  • This prediction task forces them to embed semantic meaning into rich vector representations, enabling broad understanding.
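
A minimal sketch of the self-supervised objective described above, assuming a toy counts-based bigram predictor stands in for a neural LLM: the "labels" are simply the next words already present in the raw text, so no human annotation is needed.

```python
# Toy self-supervised next-word prediction (assumption: a counts-based
# bigram model standing in for a neural network; real LLMs learn the
# same objective with gradient descent over billions of parameters).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Build (word -> next-word) counts from the text itself.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1  # the next word in the text serves as the label

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    following = counts[word]
    best, n = following.most_common(1)[0]
    return best, n / sum(following.values())

print(predict_next("the"))  # ('cat', 0.5): 'cat' follows 'the' in 2 of 4 cases
```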
INSIGHT

Key Concepts Behind GPT Models

  • 'Generative' refers to producing word sequences; 'pre-trained' means trained extensively before deployment; the 'transformer' architecture gives LLMs context memory through self-attention (see the sketch after this list).
  • Together, these elements let LLMs generate relevant, context-aware language quickly.
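
A minimal sketch of the self-attention step the insight names, assuming single-head scaled dot-product attention with toy random weights; real transformers stack many attention heads and layers on top of this operation.

```python
# Single-head scaled dot-product self-attention (toy random weights).
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                    # 4 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))    # token embeddings

# Learned projections (random here) map tokens to queries, keys, values.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Each token scores every token in the context: this is the "context memory".
scores = Q @ K.T / np.sqrt(d_model)
# (GPT-style models would additionally mask future positions here.)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax

output = weights @ V        # each row is a context-aware mix of all tokens
print(weights.round(2))     # rows sum to 1: attention paid to each token
```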