The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)

Deep Learning, Transformers, and the Consequences of Scale with Oriol Vinyals - #546

Dec 20, 2021
Oriol Vinyals, Lead of the Deep Learning team at DeepMind, shares his perspective on the evolving landscape of AI. He discusses the state of transformer models and their potential limitations, as well as the recent StarCraft II Unplugged paper, which explores large-scale offline reinforcement learning. The conversation covers translating gaming AI innovations into real-world applications, advances in multimodal few-shot learning, and the consequences of scale in deep learning, closing with reflections on future directions.
AI Snips
INSIGHT

Transformers and Future Models

  • Transformers may prove to be the ultimate model for all modalities, but real limitations remain.
  • Adding hierarchical memory and reducing computational cost are key areas for improvement.
INSIGHT

Large Language Models: A Paradigm Shift

  • Large language models (LLMs) represent a paradigm shift due to their scale and performance.
  • LLMs are now powerful enough for practical applications, moving beyond theoretical discussions.
ANECDOTE

AlphaStar's Hybrid Approach

  • AlphaStar combined imitation learning from human replay data with multi-agent self-play.
  • This hybrid approach took AlphaStar to Grandmaster-level performance in StarCraft II (a minimal sketch of the two-phase idea follows this list).
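For a rough sense of how such a hybrid pipeline fits together, here is a minimal Python/PyTorch sketch: a toy policy is first trained by behavioral cloning on stand-in "human replay" data, then refined by self-play against a league of frozen past snapshots using a simple REINFORCE-style update. Everything here (dimensions, the fake environment, the reward signal) is hypothetical and greatly simplified relative to AlphaStar's actual architecture and league training.

```python
# Illustrative sketch only -- not DeepMind's AlphaStar implementation.
# All names, dimensions, and the fake environment below are hypothetical.
import copy
import random

import torch
import torch.nn as nn

OBS_DIM, N_ACTIONS = 32, 8  # toy observation/action sizes

policy = nn.Sequential(nn.Linear(OBS_DIM, 64), nn.ReLU(), nn.Linear(64, N_ACTIONS))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# --- Phase 1: imitation learning (behavioral cloning) on human data ---
def imitation_step(obs_batch, action_batch):
    """One supervised step toward the actions humans took."""
    loss = nn.functional.cross_entropy(policy(obs_batch), action_batch)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Random tensors stand in for real (observation, human action) replay pairs.
for _ in range(100):
    imitation_step(torch.randn(16, OBS_DIM), torch.randint(0, N_ACTIONS, (16,)))

# --- Phase 2: multi-agent self-play against a league of past snapshots ---
league = [copy.deepcopy(policy)]  # league seeded with the imitation policy

def play_episode(agent, opponent):
    """Fake roll-out returning the agent's log-probs and a win/loss return."""
    log_probs, ret = [], 0.0
    for _ in range(10):
        obs = torch.randn(OBS_DIM)
        dist = torch.distributions.Categorical(logits=agent(obs))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        with torch.no_grad():
            opp_action = torch.distributions.Categorical(logits=opponent(obs)).sample()
        # A real environment would step the game; we fake an outcome signal.
        ret += 1.0 if action.item() != opp_action.item() else -1.0
    return torch.stack(log_probs), ret

for step in range(50):
    opponent = random.choice(league)     # sample an opponent from the league
    log_probs, ret = play_episode(policy, opponent)
    loss = -(log_probs.sum() * ret)      # simple REINFORCE-style update
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 10 == 0:
        league.append(copy.deepcopy(policy))  # snapshot into the league
```

The design point the anecdote highlights is the ordering: imitation gives the agent a competent starting policy, and the league of frozen snapshots keeps self-play opponents diverse so the agent does not overfit to beating only its current self.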