
#94 – Ilya Sutskever: Deep Learning

Lex Fridman Podcast

NOTE

The Transformer Is Not Recurrent, Is It?

Small neural nets fail to capture sentiment, while large neural nets succeed: smaller models lack semantic understanding, whereas larger models show signs of partial semantic understanding. GPT-2, a transformer with 1.5 billion parameters, is a significant language model. The transformer is a major advance in neural network architectures; unlike recurrent neural networks, it processes sequences with attention rather than recurrence, which makes attention an especially interesting concept to compare against RNNs.
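To make the attention-versus-recurrence contrast concrete, here is a minimal sketch of scaled dot-product attention, the core operation of the transformer. The shapes, toy data, and function name are illustrative assumptions, not anything from the episode.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal attention sketch: every query attends to all keys in one
    matrix product, with no recurrence over positions (unlike an RNN)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                      # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # weighted sum of values

# Toy usage: 4 tokens with 8-dimensional embeddings, self-attention (Q = K = V)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8)
```

Because the whole sequence is handled in a few matrix multiplications rather than a step-by-step loop, attention parallelizes well on modern hardware, which is part of why transformers scaled to models like GPT-2.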
