Chapters
Introduction
00:00 • 2min
A Regression Task or Predicting a Probability?
02:04 • 2min
The Limits of Text to Text Transformations
03:56 • 2min
Scaling Is Not the Most Satisfying Solution
06:00 • 2min
The Encoder-Only Architecture in Transfer Learning for Natural Language Processing
07:49 • 3min
GiveWell (Sponsor)
10:34 • 4min
Transfer Learning for Text - Can You Get Natural Text Out of Common Crawl?
14:30 • 2min
Using Loss Functions in Machine Learning Models
16:32 • 3min
Using Attention Masks in a Language Model
19:13 • 2min
The Parameter-FLOP Trade-Off Between Decoder and Encoder-Decoder Language Models
20:47 • 2min
Transfer Learning
23:08 • 2min
Using a Colab Notebook to Fine-Tune a Text Model
25:08 • 2min
Can We Just Keep Putting in Bigger Data Sets and See Better Performance?
26:50 • 3min