Data Skeptic

The Limits of NLP

Dec 24, 2019
29:47
Chapters
1. Introduction (00:00 • 2min)
2. A Regression Task or Predicting a Probability? (02:04 • 2min)
3. The Limits of Text-to-Text Transformations (03:56 • 2min)
4. Scaling Is Not the Most Satisfying Solution (06:00 • 2min)
5. The Encoder-Only Architecture in Transfer Learning for Natural Language Processing (07:49 • 3min)
6. GiveWell (Sponsor) (10:34 • 4min)
7. Transfer Learning for Text - Can You Get Natural Text Out of Common Crawl? (14:30 • 2min)
8. Using Loss Functions in Machine Learning Models (16:32 • 3min)
9. Using Attention Masks in a Language Model (19:13 • 2min)
10. The Parameter-FLOP Trade-Off Between Decoder-Only and Encoder-Decoder Language Models (20:47 • 2min)
11. Transfer Learning (23:08 • 2min)
12. Using a Colab Notebook to Fine-Tune a Text Model (25:08 • 2min)
13. Can We Just Keep Putting in Bigger Data Sets and See Better Performance? (26:50 • 3min)

We are joined by Colin Raffel to discuss the paper "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer".
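The paper under discussion (the T5 paper) casts every NLP task as text in, text out, with a task prefix telling the model what to do. As a rough illustration only (not code from the episode), here is a minimal sketch of that text-to-text interface, assuming the Hugging Face transformers library and the public "t5-small" checkpoint:

```python
# Minimal sketch of T5's text-to-text interface.
# Assumes: pip install transformers sentencepiece torch
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Every task is framed as text in, text out; the prefix names the task.
text = "translate English to German: The house is wonderful."
input_ids = tokenizer(text, return_tensors="pt").input_ids
output_ids = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The same interface handles classification, summarization, and regression-style tasks (such as the STS-B similarity scores mentioned in the episode) by changing only the task prefix and the target string.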
