The Thesis Review

[22] Graham Neubig - Unsupervised Learning of Lexical Information

Apr 2, 2021
Chapters
1. Introduction (00:00 • 3min)
2. How to Design Your Own Language (02:51 • 2min)
3. How I Learned Mandarin Characters (05:04 • 2min)
4. Unsupervised Learning of Lexical Information for Language Processing Systems (07:34 • 3min)
5. The Future of Natural Language Processing (10:19 • 2min)
6. Multilingual Machine Translation With Soft Decoupled Encoding (12:13 • 2min)
7. The Parametric Versus Non-Parametric Difference in Machine Learning (14:42 • 4min)
8. The Pros and Cons of Non-Parametric Bayesian Statistics in Neural Models (18:31 • 2min)
9. Non-Parametric Language Modeling (20:57 • 6min)
10. How to Learn a Language Model From Text (26:35 • 2min)
11. The Uses of Formalisms in Machine Learning (28:26 • 3min)
12. The Motivation to Work on Translation (31:21 • 3min)
13. The Future of Syntax-Based Models (34:19 • 3min)
14. Scaling Up N-Gram Language Models (37:17 • 2min)
15. The Probabilistic Alignment Grammar (39:07 • 5min)
16. The Importance of Alignment in Machine Learning (43:51 • 2min)
17. The Differences Between Alignment Models and Awesome Aligner (45:25 • 3min)
18. The Importance of Data in Translation (48:12 • 2min)
19. Neural Networks and the Inversion Transduction Grammar (49:47 • 2min)
20. The Root of My PhD Research (51:41 • 2min)
21. The Rise of Deep Learning (53:24 • 2min)
22. The Future of Deep Learning (55:35 • 2min)
23. The Objective Function of a PhD Candidate (57:33 • 2min)
24. Advice for a New Researcher (59:23 • 3min)