

[22] Graham Neubig - Unsupervised Learning of Lexical Information
Apr 2, 2021
Chapters
Introduction
00:00 • 3min
How to Design Your Own Language
02:51 • 2min
How I Learned Mandarin Characters
05:04 • 2min
Unsupervised Learning of Lexical Information for Language Processing Systems
07:34 • 3min
The Future of Natural Language Processing
10:19 • 2min
Multilingual Machine Translation With Soft Decoupled Encoding
12:13 • 2min
The Parametric Versus Non-Parametric Difference in Machine Learning
14:42 • 4min
The Pros and Cons of Non-Parametric Bayesian Statistics in Neural Models
18:31 • 2min
Non-Parametric Language Modeling
20:57 • 6min
How to Learn a Language Model From Text
26:35 • 2min
The Uses of Formalisms in Machine Learning
28:26 • 3min
The Motivation to Work on Translation
31:21 • 3min
The Future of Syntax Based Models
34:19 • 3min
Scaling Up N-Gram Language Models
37:17 • 2min
The Probabilistic Alignment Grammar
39:07 • 5min
The Importance of Alignment in Machine Learning
43:51 • 2min
The Differences Between Alignment Models and Awesome Aligner
45:25 • 3min
The Importance of Data in Translation
48:12 • 2min
Neural Networks and the Inversion Transduction Grammar
49:47 • 2min
The Root of My PhD Research
51:41 • 2min
The Rise of Deep Learning
53:24 • 2min
The Future of Deep Learning
55:35 • 2min
The Objective Function of a PhD Candidate
57:33 • 2min
Advice for a New Researcher
59:23 • 3min