Manifold

Artificial Intelligence & Large Language Models: Oxford Lecture — #35

May 11, 2023
Chapters
1. Introduction (00:00 • 2min)
2. Three Perspectives for Deep Learning and Neural Nets (02:26 • 3min)
3. How to Design and Train Models That Have Practical Utility (05:52 • 2min)
4. The Expressiveness of a Network (07:36 • 3min)
5. The Problem With the Backpropagation Algorithm (10:25 • 1min)
6. The Problem With Glasses and Physics (11:51 • 3min)
7. Large Width Expansion and Neural Tangent Kernels (15:09 • 5min)
8. The Future of Large Language Models (19:48 • 2min)
9. The Dimensionality of the Vector Space of Our Concepts (22:01 • 4min)
10. How to Train an Embedding Machine (26:16 • 5min)
11. The Transformer Architecture: How Word Order Matters (31:40 • 2min)
12. Attention Is All You Need (33:24 • 3min)
13. The Importance of Attention in OpenAI (36:41 • 2min)
14. The Structure of Attention Heads (38:46 • 5min)
15. The Empirical Structure of the Neural Net (43:23 • 6min)
16. The Geometry of Thought (48:56 • 4min)
17. The Theory of Mind (53:22 • 4min)
18. How to Train a Language Model to Translate Natural Language Instructions (57:37 • 5min)
19. The Problem With Human Natural Language Models (01:02:20 • 2min)
20. The Hallucination of the Models (01:03:59 • 2min)
21. The Importance of Ground Truth in AIs (01:06:23 • 5min)
22. Wolfram's Corpus and the Hallucination Problem (01:11:17 • 3min)
23. How Watson Applies Pavlov's Principles to Learn (01:14:39 • 4min)
24. The Future of AI (01:18:29 • 3min)
25. The Future of LLMs (01:21:13 • 3min)