FYI - For Your Innovation

The Current State of Artificial Intelligence with James Wang

May 4, 2023
Chapters
1. Introduction (00:00 • 3min)
2. The Convergence of Stars Aligning (03:04 • 4min)
3. The Cerebras-GPT Family of Large Language Models (07:26 • 2min)
4. The Evolution of Deep Learning in AI (09:41 • 4min)
5. The Importance of Empirical Laws in AI (13:46 • 5min)
6. The Transferability of Scaling Laws to Downstream Tasks (19:04 • 2min)
7. Scaling Laws for AI Development (21:10 • 2min)
8. The Importance of Scaling Models (22:56 • 2min)
9. The Importance of Diffusion in AI (25:25 • 2min)
10. Cerebras's Role in Inference (27:29 • 4min)
11. The Importance of Graceful Scaling (31:43 • 3min)
12. The Advantages of Disaggregated Memory for Language Modeling (34:44 • 2min)
13. NVIDIA's Approach to GPU Training Is Similar to the GTC Presentation (36:35 • 2min)
14. The Pros and Cons of Dojo's Approach to Training Tiles (38:40 • 4min)
15. Cerebras AI Model Studio: A Platform Offering (42:42 • 5min)
16. The Cost of Platform as a Service (47:16 • 2min)
17. The Future of AI (49:15 • 4min)
18. The Differences Between GPT and Existing Software (53:05 • 2min)
19. The Importance of Generalization in Language Models (54:47 • 5min)
20. The Future of AI (59:26 • 3min)
21. The Future of AI (01:02:33 • 4min)
22. The Future of ChatGPT (01:06:52 • 2min)