Astral Codex Ten Podcast

Biological Anchors: A Trick That Might Or Might Not Work

Feb 24, 2022
Chapters
1. Introduction (00:00 • 3min)
2. Is the Industrial Revolution a Good Idea? (02:34 • 3min)
3. How Much Worse Are Human-Engineered Artifacts Than Evolution's? (05:21 • 3min)
4. How Much Compute Would It Take to Train a Machine Learning Model? (08:05 • 2min)
5. How Much Training Do You Need for Human-Level AI? (10:34 • 2min)
6. How Many Floating-Point Operations Did the Evolutionary Process Take? (12:57 • 2min)
7. How Many Parameters Do We Need for Human Intelligence? (15:21 • 2min)
8. The Scaling Laws of the Genome (17:48 • 2min)
9. Training Efficiency Doubles Every 16 Months (20:08 • 3min)
10. Ajeya's Graph, Forward to 2025 (23:22 • 2min)
11. Ajeya's Scale Models for Training a Human-Level AI (25:02 • 3min)
12. Using the Colab Notebook and Google Spreadsheet (27:33 • 3min)
13. Ajeya's Report: Does the Truth Point to Itself? (30:05 • 3min)
14. Ship Size Trends: The Great Eastern, the Outlier (32:49 • 3min)
15. How to Make a Machine, Part One: Biology-Inspired AI (35:28 • 2min)
16. The Human Brain Consumes About 20 Watts of Power (37:57 • 3min)
17. Arithmetic Progress Versus Algorithmic Paradigm Shifts (40:32 • 5min)
18. OpenPhil's Estimate of AGI Is a Double-Edged Sword (45:29 • 4min)
19. AI Production Function: Tech Forecasting Based on Inputs (49:25 • 4min)
20. The Contribution of New Algorithms to Computer Performance (53:06 • 5min)
21. How Well Does Software Progress Get You to 2015 Performance? (57:57 • 2min)
22. OpenPhil Co-CEO Holden Karnofsky (01:00:19 • 4min)
23. How Much of an Update Do We Make? (01:04:28 • 3min)
24. Should I Update to a Black Box With "Early" Scrawled on It? (01:07:11 • 3min)