
"Two-year update on my personal AI timelines" by Ajeya Cotra


CHAPTER

The Bar for "Transformative"

There's so much human imitation data on programming online that the model can train on vastly more examples than a human sees in their lifetime. Coding is intentionally very modular, so it seems especially well suited to being broken down into smaller, shorter episodes and steps. Brute-force search seems like it could play a larger role in progress than in many other sciences. For example, a relatively simple ML model could generate and test out thousands of different small tweaks to architectures, loss functions, optimization algorithms, etc., at a small scale.
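The brute-force-search idea above can be sketched as a simple grid search over small tweaks, each evaluated cheaply at small scale. This is a minimal illustrative sketch, not anything from the episode: `small_scale_eval` is a hypothetical toy stand-in for "launch a small training run and report its validation loss," and the search space values are made up.

```python
import itertools

# Hypothetical stand-in for "train a small model with this config and
# report validation loss." In practice this would launch a small-scale
# training run; here it is a toy function so the sketch is runnable.
# (Assumed-best settings: lr=0.01, width=64, optimizer="adam".)
def small_scale_eval(config):
    penalty = abs(config["lr"] - 0.01) * 100
    penalty += abs(config["width"] - 64) / 64
    penalty += 0.0 if config["optimizer"] == "adam" else 0.5
    return penalty

# A small space of candidate tweaks (made-up values for illustration).
search_space = {
    "lr": [0.001, 0.003, 0.01, 0.03, 0.1],
    "width": [16, 32, 64, 128],
    "optimizer": ["sgd", "adam"],
}

# Brute force: enumerate every combination of tweaks, evaluate each one
# at small scale, and keep the best-scoring configuration.
keys = list(search_space)
best_config, best_loss = None, float("inf")
for values in itertools.product(*(search_space[k] for k in keys)):
    config = dict(zip(keys, values))
    loss = small_scale_eval(config)
    if loss < best_loss:
        best_config, best_loss = config, loss

print(best_config)  # the tweak combination with the lowest toy loss
```

The point of the sketch is only that each candidate tweak is cheap to score, so exhaustive or random search can cover thousands of them without any sophisticated reasoning.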

