"Two-year update on my personal AI timelines" by Ajeya Cotra

LessWrong (Curated & Popular)


The Bar for "Transformative"

There's so much human imitation data on programming that an AI model can train on vastly more examples than a human sees in their lifetime. Coding is intentionally very modular, so it seems especially well suited to being broken down into smaller, shorter pieces and steps. Brute-force search seems like it could play a larger role in progress than in many other sciences. For example, a relatively simple ML model could generate and test out thousands of different small tweaks to architectures, loss functions, optimization algorithms, etc., at a small scale.
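The small-scale brute-force search described here can be sketched roughly as follows. This is a minimal illustration only: the config fields, the tweak lists, and the scoring function are all hypothetical stand-ins, not anything from the episode.

```python
import random

# Hypothetical sketch of the brute-force search idea: enumerate many
# small single tweaks to a training setup, score each one cheaply at
# small scale, and keep the best. All names below are invented.

BASE_CONFIG = {
    "architecture": "transformer",
    "loss": "cross_entropy",
    "optimizer": "adam",
    "learning_rate": 1e-3,
}

# Alternative values to try for each knob, one change at a time.
TWEAKS = {
    "architecture": ["mixture_of_experts", "deep_narrow"],
    "loss": ["label_smoothing", "focal"],
    "optimizer": ["adamw", "lion"],
    "learning_rate": [3e-4, 3e-3],
}

def candidates(base, tweaks):
    """Yield the base config plus every single-tweak variant of it."""
    yield dict(base)
    for key, options in tweaks.items():
        for value in options:
            variant = dict(base)
            variant[key] = value
            yield variant

def small_scale_score(config, seed=0):
    """Stand-in for 'train a tiny model and measure validation loss'.

    A real pipeline would train briefly and return the loss; here we
    just return a deterministic pseudo-random number per config.
    """
    rng = random.Random(repr(sorted(config.items())) + str(seed))
    return rng.random()

# Score every candidate at small scale and keep the lowest "loss".
best = min(candidates(BASE_CONFIG, TWEAKS), key=small_scale_score)
```

The point of the sketch is that each candidate evaluation is cheap and independent, so thousands of tweaks can be screened in parallel before anything is tried at full scale.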

