Online discussions about the capabilities of current AI models often go wrong by overestimating or underestimating their potential, so the arguments miss the mark: the genuine uncertainty in forecasting future progress is frequently larger than the gap between the positions being debated. There is also a tension between the norms of science, which aim to converge slowly toward truth, and the fast-paced, prediction-focused discourse the internet encourages. And a senior scientist's credibility lies more in affirming possibilities than in dismissing them, since history so often reveals unexpected scientific breakthroughs.
Read the full transcript here.
Along what axes, and at what rates, is the AI industry growing?

What algorithmic developments have yielded the greatest efficiency boosts?

When, if ever, will we hit the upper limits of the computing power, data, money, etc., we can throw at AI development?

Why do some people fixate on particular tasks that particular AI models can't perform, and conclude that AIs are still pretty dumb and won't be taking our jobs any time soon?

What kinds of tasks are more or less easily automatable?

Should more people work on AI?

What does it mean to "take ownership" of our friendships?

What sorts of thinking patterns employed by AI engineers can be beneficial in other areas of life?

How can we make better decisions, especially about large things like careers and relationships?
Danny Hernandez was an early AI researcher at OpenAI and Anthropic. He's best known for measuring macro progress in AI: for example, he helped show that the compute used in the largest training runs grew at roughly 10x per year between 2012 and 2017, and he helped demonstrate an algorithmic equivalent of Moore's Law that runs even faster. He has also worked on scaling laws and on the mechanistic interpretability of learning from repeated data. He is currently focused on alignment research.