

Irina Rish | AI & Scale
How has the history of AI been shaped by the "bitter lesson" that simple scaling beats complex algorithms, and what comes next? In this talk, Irina Rish traces AI's evolution from rule-based systems to today's foundation models, exploring how scaling laws predicted performance improvements and how the field has recently shifted toward more efficient approaches. She covers the progression from the GPT scaling laws to Chinchilla's compute-optimal training, the rise of inference-time computation with models such as OpenAI's o1, and why we may need to move beyond transformers to truly brain-inspired dynamical systems.
Irina Rish is a professor at the University of Montreal and Mila Quebec AI Institute. She also co-founded a startup focused on developing more efficient foundation models and recently released a suite of open-source compressed models.
This talk was recorded at Vision Weekend Puerto Rico 2025. To see the slides and more talks from the event, please visit our YouTube channel.