Machine Learning Street Talk (MLST)

#91 - HATTIE ZHOU - Teaching Algorithmic Reasoning via In-context Learning #NeurIPS

Dec 20, 2022
In an engaging conversation, Hattie Zhou, a PhD student at Université de Montréal and Mila, discusses her work at Google Brain on teaching algorithmic reasoning to large language models. She outlines the four essential stages of this task, including how to combine skills and how to use skills as tools. Hattie also shares strategies for enhancing the reasoning capabilities of these models, the computational limits they face, and the prospects for applying them to mathematical conjecturing.

Quick takeaways

  • Teaching algorithmic reasoning to large language models through algorithmic prompting can significantly reduce errors and improve reasoning capabilities.
  • In-context learning and attention mechanisms can enhance the efficiency of large language models and push the limits of algorithmic reasoning.

Deep dives

Teaching algorithmic reasoning to large language models

Hattie Zhou has released a paper on teaching algorithmic reasoning to large language models. She identifies and examines four key stages for successful algorithmic reasoning: formulating algorithms as skills, teaching multiple skills simultaneously, teaching how to combine skills, and teaching how to use skills as tools. Through algorithmic prompting, she achieves a significant error reduction on several tasks compared to the best available baselines, demonstrating the viability of this approach.
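
The core mechanic behind algorithmic prompting is to give the model fully worked in-context examples in which every intermediate step of an algorithm is written out explicitly, rather than just question-answer pairs or terse rationales. As a rough illustration, here is a minimal Python sketch that generates such a prompt for multi-digit addition with explicit carries; the function name, prompt wording, and step format are illustrative assumptions, not the paper's actual prompts.

```python
# A minimal sketch of building an algorithmic prompt for multi-digit
# addition. The wording and step format are illustrative assumptions;
# the paper's actual prompts differ in detail.

def addition_steps(a: int, b: int) -> str:
    """Spell out digit-by-digit addition with explicit carries, so that
    every intermediate step of the algorithm appears in the prompt."""
    da, db = str(a)[::-1], str(b)[::-1]  # least significant digit first
    carry = 0
    digits = []
    lines = [f"Problem: {a} + {b}."]
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        total = x + y + carry
        digit, new_carry = total % 10, total // 10
        lines.append(
            f"Position {i + 1}: {x} + {y} + carry {carry} = {total}; "
            f"write {digit}, carry {new_carry}."
        )
        digits.append(str(digit))
        carry = new_carry
    if carry:
        digits.append(str(carry))
        lines.append(f"The final carry {carry} becomes the leading digit.")
    lines.append(f"Answer: {''.join(reversed(digits))}.")
    return "\n".join(lines)

# A few fully worked examples followed by an unsolved question; the model
# is expected to execute the same procedure in-context on the new input.
worked_examples = "\n\n".join(
    addition_steps(a, b) for a, b in [(128, 367), (45, 988)]
)
prompt = worked_examples + "\n\nProblem: 5739 + 862."
print(prompt)
```

The generated worked examples are prepended to a new, unsolved problem, so the model is nudged to imitate the spelled-out procedure step by step rather than guess the answer directly.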
