

#91 - HATTIE ZHOU - Teaching Algorithmic Reasoning via In-context Learning #NeurIPS
Dec 20, 2022
In an engaging conversation, Hattie Zhou, a PhD student at Université de Montréal and Mila, discusses the work on teaching algorithmic reasoning to large language models that she carried out at Google Brain. She outlines the four essential stages of teaching algorithms to LLMs, including how to compose algorithms and use them as tools. Hattie also shares strategies for strengthening the reasoning capabilities of these models, the computational limits they face, and the exciting prospects for their application to mathematical conjecturing.
LLMs and Reasoning
- Large language models (LLMs) struggle with symbolic manipulation and reasoning.
- The paper explores teaching LLMs explicit algorithms in context to improve their performance on tasks like multi-digit addition.
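The idea can be sketched as a prompt-construction step: instead of showing only question/answer pairs, each demonstration spells out the algorithm's steps (here, digit-by-digit addition with explicit carries) so the model can imitate the procedure itself. The prompt format below is a hypothetical illustration, not the exact one used in the paper.

```python
def addition_trace(a: int, b: int) -> str:
    """Render a worked addition example, right to left, with explicit carries."""
    lines = [f"Problem: {a} + {b}"]
    xs, ys = str(a)[::-1], str(b)[::-1]  # reverse so index 0 is the ones digit
    carry, digits = 0, []
    for i in range(max(len(xs), len(ys))):
        da = int(xs[i]) if i < len(xs) else 0
        db = int(ys[i]) if i < len(ys) else 0
        total = da + db + carry
        lines.append(
            f"Step {i + 1}: {da} + {db} + carry {carry} = {total}, "
            f"write {total % 10}, carry {total // 10}"
        )
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    lines.append(f"Answer: {''.join(reversed(digits))}")
    return "\n".join(lines)


def build_prompt(examples: list[tuple[int, int]], query: tuple[int, int]) -> str:
    """Concatenate worked demonstrations, then pose an unsolved problem."""
    demos = "\n\n".join(addition_trace(a, b) for a, b in examples)
    return f"{demos}\n\nProblem: {query[0]} + {query[1]}"
```

Because every demonstration exposes the same input-independent procedure, the model is shown *how* to add rather than just *that* certain sums are correct, which is what enables generalization to longer numbers.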
LLMs as Compilers
- Algorithmic prompting treats the LLM like a compiler: the prompt acts as a program that the model executes on new inputs.
- This lets LLMs extrapolate to tasks and input lengths they weren't explicitly trained on.
Algorithmic Reasoning Defined
- Algorithmic reasoning means solving tasks with algorithms: procedures whose steps do not depend on the specific input.
- This applies both to tasks with rigidly defined steps and to those with softer, more abstract patterns.