Machine Learning Street Talk (MLST)

Daniel Franzen & Jan Disselhoff - ARC Prize 2024 winners

Feb 12, 2025
Daniel Franzen and Jan Disselhoff, winners of the ARC Prize 2024, dive into their approach built on large language models. They discuss how they reached a surprising 53.5% accuracy using techniques such as depth-first search for token selection and test-time training. They also cover the complexities of model training, ethical considerations, and the balance between runtime performance and accuracy, along with the pressure to innovate quickly under competition deadlines and the challenges they faced in algorithm development.
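
As a rough illustration of the depth-first-search idea mentioned above, the sketch below enumerates every completion whose cumulative probability stays above a threshold instead of sampling a single sequence. The model interface (`next_token_logprobs`), the threshold, and the end-of-sequence handling are placeholders for illustration, not the winners' actual code.

```python
# Minimal sketch of depth-first search over token continuations.
# `next_token_logprobs` and `EOS` are hypothetical stand-ins for a real
# model interface; this illustrates the idea, not the winning solution.
import math

EOS = 0                          # assumed end-of-sequence token id
MIN_LOGPROB = math.log(0.10)     # prune branches below 10% cumulative probability

def next_token_logprobs(prefix):
    """Placeholder: return {token_id: logprob} for the next token given `prefix`."""
    raise NotImplementedError

def dfs_decode(prefix, logprob=0.0, results=None):
    """Collect every completion whose total probability stays above the threshold."""
    if results is None:
        results = []
    for token, lp in next_token_logprobs(prefix).items():
        total = logprob + lp
        if total < MIN_LOGPROB:      # prune unlikely branches early
            continue
        if token == EOS:
            results.append((prefix, total))
        else:
            dfs_decode(prefix + [token], total, results)
    return results
```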
INSIGHT

LLMs Can Reason

  • LLMs can reason, contrary to a common belief that they cannot.
  • They achieved 53.5% accuracy on the ARC Challenge by using LLMs creatively.
ANECDOTE

LLMs Infer 2D Structure

  • LLMs inferred the 2D structure of ARC tasks from a purely 1D text representation (see the sketch below).
  • Explicitly providing structural information didn't improve performance significantly.
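
For concreteness, here is one plausible way a 2D ARC grid ends up as flat text; the exact serialization the winners used is an assumption here.

```python
# One plausible row-by-row serialization of an ARC grid into flat text.
# The actual prompt format may differ; this just shows how 2D structure
# survives only implicitly (via line breaks) in a 1D token stream.
def grid_to_text(grid):
    """Render a 2D grid of digits (0-9) as newline-separated rows."""
    return "\n".join("".join(str(cell) for cell in row) for row in grid)

example = [[0, 0, 3],
           [0, 3, 0],
           [3, 0, 0]]
print(grid_to_text(example))
# 003
# 030
# 300
```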
ADVICE

Leverage REARC for Data Augmentation

  • Use REARC, a generator that can produce a practically unlimited number of new examples.
  • It creates additional examples for each challenge and helps avoid saturating on the limited original dataset (see the sketch below).
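
A hedged sketch of how a per-task example generator like REARC might be plugged into data augmentation; `rearc_generator` is a hypothetical stand-in, and the real REARC interface may differ.

```python
# Hedged sketch: growing a training set with a per-task example generator.
# `rearc_generator(task_id, n)` is a hypothetical stand-in for whatever
# interface the REARC generators actually expose.
import random

def rearc_generator(task_id, n):
    """Placeholder: return n freshly generated (input_grid, output_grid) pairs
    that follow the same transformation rule as the original task."""
    raise NotImplementedError

def build_augmented_dataset(task_ids, per_task=100, seed=0):
    """Combine generated examples across tasks and shuffle them for training."""
    random.seed(seed)
    dataset = []
    for task_id in task_ids:
        for inp, out in rearc_generator(task_id, per_task):
            dataset.append({"task": task_id, "input": inp, "output": out})
    random.shuffle(dataset)
    return dataset
```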