The Thesis Review

[26] Kevin Ellis - Algorithms for Learning to Induce Programs

May 29, 2021
Kevin Ellis, an assistant professor at Cornell and a research scientist at Common Sense Machines, dives into the intriguing world of AI and program synthesis. He discusses his groundbreaking work on DreamCoder, which automates the creation of programming libraries using neural networks. Ellis explores the fusion of AI with natural language and cognitive models, emphasizing Bayesian approaches that mirror human cognition. He shares insights on bridging program synthesis with theorem proving, highlighting the importance of reusable abstractions in machine learning.
INSIGHT

Differences between programs and language

  • Programs require different data and learning approaches than natural language, with far fewer large datasets available.
  • Programs emphasize compositionality and reuse, but do not demand the full ambiguity handling and common-sense knowledge that natural language does.
ANECDOTE

Origin of research interest

  • Kevin Ellis became interested in program induction through cognitive science and theory of computation courses toward the end of his undergraduate studies.
  • He was fascinated by the flexibility of human intelligence and saw Turing-complete representations as key to achieving that flexibility in AI.
INSIGHT

Program synthesis vs. induction distinction

  • Program synthesis generates programs from precise specifications, automating tedious and error-prone coding tasks.
  • Program induction infers programs from ambiguous data, functioning more like learning than constraint solving.
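The induction side of this distinction can be made concrete with a minimal sketch: enumerate programs over a toy DSL and keep the first one consistent with a handful of input-output examples. This is an illustrative toy, not DreamCoder's actual algorithm; the DSL primitives and the `induce` helper are invented for this example.

```python
from itertools import product

# Toy DSL (hypothetical): each primitive is a named unary function on integers.
DSL = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    """Apply a sequence of DSL primitives to x, left to right."""
    for op in program:
        x = DSL[op](x)
    return x

def induce(examples, max_depth=3):
    """Enumerate programs up to max_depth; return the first one
    consistent with every (input, output) example, else None."""
    for depth in range(1, max_depth + 1):
        for program in product(DSL, repeat=depth):
            if all(run(program, i) == o for i, o in examples):
                return program
    return None

# The examples underdetermine the program -- several candidates could fit;
# search simply returns the first consistent one it finds.
print(induce([(2, 9), (3, 16)]))  # → ('inc', 'square')
```

Unlike synthesis from a precise specification, nothing here proves the returned program is the intended one; more examples would be needed to narrow the hypothesis space, which is what makes induction feel like learning.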