Ed-Technical

How & why did Google build an education-specific LLM? (part 2/3)

Dec 16, 2024
Irina Jurenka, Research Lead at Google DeepMind, and Muktha Ananda, Engineering Leader in Learning and Education at Google, share their insights on developing LearnLM, a large language model tailored for education. They discuss how they fine-tune AI for pedagogical effectiveness, how they measure learner outcomes, and the challenges of building an engaging AI tutor. The conversation highlights the delicate balance between emotional engagement and learning efficiency, and the multidisciplinary approach behind innovation in educational technology.
AI Snips
INSIGHT

Education Complete

  • Effective tutoring requires excelling at many individual components.
  • Building an AI tutor therefore means solving many sub-tasks, such as quizzing.
INSIGHT

Proactive Tutoring

  • LLMs are typically trained as helpful assistants, not teachers who lead conversations.
  • AI tutors need to proactively guide learning, unlike typical LLMs (see the sketch below).
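To make the "leads the conversation" point concrete, here is a minimal sketch of a turn loop in which the tutor produces the next move on every turn, including the first, instead of only replying when asked. Everything here (the `call_model` placeholder, the message format, the system prompt) is an illustrative assumption, not a description of how LearnLM works.

```python
# Minimal sketch of a proactive tutoring loop (illustrative only; not LearnLM).
# `call_model` is a placeholder for whatever LLM backend would be used.

def call_model(messages):
    # Placeholder: swap in a real LLM call here. A canned probe is returned so
    # the control flow can be run end to end without any external service.
    return "Before we go further: can you explain the idea back in your own words?"

def tutor_turn(history, learner_reply=None):
    """The tutor speaks first and after every learner reply, steering the lesson."""
    if learner_reply is not None:
        history.append({"role": "user", "content": learner_reply})
    # The tutor always generates the next move (a question, hint, or quiz item),
    # rather than waiting for the learner to ask for help.
    move = call_model(history)
    history.append({"role": "assistant", "content": move})
    return move

history = [{"role": "system", "content": "You lead the lesson; ask before telling."}]
print(tutor_turn(history))                                         # tutor opens the session
print(tutor_turn(history, "I think it's about light scattering."))  # tutor follows up
```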
INSIGHT

Prompting vs. Fine-Tuning

  • Prompting is effective but limited; fine-tuning addresses the gaps prompting alone cannot close.
  • Start with prompting, then use fine-tuning to refine and reinforce the desired behaviors (see the sketch below).
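As a rough sketch of that workflow, the snippet below first encodes the desired tutoring behavior in a system prompt and then writes the same behavior out as example conversations in a generic chat-style JSONL file of the kind many fine-tuning pipelines accept. The prompt wording, field names, and file name are illustrative assumptions, not LearnLM's actual training setup.

```python
import json

# Step 1: prompting. A hypothetical system prompt that nudges a general-purpose
# assistant toward tutoring behavior (illustrative only).
TUTOR_SYSTEM_PROMPT = (
    "You are a tutor. Do not give the answer away immediately. "
    "Ask a guiding question, check understanding with a short quiz, "
    "and reveal the full solution only after the learner has tried."
)

# Step 2: fine-tuning. The behaviors that prompting cannot reliably produce are
# captured as example conversations and used as supervised training data.
examples = [
    {
        "messages": [
            {"role": "system", "content": TUTOR_SYSTEM_PROMPT},
            {"role": "user", "content": "What is the derivative of x^2?"},
            {
                "role": "assistant",
                "content": (
                    "Let's work it out together: what happens to x^2 when x "
                    "grows by a small amount h? Try expanding (x + h)^2 first."
                ),
            },
        ]
    },
]

# Write one conversation per line in JSONL, a common fine-tuning data format.
with open("tutor_finetune.jsonl", "w", encoding="utf-8") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```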