Balancing Emotional Engagement and Learning Efficiency in AI Tutoring
This chapter examines the intricate task of optimizing AI tutors by balancing the preferences of both teachers and students. It highlights the role of user feedback and the impact of emotional connections on learning outcomes, while cautioning against the risks of developing overly human-like AI.
This episode is the second in our three-part mini-series with Google, where we find out how one of the world’s largest tech companies developed a family of large language models specifically for education, called LearnLM. This instalment focuses on the technical and conceptual groundwork behind LearnLM. Libby and Owen speak to three expert guests from across Google, including DeepMind, who are heavily involved in developing LearnLM.
One of the problems with out-of-the-box large language models is that they’re designed to be helpful assistants, not teachers. Google set out to develop a large language model better suited to educational tasks, one that others might use as a starting point for education products. In this episode, members of the Google team talk about how they approached this, and why some of the subtleties of good teaching make this an especially tricky undertaking!
They describe the under-the-hood processes that turn a generic large language model into something more attuned to educational needs. Libby and Owen explore how Google’s teams approached fine-tuning to equip LearnLM with pedagogical behaviours that can’t be achieved by prompt engineering alone. This episode offers a rare look at the rigorous, iterative, and multidisciplinary effort it takes to reshape a general-purpose AI into a tool that has the potential to support learning.
Stay tuned for our next episode in this mini-series, where Libby and Owen take a step back and look at how to define tutoring and assess the extent to which an AI tool is delivering.
Team biographies
Muktha Ananda is an engineering leader for Learning and Education at Google. Muktha has applied AI to a variety of domains, including gaming, search, social and professional networks, online advertising, and most recently education and learning. At Google, Muktha’s team builds horizontal AI technologies for learning that can be used across surfaces such as Search, Gemini, Classroom, and YouTube. Muktha also works on Gemini Learning.
Markus Kunesch is a Staff Research Engineer at Google DeepMind and tech lead of the AI for Education research programme. His work is focused on generative AI, AI for Education, and AI ethics, with a particular interest in translating social science research into new evaluations and modelling approaches. Before embarking on AI research, Markus completed a PhD in black hole physics.
Irina Jurenka is a Research Lead at Google DeepMind, where she works with a multidisciplinary team of research scientists and engineers to advance Generative AI capabilities towards the goal of making quality education more universally accessible. Before joining DeepMind, Irina was a British Psychological Society Undergraduate Award winner for her achievements as an Experimental Psychology student at Westminster University. This was followed by a DPhil at the Oxford Center for Computational Neuroscience and Artificial Intelligence.
Join us on social media:
Credits: Sarah Myles for production support; Josie Hills for graphic design