

EdTechnical
Owen Henkel & Libby Hills
Join two former teachers - Libby Hills from the Jacobs Foundation and AI researcher Owen Henkel - for the EdTechnical podcast series about AI in education. Each episode, Libby and Owen will ask experts to help educators sift the useful insights from the AI hype. They’ll be asking questions like - how does this actually help students and teachers? What do we actually know about this technology, and what’s just speculation? And (importantly!) when we say AI, what are we actually talking about?
Episodes
Mentioned books

6 snips
Feb 10, 2025 • 9min
Is two years of learning possible in six weeks with AI?
Owen and Libby discuss a study revealing the power of Microsoft Copilot on student learning in Nigeria. In just six weeks, students reportedly achieved learning gains equivalent to almost two years. They delve into the implications of rapid learning claims and discuss how effect sizes can be misunderstood. The conversation also explores the cognitive limits of rapid knowledge acquisition, questioning the feasibility of such interventions and setting the stage for future educational research.

Jan 27, 2025 • 35min
Babies & AI: what can AI tell us about how babies learn language?
In this episode, Libby and Owen interview Mike Frank, Professor at Stanford University and leading expert in child development. This episode has a different angle to the others, as it is more about AI as a scientific instrument rather than as a tool for learning. Libby and Owen have a fascinating discussion with Mike about language acquisition and what we can learn about language learning from large language models. Mike explains some of the differences between how large language models develop an understanding of human language versus how babies do this. There are some big questions touched on here, including how much of the full human experience it’s possible to capture in data. Libby and Owen also make excellent use of Mike’s valuable time by asking for his expert view on why infants find unboxing videos - videos of other children opening gifts - so addictive. LinksMike Frank’s biography New York Times piece about Mike’s work An interview with Mike about his research Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel) Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical Subscribe to BOLD’s newsletter: https://bold.expert/newsletter Stay up to date with all the latest research on child development and learning: https://bold.expert Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.

Jan 13, 2025 • 11min
Teachers & ChatGPT: 25.3 extra minutes a week
In this short, Libby and Owen discuss a hot-off-the-press study that is one of the first to test how ChatGPT impacts the time science teachers spend on lesson preparation. The TLDR is that teachers who used ChatGPT, with a guide, spent 31% less time preparing lessons - that’s 25.3 minutes per week on average. This very promising result points to the potential for ChatGPT and similar generative AI tools to help teachers with their workload. However we encourage you to dig into the summary and report to go beyond the headline result (after listening to this episode) - this is a rich and rigorous study with lots of other interesting findings!Links EEF summary Full study Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel) Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical Subscribe to BOLD’s newsletter: https://bold.expert/newsletter Stay up to date with all the latest research on child development and learning: https://bold.expert Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.

6 snips
Dec 16, 2024 • 38min
How & why did Google build an education specific LLM? (part 2/3)
Irina Jurenka, Research Lead at Google DeepMind, and Muktha Ananda, Engineering Leader in Learning and Education at Google, share their insights on developing LearnLM, a large language model tailored for education. They delve into the intricacies of fine-tuning AI to enhance pedagogical effectiveness, explaining how they measure learner outcomes and the challenges of creating an engaging AI tutor. The conversation highlights the delicate balance between emotional engagement and learning efficiency, showcasing a multidisciplinary approach to innovation in educational technology.

7 snips
Dec 2, 2024 • 21min
AI tutoring part 2: How good can it get?
Ben Kornell, Managing Partner at Common Sense Growth Fund and co-founder of Edtech Insiders, dives into the intricacies of AI tutoring. He differentiates between AI-powered search and genuine tutoring, emphasizing how AI can enhance human interaction. The conversation explores age-specific needs, with younger students benefiting from personal connections while older ones seek independence. Ethical concerns, such as bias and dependency, are also discussed, alongside exciting future developments that may make AI tutoring feel almost like science fiction!

35 snips
Nov 18, 2024 • 36min
Inside the black box: How Google is thinking about AI & education (part 1 of 3)
Rob Wong, the Product Lead for LearnX at Google, dives into the innovative intersection of AI and education. He sheds light on the challenges of modeling learner profiles and the importance of balancing user needs with educational value. The discussion showcases exciting AI tools like YouTutor and Learning Coach Gem, emphasizing personalization in learning experiences. Wong also explores the shift to an 'AI first' approach and the ongoing efforts to fine-tune AI for supportive, engaging education. Prepare for insights into the future of learning!

Oct 21, 2024 • 25min
Big data and algorithmic bias in education: what is it and why does it matter?
Ryan Baker, a Professor at the University of Pennsylvania and Director of the Penn Center for Learning Analytics, dives into the fascinating world of big data and algorithmic bias in education. He highlights how educational data mining can enhance learner engagement and outcomes. The discussion reveals the nuances of algorithmic bias, its societal implications, and why tailored approaches are necessary to ensure fairness. Moreover, Baker debunks myths about AI in education, advocating for a balanced integration that supports educators.

Oct 8, 2024 • 11min
Think aloud or think before you speak?: OpenAI’s new model for advanced reasoning
In this short episode, Libby and Owen discuss OpenAI’s new model for advanced reasoning, o1. They talk about its new capabilities and strengths, and what they think about its significance for education after an initial play around. They talk through the benefits of ‘think aloud’ versus ‘think before you speak’ approaches in education, and how this relates to o1. Links:OpenAI’s announcement about o1Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel) Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical Subscribe to BOLD’s newsletter: https://bold.expert/newsletter Stay up to date with all the latest research on child development and learning: https://bold.expert Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.

4 snips
Sep 23, 2024 • 34min
Misconceptions about misconceptions: How AI can help teachers understand & tackle student misconceptions
Craig Barton, Head of Education at Eedi and a math podcast host, joins Simon Woodhead, Director of Research at Eedi, to dive deep into educational misconceptions. They discuss how AI can enhance understanding and address errors in math education. Listeners will learn about the importance of identifying misconceptions using diagnostic questions and how AI integration can support teachers in overcoming real-world challenges. The conversation emphasizes both the potential and limitations of AI in enriching student learning experiences.

Sep 9, 2024 • 12min
Why Language Models are suck ups and how this can be bad for learning
In this short, Libby and Owen discuss recent research from Anthropic looking at sycophancy – the tendency to agree with users – in large language models (LLMs), and key research from educational psychology about how important feedback is for learning. Libby and Owen connect the two papers and explore why sycophancy is especially a problem when it comes to using LLMs for educational purposes. Links:Anthropic paper on sycophancy in language models John Hattie and Helen Timberley’s paper, The Power of Feedback Join us on social media: BOLD (@BOLD_insights), Libby Hills (@Libbylhhills) and Owen Henkel (@owen_henkel) Listen to all episodes of Ed-Technical here: https://bold.expert/ed-technical Subscribe to BOLD’s newsletter: https://bold.expert/newsletter Stay up to date with all the latest research on child development and learning: https://bold.expert Credits: Sarah Myles for production support; Josie Hills for graphic design; Anabel Altenburg for content production.