

AI in Education Podcast
Dan Bowen and Ray Fleming
Dan Bowen and Ray Fleming are experienced education renegades who have worked in many educational institutions and education companies across the world. They talk about Artificial Intelligence in Education - what it is, how it works, and the different ways it is being used. It's not too serious, or too technical, and is intended to be a good conversation.
Please note the views on the podcast are our own or those of our guests, and not of our respective employers (unless we say otherwise at the time!)
Episodes

Nov 19, 2023 • 27min
Rapid Rundown - Another gigantic news week for AI in Education
Rapid Rundown - Series 7 Episode 3

All the key news since our episode on 6th November - including new research on AI in education, and a big tech news week!

It's okay to write research papers with Generative AI - but not to review them!
The publishing arm of the American Association for the Advancement of Science (they publish six science journals, including "Science") says authors can use "AI-assisted technologies as components of their research study or as aids in the writing or presentation of the manuscript" as long as their use is noted. But they've banned "AI-generated images and other multimedia" without explicit permission from the editors. And they won't allow the use of AI by reviewers, because this "could breach the confidentiality of the manuscript". A number of other publishers have made announcements recently, including the International Committee of Medical Journal Editors, the World Association of Medical Editors and the Council of Science Editors.
https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models

Learning From Mistakes Makes LLM Better Reasoner
https://arxiv.org/abs/2310.20689
News article: https://venturebeat.com/ai/microsoft-unveils-lema-a-revolutionary-ai-learning-method-mirroring-human-problem-solving
Researchers from Microsoft Research Asia, Peking University, and Xi'an Jiaotong University have developed a new technique to improve large language models' (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn. Their strategy, Learning from Mistakes (LeMa), trains AI to correct its own mistakes, leading to enhanced reasoning abilities. The researchers first had models like LLaMA-2 generate flawed reasoning paths for math word problems. GPT-4 then identified errors in the reasoning, explained them, and provided corrected reasoning paths. The researchers used the corrected data to further train the original models.

Role of AI chatbots in education: systematic literature review
International Journal of Educational Technology in Higher Education
https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-023-00426-1#Sec8
This review looks at chatbots from the perspective of students and educators, and the benefits and concerns raised in the 67 research papers studied: "We found that students primarily gain from AI-powered chatbots in three key areas: homework and study assistance, a personalized learning experience, and the development of various skills. For educators, the main advantages are the time-saving assistance and improved pedagogy. However, our research also emphasizes significant challenges and critical factors that educators need to handle diligently. These include concerns related to AI applications such as reliability, accuracy, and ethical considerations." It also includes a fantastic list of references for papers discussing chatbots in education, many from this year.

More Robots are Coming: Large Multimodal Models (ChatGPT) can Solve Visually Diverse Images of Parsons Problems
https://arxiv.org/abs/2311.04926
https://arxiv.org/pdf/2311.04926.pdf
Parsons problems are a type of programming puzzle where learners are given jumbled code snippets and must arrange them in the correct logical sequence, rather than producing the code from scratch. "While some scholars have advocated for the integration of visual problems as a safeguard against the capabilities of language models, new multimodal language models now have vision and language capabilities that may allow them to analyze and solve visual problems. ... Our results show that GPT-4V solved 96.7% of these visual problems." The findings have significant implications for computing education.
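For anyone who hasn't met the format, here's a minimal sketch of what a Parsons problem looks like in Python - the function and its lines are invented purely for illustration, not taken from the paper:

```python
# A tiny, hypothetical Parsons problem: the learner receives these
# shuffled lines (indentation included) and must order them into a
# working function, rather than writing the code from scratch.
shuffled = [
    "        total += n",
    "def sum_list(numbers):",
    "    return total",
    "    total = 0",
    "    for n in numbers:",
]

# One correct ordering, given as indices into `shuffled`.
solution = [1, 3, 4, 0, 2]
program = "\n".join(shuffled[i] for i in solution)

# Executing the reassembled program shows the ordering is valid.
namespace = {}
exec(program, namespace)
print(namespace["sum_list"]([1, 2, 3]))  # → 6
```

The visual variants studied in the paper present puzzles like this as images (drag-and-drop blocks), which is exactly what GPT-4V was able to read and solve.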
The high success rate of GPT-4V in solving visually diverse Parsons problems suggests that relying solely on visual complexity in coding assignments might not effectively challenge students, or assess their true understanding, in the era of advanced AI tools. This raises questions about the effectiveness of traditional assessment methods in programming education, and the need for innovative approaches that can more accurately evaluate a student's coding skills and understanding. It's interesting to note that research earlier in the year found LLMs could only solve half of these problems - so things have moved very fast!

The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4
https://arxiv.org/pdf/2311.07361.pdf
By Microsoft Research and Microsoft Azure Quantum researchers: "Our preliminary exploration indicates that GPT-4 exhibits promising potential for a variety of scientific applications, demonstrating its aptitude for handling complex problem-solving and knowledge integration tasks." The study explores the impact of GPT-4 in advancing scientific discovery across various domains, investigating its use in drug discovery, biology, computational chemistry, materials design, and solving Partial Differential Equations (PDEs). It primarily uses qualitative assessments, with some quantitative measures, to evaluate GPT-4's understanding of complex scientific concepts and its problem-solving abilities. While GPT-4 shows remarkable potential and understanding in these areas, particularly in drug discovery and biology, it faces limitations in precise calculations and processing complex data formats. The research underscores GPT-4's strengths in integrating knowledge, predicting properties, and aiding interdisciplinary research.

An Interdisciplinary Outlook on Large Language Models for Scientific Research
https://arxiv.org/abs/2311.04929
Overall, the paper presents LLMs as powerful tools that can significantly enhance scientific research. They offer the promise of faster, more efficient research processes, but this comes with the responsibility to use them well and critically, ensuring the integrity and ethical standards of scientific inquiry. It discusses how they are being used effectively in eight areas of science, and deals with issues like hallucinations - but, as it points out, even in engineering, where there's low tolerance for mistakes, GPT-4 can pass critical exams. This research is a good starting point for researchers thinking about how LLMs may help or change their research areas, and help with scientific communication and collaboration.

With ChatGPT, do we have to rewrite our learning objectives -- CASE study in Cybersecurity
https://arxiv.org/abs/2311.06261
This paper examines how AI tools like ChatGPT can change the way cybersecurity is taught in universities. It uses a method called "Understanding by Design" to look at learning objectives in cybersecurity courses. The study suggests that ChatGPT can help students achieve these objectives more quickly and understand complex concepts better. However, it also raises questions about how much students should rely on AI tools. The paper argues that while AI can assist in learning, it's crucial for students to understand fundamental concepts from the ground up. It provides examples of how ChatGPT could be integrated into a cybersecurity curriculum, proposing a balance between traditional learning and AI-assisted education. "We hypothesize that ChatGPT will allow us to accelerate some of our existing LOs, given the tool's capabilities... From this exercise, we have learned two things in particular that we believe will need to be further examined by all educators. First, our experiences with ChatGPT suggest that the tool can provide a powerful means to allow learners to generate pieces of their work quickly... Second, we will need to consider how to teach concepts that need to be experienced from 'first-principle' learning approaches, and learn how to motivate students to perform some rudimentary exercises that 'the tool' can easily do for me."

A Step Closer to Comprehensive Answers: Constrained Multi-Stage Question Decomposition with Large Language Models
https://arxiv.org/abs/2311.07491
What this means is that AI is continuing to get better - and people are finding ways to make it even better - at passing exams and multiple-choice questions.

Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study
https://arxiv.org/abs/2311.07387
Good news for me though - I still have a skill that can't be replaced by a robot. AI might be great at playing Go, and Chess, and seemingly everything else, but it turns out it can't play Minesweeper as well as a person. So my leisure time is safe!

DEMASQ: Unmasking the ChatGPT Wordsmith
https://arxiv.org/abs/2311.05019
Finally, I'll mention this research, where the researchers have proposed a new method of ChatGPT detection that assesses the 'energy' of the writing. It might be a step forward, but it took me a while to find the thing I'm always looking for with detectors: the false positive rate - i.e. how many students in a class of 100 it will accuse of using ChatGPT when they actually wrote the work themselves. The answer is a 4% false positive rate on research abstracts published on arXiv - but apparently it's 100% accurate on Reddit. I'm not sure that's really good enough for education use, where students are more likely to be using academic style than Reddit style! I'll leave you to read the research if you want to know more, and learn about the battle between AI writers and AI detectors.

Harvard's AI Pedagogy Project
Outside of research, it's worth taking a look at work from the metaLAB at Harvard called "Creative and critical engagement with AI in education". It's a collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi. It includes an AI starter, an LLM tutorial, lots of resources, and a set of assignments.
https://aipedagogy.org/

Microsoft Ignite Book of News
There's way too much to fit into the show notes, so head straight to the Book of News for all the huge AI announcements from Microsoft's big developer conference.
Link: Microsoft Ignite 2023 Book of News

Nov 10, 2023 • 16min
Rapid Rundown : A summary of the week of AI in education and research
This week's rapid rundown of AI in education includes topics such as false AI-generated allegations, UK DfE guidance on generative AI, Claude's undetectable AI writing, the contrast between old and new worlds of AI, OpenAI's exciting announcements, specialisation and research bots, GPT-4 updates, and gender bias in AI education.

Nov 1, 2023 • 29min
Regeneration: Human Centred Educational AI
After 72 episodes and six series, we have some exciting news. The AI in Education podcast is returning to its roots - with the original co-hosts Dan Bowen and Ray Fleming. Dan and Ray started this podcast over four years ago, and during that time Dan's always been here, rotating through co-hosts Ray, Beth and Lee; now we're back to the original dynamic duo and a reset of the conversation. Without doubt, 2023 has been the year that AI hit the mainstream, so it's time to expand our thinking right out. Also, new series alert! We're starting Series 7. In this episode, Dan and Ray discuss the rapid advancements in AI and the impact on various industries. They explore the concept of generative AI and its implications, then the conversation shifts to the challenges and opportunities of implementing AI in business and education settings. The hosts highlight the importance of a human-centred approach to AI and the need for a mindset shift in organisations. They also touch on topics such as bias in AI, the role of AI in education, and the potential benefits and risks of AI technology. Throughout the discussion, they emphasise the need for continuous learning, collaboration, and understanding of AI across different industries.

Sep 19, 2023 • 35min
Dr Nick Jackson - Student agency: Into my 'AI'rms
Dr Nick Jackson, expert educator, and leader of student agency, talks about AI's impact on assessment, the importance of AI in education, the intersection of AI and music, and the power of technology in young students' hands.

Aug 11, 2023 • 45min
AI - The fuel that drives insight in K12 with Travis Smith
In this episode Dan talks to Travis Smith about many aspects of Generative AI and data in education. Is AI the fuel that drives insight? Can we really personalise education? We also look at examples of how AI is currently being used in education.

Jul 28, 2023 • 35min
What just happened?
To kick off Series 6, Dan interviews Ray Fleming about 'What just happened?' - the landing of Generative AI and ChatGPT in society. We look at how it might change assessment, courses and more. AI Business School Artificial Intelligence Courses - Microsoft AI

Dec 21, 2022 • 54min
Christmas, Infinite Monkeys and everything
Welcome to this week's episode of the podcast! We have a special guest - Ray Fleming, a podcast pioneer, educationalist, and improv master. Join Dan, Lee, Beth, and Ray as we discuss the events of 2022 and look forward to the future and the holidays.

We have some interesting resources to share with you: ChatGPT: Optimizing Language Models for Dialogue (openai.com) and DALL·E 2 (openai.com)

Looking for some holiday reading recommendations? Check out these books:
Broken: How Our Social Systems Are Failing Us and How We Can Fix Them by Paul LeBlanc (https://www.amazon.com.au/Broken-Social-Systems-Failing-Them/dp/1637741766)
Hack Your Bureaucracy by Marina Nitze and Nick Sinai (https://www.amazon.com.au/Hack-Your-Bureaucracy-Things-Matter/dp/0306827751)

And don't forget to check out the article about how Takeru Kobayashi "redefined the problem" at the world hotdog eating championship: https://www.businessinsider.com/how-takeru-kobayashi-changed-competitive-eating-2016-7

We hope you enjoy the episode! This podcast is produced by Microsoft Australia & New Zealand employees, Lee Hickin, Dan Bowen, and Beth Worrall. The views and opinions expressed on this podcast are our own.

Dec 12, 2022 • 39min
Sustainability and the Future
Welcome to the AI podcast! In this episode, Beth, Dan, and Lee are joined by the Microsoft ANZ Sustainability lead, Brett Shoemaker. This episode discusses all things sustainability. This podcast is produced by Microsoft Australia & New Zealand employees, Lee Hickin, Dan Bowen, and Beth Worrall. The views and opinions expressed on this podcast are our own. Show links: https://www.linkedin.com/in/brettshoemaker/

Nov 7, 2022 • 39min
Hacking for good: ideas and tips
In this episode Beth, Lee and Dan look at the mechanics of creating hackathons, based on our experiences on various projects around ethics and hacking for good. From CSIRO projects to the Imagine Academy, we look at what makes them a success and share tips on what works well.

Oct 4, 2022 • 51min
Mastery and lifelong learning moving Beyond ATAR
In this episode Beth, Dan and Lee are joined by Jan Owen AO. We discuss growing leadership from toads, skills and policy changes to drive future assessment. Digital Pulse 2022 (acs.org.au) This podcast is produced by Microsoft Australia & New Zealand employees, Lee Hickin, Dan Bowen, and Beth Worrall. The views and opinions expressed on this podcast are our own.