
AI Education Podcast

Latest episodes

Jan 18, 2024 • 39min

Education, Data, and Generative AI - A Futurist Perspective with Kate Carruthers

Experts discuss the fusion of AI and education, focusing on the role of data in transforming traditional systems. They explore the potential of generative AI in education and its impact on business models. The conversation touches on barriers to AI adoption, adapting education to leverage unstructured data through AI, and the implications of this shift in the education sector.
Dec 21, 2023 • 50min

Joe Dale - the ultimate Christmas AI gift list

Our final episode for 2023 is an absolutely fabulous Christmas gift, full of presents in the form of different AI tips and services. Joe Dale, a UK-based education ICT and Modern Foreign Languages consultant, spends 50 lovely minutes sharing a huge list of AI tools for teachers, and ideas for how to get the most out of AI in learning. We strongly recommend you find and follow Joe on LinkedIn or Twitter. And if you're a language teacher, join Joe's Language Teaching with AI Facebook group.

Joe's also got an upcoming webinar series on using ChatGPT for language teachers: Resource Creation with ChatGPT on Mondays - 10.00, 19.00 and 21.30 GMT (UTC) in January - 8th, 15th, 22nd and 29th January 2024. Good news - 21:30 GMT is 8:30 AM and 10:00 GMT is 9 PM in Sydney/Melbourne, so there are two times that work for Australia. And if you can't attend live, you get access to the recordings and all the prompts and guides that Joe shares on the webinars.

There was a plethora of AI tools and resources mentioned in this episode:
ChatGPT: https://chat.openai.com
DALL-E: https://openai.com/dall-e-2
Voice Dictation in MS Word Online: https://support.microsoft.com/en-au/office/dictate-your-documents-in-word-3876e05f-3fcc-418f-b8ab-db7ce0d11d3c
Transcripts in Word Online: https://support.microsoft.com/en-us/office/transcribe-your-recordings-7fc2efec-245e-45f0-b053-2a97531ecf57
AudioPen: https://audiopen.ai
'Live Titles' in Apple Clips: https://www.apple.com/uk/clips
Scribble Diffusion: https://www.scribblediffusion.com
Wheel of Names: https://wheelofnames.com
Blockade Labs: https://blockadelabs.com
Momento360: https://momento360.com
Book Creator: https://app.bookcreator.com
Bing Chat: https://www.bing.com/chat
Voice Control for ChatGPT: https://chrome.google.com/webstore/detail/voice-control-for-chatgpt/eollffkcakegifhacjnlnegohfdlidhn
Joe Dale's Language Teaching with AI Facebook group: https://www.facebook.com/groups/1364632430787941
TalkPal for Education: https://talkpal.ai/talkpal-for-education
Pi: https://pi.ai/talk
ChatGPT and Azure: https://azure.microsoft.com/en-us/blog/chatgpt-is-now-available-in-azure-openai-service
Google Earth: https://www.google.com/earth
Questionwell: https://www.questionwell.org
MagicSchool: https://www.magicschool.ai
Eduaide: https://www.eduaide.ai
"I can't draw" in Padlet: https://padlet.com
Dec 14, 2023 • 38min

Revolutionising Classrooms: Inside the New Australian AI Frameworks with their Creators

In this podcast, Andrew Smith from ESA and AI guru Leon Furze discuss the new Australian AI Frameworks. They explore topics such as privacy, ethics, and transparency, while emphasizing the importance of respecting teachers' professional judgment. The podcast also delves into the purpose and evolution of the framework, the development process of the Vine network's practical framework, and the potential of multimodal technologies and generative AI. They encourage teachers to explore and experiment with AI technologies like chatbots and image generation platforms.
Dec 6, 2023 • 22min

Matt Esterman at the AI in Education Conference

Matt Esterman is Director of Innovation & Partnerships, and a history teacher, at Our Lady of Mercy College Parramatta. An educational leader who's making things happen with AI in education in Australia, Matt created and ran the AI in Education conference in Sydney in November 2023, where this interview with Dan and Ray was recorded.

Part of Matt's role is to help his school on the journey to adopting and using generative AI. As an example, he spent time understanding the UNESCO AI Framework for education, and relating it to his own school. One of the interesting perspectives from Matt is his response to students using ChatGPT to write assignments and assessments - and the advice for teachers within his school on how to handle this well with them (which didn't involve changing their assessment policy!):

"And so we didn't have to change our assessment policy. We didn't have to change our ICT acceptable use policy. We just apply the rules that should work no matter what. And just for the record, like I said, 99 percent of the students did the right thing anyway."

This interview is full of common sense advice, and it's reassuring to hear the perspective of a leader, and a school, that might be ahead on the journey. Follow Matt on Twitter and LinkedIn
Dec 1, 2023 • 22min

Another Rapid Rundown - news and research on AI in Education

Academic Research

Researchers Use GPT-4 To Generate Feedback on Scientific Manuscripts
https://hai.stanford.edu/news/researchers-use-gpt-4-generate-feedback-scientific-manuscripts
https://arxiv.org/abs/2310.01783

Two episodes ago I shared the news that for some major scientific publications, it's okay to write papers with ChatGPT, but not to review them. But… combining a large language model and open-source peer-reviewed scientific papers, researchers at Stanford built a tool they hope can help other researchers polish and strengthen their drafts. Scientific research has a peer problem: there simply aren't enough qualified peer reviewers to review all the studies. This is a particular challenge for young researchers and those at less well-known institutions, who often lack access to experienced mentors who can provide timely feedback. Moreover, many scientific studies get "desk rejected" - summarily denied without peer review. James Zou and his research colleagues tested GPT-4-generated reviews against human reviews of 4,800 real Nature and ICLR papers. They found that AI reviewers overlap with human ones about as much as humans overlap with each other; 57% of authors found the AI feedback helpful, and 83% said it beat at least one of their real human reviewers.

Academic Writing with GPT-3.5 (ChatGPT): Reflections on Practices, Efficacy and Transparency
https://dl.acm.org/doi/pdf/10.1145/3616961.3616992

Oz Buruk, from Tampere University in Finland, published a paper giving some really solid advice (and sharing his prompts) for getting ChatGPT to help with academic writing. He uncovered six roles: Chunk Stylist, Bullet-to-Paragraph, Talk Textualizer, Research Buddy, Polisher and Rephraser. He includes examples of the results, and the prompts he used for each.
Handy for people who want to use ChatGPT to help them with their writing, without having to resort to trickery.

Considerations for Adapting Higher Education Technology Course for AI Large Language Models: A Critical Review of the Impact of ChatGPT
https://www.sciencedirect.com/journal/machine-learning-with-applications/articles-in-press

This is a journal pre-proof from the Elsevier journal "Machine Learning with Applications", and takes a look at how ChatGPT might impact assessment in higher education. Unfortunately, it's an example of how academic publishing can't keep up with the rate of technology change: the four academics from the University of Prince Mugrin who wrote it submitted it on 31 May, it was accepted into the journal in November - and guess what? Almost everything in the paper has changed. They spent 13 of the 24 pages detailing exactly which assessment questions ChatGPT 3 got right or wrong, but when I re-tested it on some sample questions, it got nearly all of them correct. They then tested AI detectors - and we both know that's since changed again, with the advice that none work. And finally, they checked to see if 15 top universities had AI policies. It's interesting research, but tbh it would have been much, much more useful in May than it is now. And that's a warning about some of the research we're seeing: you need to check carefully whether the conclusions are still valid - e.g. if they don't tell you which version of OpenAI's models they've tested, then the conclusions may not be worth much.
It's a bit like the logic we apply to students: "They've not mastered it… yet".

A SWOT (Strengths, Weaknesses, Opportunities, and Threats) Analysis of ChatGPT in the Medical Literature: Concise Review
https://www.jmir.org/2023/1/e49368/

They looked at 160 papers published on PubMed in the first three months of ChatGPT, up to the end of March 2023 - and the paper was written in May 2023, and only just published in the Journal of Medical Internet Research. I'm pretty sure that many of the results are out of date - for example, it specifically lists unsuitable uses for ChatGPT including "writing scientific papers with references, composing resumes, or writing speeches", and that's definitely no longer the case.

Emerging Research and Policy Themes on Academic Integrity in the Age of Chat GPT and Generative AI
https://ajue.uitm.edu.my/wp-content/uploads/2023/11/12-Maria.pdf

This paper, from a group of researchers in the Philippines, was written in August. It referenced 37 papers, and then looked at the AI policies of the top 20 universities in the QS Rankings, especially around academic integrity and AI. All of this helped the researchers create a 3E Model: Enforcing academic integrity, Educating faculty and students about the responsible use of AI, and Encouraging the exploration of AI's potential in academia.

Can ChatGPT solve a Linguistics Exam?
https://arxiv.org/ftp/arxiv/papers/2311/2311.02499.pdf

If you're keeping track of the exams that ChatGPT can pass, add linguistics exams to the list. These researchers from the universities of Zurich and Dortmund concluded that yes, ChatGPT can pass them, saying: "Overall, ChatGPT reaches human-level competence and performance without any specific training for the task and has performed similarly to the student cohort of that year on a first-year linguistics exam". (Bonus points for testing its understanding of a text about Luke Skywalker and unmapped galaxies.)

And I've left the most important research paper to last:

Math Education with Large Language Models: Peril or Promise?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641653

Researchers at the University of Toronto and Microsoft Research have published the first large-scale, pre-registered controlled experiment using GPT-4, looking at maths education. It studied the use of large language models as personal tutors. In the experiment's learning phase, they gave participants practice problems and manipulated two key factors in a between-participants design: first, whether participants were required to attempt a problem before or after seeing the correct answer, and second, whether they were shown only the answer or were also exposed to an LLM-generated explanation of the answer. They then tested participants on new questions to assess how well they had learned the underlying concepts. Overall, they found that LLM-based explanations positively impacted learning relative to seeing only correct answers. The benefits were largest for those who attempted problems on their own before consulting LLM explanations, but surprisingly this trend held even for participants who were exposed to LLM explanations before attempting practice problems on their own.
Participants said they learned more when they were given explanations, and thought the subsequent test was easier. Using standard GPT-4, the researchers got a 1-3 standard deviation improvement; using a customised GPT, they got a 1.5-4 standard deviation improvement. In the tests, that was basically the difference between a 50% score and a 75% score. And the really nice bonus in the paper is that they shared the prompts they used to customise the LLM. This is the one paper, out of everything I've read in the last two months, that I'd recommend everybody listening should read.

News on Gen AI in Education

About 1 in 5 U.S. teens who've heard of ChatGPT have used it for schoolwork
https://policycommons.net/artifacts/8245911/about-1-in-5-us/9162789/

Research from the Pew Research Center in America says 13% of all US teens have used ChatGPT in their schoolwork - a quarter of all 11th and 12th graders, dropping to 12% of 7th and 8th graders. This is American data, but it's pretty likely to be similar everywhere.

The UK government has published two research reports this week. Their Generative AI call for evidence had over 560 responses from all around the education system, and is informing future UK policy design.
https://www.gov.uk/government/calls-for-evidence/generative-artificial-intelligence-in-education-call-for-evidence

One data point right at the end of the report was that 78% of people said they, or their institution, used generative AI in an educational setting. Two-thirds of respondents reported a positive result or impact from using genAI. The rest were divided between "too early to tell", a mix of positive and negative, and some negative - mainly around cheating by students and low-quality outputs. GenAI is being used by educators for creating personalised teaching resources, and for assisting in lesson planning and administrative tasks.
One director of teaching and learning said "[It] makes lesson planning quick with lots of great ideas for teaching and learning". Teachers report genAI as a time-saver and an enhancer of teaching effectiveness, with benefits also extending to student engagement and inclusivity. One high school principal said "Massive positive impacts already. It marked coursework that would typically take 8-13 hours in 30 minutes (and gave feedback to students)." Predominant uses include automating marking, providing feedback, and supporting students with special needs and English as an additional language. The goal for many teachers is to free up more time for high-impact instruction.

Respondents reported six broad challenges that they had experienced in adopting genAI:
• User knowledge and skills - the major one - people feeling the need for more help to use genAI effectively
• Performance of tools - including making stuff up
• Workplace awareness and attitudes
• Data protection adherence
• Managing student use
• Access

The report also highlights common worries, mainly around AI's tendency to generate false or unreliable information. For history, English and language teachers especially, this could be problematic when AI is used for assessment and grading.

There are three case studies at the end of the report: a college using genAI for online formative assessment with real-time feedback; a high school using it for creating differentiated lesson resources; and a group of 57 schools using it in their learning management system.

The Technology in Schools survey

The UK government also ran the Technology in Schools survey, which gives them information about how schools in England specifically are set up for using technology, and will help them make policy to level the playing field on the use of tech in education - which also brings up equity when using new tech like genAI.
https://www.gov.uk/government/publications/technology-in-schools-survey-report-2022-to-2023

This is actually a lot of very technical stuff about computer infrastructure, but the interesting table I saw was Figure 2.7, which asked teachers which sources they most valued when choosing which technology to use. The list, in order of preference, was:
• Other teachers
• Other schools
• Research bodies
• Leading practitioners (the edu-influencers?)
• Leadership
• In-house evaluations
• Social media
• Education sector publications/websites
• Network, IT or business managers
• Their academy trust

My take is that the thing that really matters is what other teachers think - but they don't find that out from social media, magazines or websites. And only 1 in 5 schools have an evaluation plan for monitoring the effectiveness of technology.

Australian uni students are warming to ChatGPT. But they want more clarity on how to use it
https://theconversation.com/australian-uni-students-are-warming-to-chatgpt-but-they-want-more-clarity-on-how-to-use-it-218429

And in Australia, two researchers - Jemma Skeat from Deakin University and Natasha Ziebell from Melbourne University - published feedback from surveys of university students and academics. They found that in the period June-November this year, 82% of students were using generative AI, with 25% using it in the context of university learning, and 28% using it for assessments. One third of first-semester students agreed generative AI would help them learn, but by second semester that had jumped to two thirds. There's a real divide that shows up between students and academics: in first semester 2023, 63% of students said they understood its limitations - like hallucinations - rising to 88% by semester two. But for academics it was just 14% in semester one, and barely more - 16% - in semester two. And only 22% of students consider using genAI in assessment as cheating now, compared to 72% in the first semester of this year!
But both academics and students wanted clarity on the rules - a theme I've seen across lots of research, and heard from students. The semester one report is published here:
https://education.unimelb.edu.au/__data/assets/pdf_file/0010/4677040/Generative-AI-research-report-Ziebell-Skeat.pdf

Published 20 minutes before we recorded the podcast, so more to come in a future episode: the AI framework for Australian schools was released this morning.
https://www.education.gov.au/schooling/announcements/australian-framework-generative-artificial-intelligence-ai-schools

The Framework supports all people connected with school education, including school leaders, teachers, support staff, service providers, parents, guardians, students and policy makers. It is based on six guiding principles:
• Teaching and Learning
• Human and Social Wellbeing
• Transparency
• Fairness
• Accountability
• Privacy, Security and Safety

The Framework will be implemented from Term 1 2024. Trials consistent with these six guiding principles are already underway across jurisdictions. A key concern for Education Ministers is ensuring the protection of student privacy. As part of implementing the Framework, Ministers have committed $1 million for Education Services Australia to update existing privacy and security principles, to ensure students and others using generative AI technology in schools have their privacy and data protected. The Framework was developed by the National AI in Schools Taskforce, with representatives from the Commonwealth, all jurisdictions, school sectors, and all national education agencies - Education Services Australia (ESA), the Australian Curriculum, Assessment and Reporting Authority (ACARA), the Australian Institute for Teaching and School Leadership (AITSL), and the Australian Education Research Organisation (AERO).
Nov 24, 2023 • 32min

Am-AI-zing Educator Interviews from Sydney's AI in Education Conference

This episode is one to listen to and treasure - and certainly bookmark to share with colleagues now and in the future. No matter where you are on your journey with using generative AI in education, there's something in this episode for you to apply in the classroom, or in leading others in the use of AI. There are many people to thank for making this episode possible, including the extraordinary guests:

Matt Esterman - Director of Innovation & Partnerships at Our Lady of Mercy College Parramatta. An educational leader who's making things happen with AI in education in Australia, Matt created and ran the conference where these interviews happened. He emphasises the importance of passionate educators coming together to improve education for students, and shares his main takeaways from the conference and the need to rethink educational practices for the success of students. Follow Matt on Twitter and LinkedIn

Roshan Da Silva - Dean of Digital Learning and Innovation at The King's School - shares his experience of using AI in both administration and teaching. He discusses the evolution of AI in education, and how it has advanced from simple question-response interactions to more sophisticated prompts and research assistance. Roshan emphasises the importance of teaching students how to use AI effectively, and proper sourcing of information. Follow Roshan on Twitter

Siobhan James - Teacher Librarian at Epping Boys High School - introduces her journey of exploring AI in education. She shares her personal experimentation with AI tools and services, striving to find innovative ways to engage students and enhance learning. Siobhan shares her excitement about the potential of AI beyond traditional written subjects, and its application in other areas. Follow Siobhan on LinkedIn

Mark Liddell - Head of Learning and Innovation at St Luke's Grammar School - highlights the importance of supporting teachers on their AI journey. He explains the need to differentiate learning opportunities for teachers, and to address their fears and misconceptions. Mark shares his insights on personalised education, assessment, and the role AI can play in enhancing both. Follow Mark on Twitter and LinkedIn

Anthony England - Director of Innovative Learning Technologies at Pymble Ladies College - discusses his extensive experimentation with AI in education. He emphasises the need to challenge traditional assessments, and to embrace AI's ability to provide valuable feedback and support students' growth and mastery. Anthony also explains the importance of inspiring curiosity and passion in students, rather than focusing solely on grades. And we're not sure which is our favourite quote from the interviews, but Anthony's "Haters gonna hate, cheaters gonna cheat" is up there with his "Pushing students into beige". Follow Anthony on Twitter and LinkedIn

Special thanks to Jo Dunbar and the team at Western Sydney University's Education Knowledge Network, who hosted the conference and provided Dan and me with a special space to create our temporary podcast studio for the day.
Nov 19, 2023 • 27min

Rapid Rundown - Another gigantic news week for AI in Education

Rapid Rundown - Series 7 Episode 3. All the key news since our episode on 6th November - including new research on AI in education, and a big tech news week!

It's okay to write research papers with generative AI - but not to review them! The publishing arm of the American Association for the Advancement of Science (they publish six science journals, including "Science") says authors can use "AI-assisted technologies as components of their research study or as aids in the writing or presentation of the manuscript", as long as their use is noted. But they've banned AI-generated images and other multimedia "without explicit permission from the editors". And they won't allow the use of AI by reviewers, because this "could breach the confidentiality of the manuscript". A number of other publishers have made announcements recently, including the International Committee of Medical Journal Editors, the World Association of Medical Editors and the Council of Science Editors.
https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models

Learning From Mistakes Makes LLM Better Reasoner
https://arxiv.org/abs/2310.20689
News article: https://venturebeat.com/ai/microsoft-unveils-lema-a-revolutionary-ai-learning-method-mirroring-human-problem-solving

Researchers from Microsoft Research Asia, Peking University, and Xi'an Jiaotong University have developed a new technique to improve large language models' (LLMs') ability to solve math problems by having them learn from their mistakes, akin to how humans learn. The strategy, Learning from Mistakes (LeMa), trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week. The researchers first had models like LLaMA-2 generate flawed reasoning paths for math word problems. GPT-4 then identified errors in the reasoning, explained them, and provided corrected reasoning paths. The researchers used the corrected data to further train the original models.

Role of AI chatbots in education: systematic literature review
International Journal of Educational Technology in Higher Education
https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-023-00426-1#Sec8

This looks at chatbots from the perspective of students and educators, and the benefits and concerns raised in the 67 research papers they studied: "We found that students primarily gain from AI-powered chatbots in three key areas: homework and study assistance, a personalized learning experience, and the development of various skills. For educators, the main advantages are the time-saving assistance and improved pedagogy. However, our research also emphasizes significant challenges and critical factors that educators need to handle diligently. These include concerns related to AI applications such as reliability, accuracy, and ethical considerations." It's also a fantastic list of references for papers discussing chatbots in education, many from this year.

More Robots are Coming: Large Multimodal Models (ChatGPT) can Solve Visually Diverse Images of Parsons Problems
https://arxiv.org/abs/2311.04926
https://arxiv.org/pdf/2311.04926.pdf

Parsons problems are a type of programming puzzle where learners are given jumbled code snippets and must arrange them in the correct logical sequence, rather than producing the code from scratch. "While some scholars have advocated for the integration of visual problems as a safeguard against the capabilities of language models, new multimodal language models now have vision and language capabilities that may allow them to analyze and solve visual problems. … Our results show that GPT-4V solved 96.7% of these visual problems." The research's findings have significant implications for computing education.
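If you haven't met Parsons problems before, here's a small hypothetical example of the kind of puzzle involved (my own illustration in Python, not one of the paper's visual problems): the learner receives the shuffled lines and must reorder them into a working function.

```python
# A tiny Parsons problem: these lines are presented shuffled, and the
# learner must arrange them into a correct function definition.
shuffled_snippets = [
    "            total += n",
    "def sum_evens(numbers):",
    "    return total",
    "    total = 0",
    "        if n % 2 == 0:",
    "    for n in numbers:",
]

# One correct ordering, as indices into shuffled_snippets.
correct_order = [1, 3, 5, 4, 0, 2]
solution = "\n".join(shuffled_snippets[i] for i in correct_order)

# Executing the reassembled code confirms the ordering is valid.
namespace = {}
exec(solution, namespace)
print(namespace["sum_evens"]([1, 2, 3, 4]))  # prints 6 (2 + 4)
```

The indentation carried by each snippet is part of the puzzle's scaffolding: it hints at nesting without giving away the order.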
The high success rate of GPT-4V in solving visually diverse Parsons problems suggests that relying solely on visual complexity in coding assignments might not effectively challenge students, or assess their true understanding, in the era of advanced AI tools. This raises questions about the effectiveness of traditional assessment methods in programming education, and the need for innovative approaches that can more accurately evaluate a student's coding skills and understanding. It's interesting to note that some research earlier in the year found that LLMs could only solve half of these problems - so things have moved very fast!

The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4
https://arxiv.org/pdf/2311.07361.pdf

By Microsoft Research and Microsoft Azure Quantum researchers: "Our preliminary exploration indicates that GPT-4 exhibits promising potential for a variety of scientific applications, demonstrating its aptitude for handling complex problem-solving and knowledge integration tasks". The study explores the impact of GPT-4 in advancing scientific discovery across various domains, investigating its use in drug discovery, biology, computational chemistry, materials design, and solving partial differential equations (PDEs). It primarily uses qualitative assessments, and some quantitative measures, to evaluate GPT-4's understanding of complex scientific concepts and problem-solving abilities. While GPT-4 shows remarkable potential and understanding in these areas, particularly in drug discovery and biology, it faces limitations in precise calculations and processing complex data formats. The research underscores GPT-4's strengths in integrating knowledge, predicting properties, and aiding interdisciplinary research.
An Interdisciplinary Outlook on Large Language Models for Scientific Research
https://arxiv.org/abs/2311.04929

Overall, this paper presents LLMs as powerful tools that can significantly enhance scientific research. They offer the promise of faster, more efficient research processes, but this comes with the responsibility to use them well and critically, ensuring the integrity and ethical standards of scientific inquiry. It discusses how they are being used effectively in eight areas of science, and deals with issues like hallucinations - but, as it points out, even in engineering, where there's low tolerance for mistakes, GPT-4 can pass critical exams. This research is a good source of focus for researchers thinking about how LLMs may help or change their research areas, and help with scientific communication and collaboration.

With ChatGPT, do we have to rewrite our learning objectives -- CASE study in Cybersecurity
https://arxiv.org/abs/2311.06261

This paper examines how AI tools like ChatGPT can change the way cybersecurity is taught in universities. It uses a method called "Understanding by Design" to look at learning objectives in cybersecurity courses. The study suggests that ChatGPT can help students achieve these objectives more quickly and understand complex concepts better. However, it also raises questions about how much students should rely on AI tools. The paper argues that while AI can assist in learning, it's crucial for students to understand fundamental concepts from the ground up. It provides examples of how ChatGPT could be integrated into a cybersecurity curriculum, proposing a balance between traditional learning and AI-assisted education.

"We hypothesize that ChatGPT will allow us to accelerate some of our existing LOs, given the tool's capabilities… From this exercise, we have learned two things in particular that we believe will need to be further examined by all educators. First, our experiences with ChatGPT suggest that the tool can provide a powerful means to allow learners to generate pieces of their work quickly…. Second, we will need to consider how to teach concepts that need to be experienced from "first-principle" learning approaches and learn how to motivate students to perform some rudimentary exercises that "the tool" can easily do for me."

A Step Closer to Comprehensive Answers: Constrained Multi-Stage Question Decomposition with Large Language Models
https://arxiv.org/abs/2311.07491

What this means is that AI is continuing to get better - and people are finding ways to make it even better - at passing exams and multiple-choice questions.

Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study
https://arxiv.org/abs/2311.07387

Good news for me though - I still have a skill that can't be replaced by a robot. AI might be great at playing Go, and chess, and seemingly everything else, BUT it turns out it can't play Minesweeper as well as a person. So my leisure time is safe!

DEMASQ: Unmasking the ChatGPT Wordsmith
https://arxiv.org/abs/2311.05019

Finally, I'll mention this research, where the authors propose a new method of ChatGPT detection that assesses the 'energy' of the writing. It might be a step forward, but tbh it took me a while to find the thing I'm always looking for with detectors: the false positive rate - i.e. how many students in a class of 100 it will accuse of using ChatGPT when they actually wrote the work themselves. The answer is a 4% false positive rate on research abstracts published on arXiv - though apparently it's 100% accurate on Reddit. Not sure that's really good enough for education use, where students are more likely to be using academic style than Reddit style!
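To put that false positive rate into classroom terms, here's a quick back-of-the-envelope calculation (my own illustration, using only the 4% figure quoted above):

```python
# With a 4% false-positive rate, in a class of 100 students who all
# wrote their own work, how many get wrongly flagged as using ChatGPT?
false_positive_rate = 0.04
class_size = 100

expected_wrongly_flagged = false_positive_rate * class_size
print(expected_wrongly_flagged)  # 4.0 students, on average

# And the chance that at least one innocent student gets flagged:
at_least_one = 1 - (1 - false_positive_rate) ** class_size
print(f"{at_least_one:.1%}")  # roughly 98% - a near-certainty
```

In other words, at that error rate a teacher running a whole class through the detector should expect false accusations almost every time.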
I'll leave you to read the research if you want to know more, and to learn about the battle between AI writers and AI detectors.

Harvard's AI Pedagogy Project

Outside of research, it's worth taking a look at work from the metaLAB at Harvard called "Creative and critical engagement with AI in education". It's a collection of assignments and materials, inspired by the humanities, for educators curious about how AI affects their students and their syllabi. It includes an AI starter, an LLM tutorial, lots of resources, and a set of assignments.
https://aipedagogy.org/

Microsoft Ignite Book of News

There's way too much to fit into the shownotes, so head straight to the Book of News for all the huge AI announcements from Microsoft's big developer conference.
Link: Microsoft Ignite 2023 Book of News
Nov 10, 2023 • 16min

Rapid Rundown : A summary of the week of AI in education and research

This week's rapid rundown of AI in education includes topics such as false AI-generated allegations, UK DfE guidance on generative AI, Claude's undetectable AI writing, the contrast between old and new worlds of AI, Open AI's exciting announcements, specialization and research bots, GPT4 updates, and gender bias in AI education.
Nov 1, 2023 • 29min

Regeneration: Human Centred Educational AI

After 72 episodes and six series, we've got some exciting news: the AI in Education podcast is returning to its roots, with the original co-hosts Dan Bowen and Ray Fleming. Dan and Ray started this podcast over four years ago, and during that time Dan's always been here, rotating through co-hosts Ray, Beth and Lee - and now we're back to the original dynamic duo, and a reset of the conversation. Without doubt, 2023 has been the year that AI hit the mainstream, so it's time to expand our thinking right out.

Also, new series alert! We're starting Series 7. In this episode, Dan and Ray discuss the rapid advancements in AI and the impact on various industries. They explore the concept of generative AI and its implications. The conversation shifts to the challenges and opportunities of implementing AI in business and education settings. The hosts highlight the importance of a human-centred approach to AI, and the need for a mindset shift in organisations. They also touch on topics such as bias in AI, the role of AI in education, and the potential benefits and risks of AI technology. Throughout the discussion, they emphasise the need for continuous learning, collaboration, and understanding of AI across different industries.
Sep 19, 2023 • 35min

Dr Nick Jackson - Student agency: Into my 'AI'rms

Dr Nick Jackson, expert educator, and leader of student agency, talks about AI's impact on assessment, the importance of AI in education, the intersection of AI and music, and the power of technology in young students' hands.
