AI in Education Podcast

Dan Bowen and Ray Fleming
Dec 21, 2023 • 50min

Joe Dale - the ultimate Christmas AI gift list

Our final episode for 2023 is an absolutely fabulous Christmas gift, full of presents in the form of different AI tips and services. Joe Dale, a UK-based education ICT & Modern Foreign Languages consultant, spends 50 lovely minutes sharing a huge list of AI tools for teachers and ideas for how to get the most out of AI in learning.

We strongly recommend you find and follow Joe on LinkedIn or Twitter, and if you're a language teacher, join Joe's Language Teaching with AI Facebook group.

Joe's also got an upcoming webinar series on using ChatGPT for language teachers: Resource Creation with ChatGPT, on Mondays at 10.00, 19.00 and 21.30 GMT (UTC) in January - the 8th, 15th, 22nd and 29th of January 2024. Good news - 21:30 GMT is 8:30 AM and 10:00 GMT is 9 PM in Sydney/Melbourne, so there are two times that work for Australia. And if you can't attend live, you get access to the recordings and all the prompts and guides that Joe shares on the webinars.

There was a plethora of AI tools and resources mentioned in this episode:

ChatGPT: https://chat.openai.com
DALL-E: https://openai.com/dall-e-2
Voice Dictation in MS Word Online: https://support.microsoft.com/en-au/office/dictate-your-documents-in-word-3876e05f-3fcc-418f-b8ab-db7ce0d11d3c
Transcripts in Word Online: https://support.microsoft.com/en-us/office/transcribe-your-recordings-7fc2efec-245e-45f0-b053-2a97531ecf57
AudioPen: https://audiopen.ai
'Live Titles' in Apple Clips: https://www.apple.com/uk/clips
Scribble Diffusion: https://www.scribblediffusion.com
Wheel of Names: https://wheelofnames.com
Blockade Labs: https://blockadelabs.com
Momento360: https://momento360.com
Book Creator: https://app.bookcreator.com
Bing Chat: https://www.bing.com/chat
Voice Control for ChatGPT: https://chrome.google.com/webstore/detail/voice-control-for-chatgpt/eollffkcakegifhacjnlnegohfdlidhn
Joe Dale's Language Teaching with AI Facebook group: https://www.facebook.com/groups/1364632430787941
TalkPal for Education: https://talkpal.ai/talkpal-for-education
Pi: https://pi.ai/talk
ChatGPT and Azure: https://azure.microsoft.com/en-us/blog/chatgpt-is-now-available-in-azure-openai-service
Google Earth: https://www.google.com/earth
Questionwell: https://www.questionwell.org
MagicSchool: https://www.magicschool.ai
Eduaide: https://www.eduaide.ai
'I can't draw' in Padlet: https://padlet.com
Dec 14, 2023 • 38min

Revolutionising Classrooms: Inside the New Australian AI Frameworks with their Creators

In this podcast, Andrew Smith from ESA and AI guru Leon Furze discuss the new Australian AI Frameworks. They explore topics such as privacy, ethics, and transparency, while emphasizing the importance of respecting teachers' professional judgment. The podcast also delves into the purpose and evolution of the framework, the development process of the Vine network's practical framework, and the potential of multimodal technologies and generative AI. They encourage teachers to explore and experiment with AI technologies like chatbots and image generation platforms.
Dec 6, 2023 • 22min

Matt Esterman at the AI in Education Conference

Matt Esterman is Director of Innovation & Partnerships, and a history teacher, at Our Lady of Mercy College Parramatta. An educational leader who's making things happen with AI in education in Australia, Matt created and ran the AI in Education conference in Sydney in November 2023, where this interview with Dan and Ray was recorded.

Part of Matt's role is to help his school on the journey to adopting and using generative AI. As an example, he spent time understanding the UNESCO AI Framework for education, and relating it to his own school. One of the interesting perspectives from Matt is the response to students using ChatGPT to write assignments and assessments - and the advice for teachers within his school on how to handle this well with them (which didn't involve changing their assessment policy!):

"And so we didn't have to change our assessment policy. We didn't have to change our ICT acceptable use policy. We just apply the rules that should work no matter what. And just for the record, like I said, 99 percent of the students did the right thing anyway."

This interview is full of common sense advice, and it's reassuring to hear the perspective of a leader, and a school, that might be ahead on the journey. Follow Matt on Twitter and LinkedIn.
Dec 1, 2023 • 22min

Another Rapid Rundown - news and research on AI in Education

Academic Research

Researchers Use GPT-4 To Generate Feedback on Scientific Manuscripts
https://hai.stanford.edu/news/researchers-use-gpt-4-generate-feedback-scientific-manuscripts
https://arxiv.org/abs/2310.01783

Two episodes ago I shared the news that for some major scientific publications, it's okay to write papers with ChatGPT, but not to review them. But… combining a large language model and open-source peer-reviewed scientific papers, researchers at Stanford built a tool they hope can help other researchers polish and strengthen their drafts. Scientific research has a peer problem: there simply aren't enough qualified peer reviewers to review all the studies. This is a particular challenge for young researchers and those at less well-known institutions, who often lack access to experienced mentors who can provide timely feedback. Moreover, many scientific studies get "desk rejected" - summarily denied without peer review. James Zou and his research colleagues tested GPT-4 against human reviews on 4,800 real Nature and ICLR papers. They found AI reviewers overlap with human ones as much as humans overlap with each other; plus, 57% of authors found the AI feedback helpful, and 83% said it beat at least one of their real human reviewers.

Academic Writing with GPT-3.5 (ChatGPT): Reflections on Practices, Efficacy and Transparency
https://dl.acm.org/doi/pdf/10.1145/3616961.3616992

Oz Buruk, from Tampere University in Finland, published a paper giving some really solid advice (and sharing his prompts) for getting ChatGPT to help with academic writing. He uncovered 6 roles:

Chunk Stylist
Bullet-to-Paragraph
Talk Textualizer
Research Buddy
Polisher
Rephraser

He includes examples of the results, and the prompts he used for each.
Handy for people who want to use ChatGPT to help them with their writing, without having to resort to trickery.

Considerations for Adapting Higher Education Technology Course for AI Large Language Models: A Critical Review of the Impact of ChatGPT
https://www.sciencedirect.com/journal/machine-learning-with-applications/articles-in-press

This is a journal pre-proof from the Elsevier journal "Machine Learning with Applications", and takes a look at how ChatGPT might impact assessment in higher education. Unfortunately it's an example of how academic publishing can't keep up with the rate of technology change: the four academics from the University of Prince Mugrin who wrote it submitted it on 31 May, and it was only accepted into the journal in November - and guess what? Almost everything in the paper has changed. They spent 13 of the 24 pages detailing exactly which assessment questions ChatGPT 3 got right or wrong - but when I re-tested it on some sample questions, it got nearly all of them correct. They then tested AI detectors - and we both know that's since changed again, with the advice that none of them work. And finally they checked to see if 15 top universities had AI policies. It's interesting research, but to be honest it would have been much, much more useful in May than it is now. And that's a warning about some of the research we're seeing: you need to check carefully whether the conclusions are still valid - e.g. if they don't tell you what version of OpenAI's models they've tested, then the conclusions may not be worth much.
It's a bit like the logic we apply to students: "They've not mastered it… yet".

A SWOT (Strengths, Weaknesses, Opportunities, and Threats) Analysis of ChatGPT in the Medical Literature: Concise Review
https://www.jmir.org/2023/1/e49368/

They looked at 160 papers published on PubMed in the first 3 months of ChatGPT, up to the end of March 2023 - and the paper was written in May 2023, and only just published in the Journal of Medical Internet Research. I'm pretty sure that many of the results are out of date - for example, it specifically lists unsuitable uses for ChatGPT including "writing scientific papers with references, composing resumes, or writing speeches", and that's definitely no longer the case.

Emerging Research and Policy Themes on Academic Integrity in the Age of ChatGPT and Generative AI
https://ajue.uitm.edu.my/wp-content/uploads/2023/11/12-Maria.pdf

This paper, from a group of researchers in the Philippines, was written in August. It referenced 37 papers, and then looked at the AI policies of the top 20 QS Rankings universities, especially around academic integrity and AI. All of this helped the researchers create a 3E Model: Enforcing academic integrity, Educating faculty and students about the responsible use of AI, and Encouraging the exploration of AI's potential in academia.

Can ChatGPT solve a Linguistics Exam?
https://arxiv.org/ftp/arxiv/papers/2311/2311.02499.pdf

If you're keeping track of the exams that ChatGPT can pass, add linguistics exams to the list. These researchers from the universities of Zurich and Dortmund came to the conclusion that, yes, ChatGPT can pass the exams, and said "Overall, ChatGPT reaches human-level competence and performance without any specific training for the task and has performed similarly to the student cohort of that year on a first-year linguistics exam". (Bonus points for testing its understanding of a text about Luke Skywalker and unmapped galaxies.)

And I've left the most important research paper to last:

Math Education with Large Language Models: Peril or Promise?
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641653

Researchers at the University of Toronto and Microsoft Research have published the first large-scale, pre-registered controlled experiment using GPT-4, looking at maths education. It studied the use of Large Language Models as personal tutors. In the experiment's learning phase, they gave participants practice problems and manipulated two key factors in a between-participants design: first, whether participants were required to attempt a problem before or after seeing the correct answer, and second, whether participants were shown only the answer or were also given an LLM-generated explanation of the answer. They then tested participants on new questions to assess how well they had learned the underlying concepts. Overall, they found that LLM-based explanations positively impacted learning relative to seeing only correct answers. The benefits were largest for those who attempted problems on their own before consulting LLM explanations, but surprisingly this trend held even for participants who saw LLM explanations before attempting practice problems on their own.
People said they learned more when they were given explanations, and thought the subsequent test was easier. Using standard GPT-4 they got a 1-3 standard deviation improvement, and using a customised GPT a 1.5-4 standard deviation improvement. In the tests, that was basically the difference between a 50% score and a 75% score. And the really nice bonus in the paper is that they shared the prompts they used to customise the LLM. This is the one paper, out of everything I've read in the last two months, that I'd recommend everybody listening to read.

News on Gen AI in Education

About 1 in 5 U.S. teens who've heard of ChatGPT have used it for schoolwork
https://policycommons.net/artifacts/8245911/about-1-in-5-us/9162789/

Research from the Pew Research Center in America says 13% of all US teens have used ChatGPT in their schoolwork - a quarter of all 11th and 12th graders, dropping to 12% of 7th and 8th graders. This is American data, but it's pretty likely to be the case everywhere.

The UK government has published 2 research reports this week. Their Generative AI call for evidence had over 560 responses from all around the education system and is informing future UK policy design.
https://www.gov.uk/government/calls-for-evidence/generative-artificial-intelligence-in-education-call-for-evidence

One data point right at the end of the report was that 78% of people said they, or their institution, used generative AI in an educational setting. Two-thirds of respondents reported a positive result or impact from using genAI. The rest were divided between 'too early to tell', a mix of positive and negative, and some negative - mainly around cheating by students and low-quality outputs. GenAI is being used by educators for creating personalised teaching resources and assisting in lesson planning and administrative tasks.
One Director of Teaching and Learning said "[It] makes lesson planning quick with lots of great ideas for teaching and learning". Teachers report GenAI as a time-saver and an enhancer of teaching effectiveness, with benefits also extending to student engagement and inclusivity. One high school principal said "Massive positive impacts already. It marked coursework that would typically take 8-13 hours in 30 minutes (and gave feedback to students)." Predominant uses include automating marking, providing feedback, and supporting students with special needs and English as an additional language. The goal for most teachers is to free up more time for high-impact instruction.

Respondents reported six broad challenges that they had experienced in adopting GenAI:

• User knowledge and skills - this was the major one - people feeling the need for more help to use GenAI effectively
• Performance of tools - including making stuff up
• Workplace awareness and attitudes
• Data protection adherence
• Managing student use
• Access

However, the report also highlights common worries - mainly around AI's tendency to generate false or unreliable information. For History, English and language teachers especially, this could be problematic when AI is used for assessment and grading.

There are three case studies at the end of the report: a college using it for online formative assessment with real-time feedback; a high school using it for creating differentiated lesson resources; and a group of 57 schools using it in their learning management system.

The Technology in Schools survey

The UK government also ran the Technology in Schools survey, which gives them information about how schools in England specifically are set up for using technology, and will help them make policy to level the playing field on use of tech in education - which also brings up equity when using new tech like GenAI.
https://www.gov.uk/government/publications/technology-in-schools-survey-report-2022-to-2023

This is mostly very technical detail about computer infrastructure, but the interesting table I saw was Figure 2.7, which asked teachers which sources they most valued when choosing which technology to use. The list, in order of preference, was:

Other teachers
Other schools
Research bodies
Leading practitioners (the edu-influencers?)
Leadership
In-house evaluations
Social media
Education sector publications/websites
Network, IT or Business Managers
Their Academy Trust

My take is that the thing that really matters is what other teachers think - but they don't find out from social media, magazines or websites. And only 1 in 5 schools have an evaluation plan for monitoring the effectiveness of technology.

Australian uni students are warming to ChatGPT. But they want more clarity on how to use it
https://theconversation.com/australian-uni-students-are-warming-to-chatgpt-but-they-want-more-clarity-on-how-to-use-it-218429

And in Australia, two researchers - Jemma Skeat from Deakin Uni and Natasha Ziebell from Melbourne Uni - published feedback from surveys of university students and academics. They found that in the period June-November this year, 82% of students were using generative AI, with 25% using it in the context of university learning, and 28% using it for assessments. One third of first-semester students agreed generative AI would help them learn, but by second semester that had jumped to two-thirds. There's a real divide between students and academics: in first semester 2023, 63% of students said they understood its limitations - like hallucinations - rising to 88% by semester two. But among academics, it was just 14% in semester one, and barely more - 16% - in semester two. And 22% of students now consider using genAI in assessment as cheating, compared to 72% in the first semester of this year!!
But both academics and students wanted clarity on the rules - this is a theme I've seen across lots of research, and heard from students. The semester one report is published here:
https://education.unimelb.edu.au/__data/assets/pdf_file/0010/4677040/Generative-AI-research-report-Ziebell-Skeat.pdf

Published 20 minutes before we recorded the podcast, so more to come in a future episode: the AI framework for Australian schools was released this morning.
https://www.education.gov.au/schooling/announcements/australian-framework-generative-artificial-intelligence-ai-schools

The Framework supports all people connected with school education, including school leaders, teachers, support staff, service providers, parents, guardians, students and policy makers. The Framework is based on 6 guiding principles:

Teaching and Learning
Human and Social Wellbeing
Transparency
Fairness
Accountability
Privacy, Security and Safety

The Framework will be implemented from Term 1 2024. Trials consistent with these 6 guiding principles are already underway across jurisdictions. A key concern for Education Ministers is ensuring the protection of student privacy. As part of implementing the Framework, Ministers have committed $1 million for Education Services Australia to update existing privacy and security principles to ensure students and others using generative AI technology in schools have their privacy and data protected. The Framework was developed by the National AI in Schools Taskforce, with representatives from the Commonwealth, all jurisdictions, school sectors, and all national education agencies - Education Services Australia (ESA), the Australian Curriculum, Assessment and Reporting Authority (ACARA), the Australian Institute for Teaching and School Leadership (AITSL), and the Australian Education Research Organisation (AERO).

________________________________________

TRANSCRIPT

For this episode of The AI in Education Podcast
Series: 7 Episode: 5
This transcript was auto-generated.
If you spot any important errors, do feel free to email the podcast hosts for corrections.

Hi, welcome to the AI in Education Podcast. How are you? Ray, I am great. Dan, do you know what? Another amazing two weeks of news. I can't keep up. Can you? You know, there's so much research and news that's happening. Even this morning, we've had the release of the AI framework, which we touch on a little bit later. I know, another one. Oh my word. Okay. Well, look, compared to the world of news, the world of academic research has been moving a bit slower. Thank goodness. There have been again another 200 papers produced in the last few weeks. So, hey Dan, can I do the usual and run you down my top 20 of the interesting research that I've read? So, a really interesting one about generating feedback on scientific manuscripts. You remember, Dan, I said that publications were allowing researchers to write papers now with ChatGPT, but they weren't allowed to review them. Yes. Another bunch of researchers did the research, and what they did was they built a tool that reviewed papers to help researchers polish and finalize their final drafts. And the answer was it was really useful, especially for young researchers who can't get professional reviewers to review their manuscripts and their papers - really useful for them to be able to get feedback. The interesting thing is they asked the researchers, did this AI help you to produce a better paper? 57% said they found the feedback helpful, and 83% - so that's four in five, Dan, we'll come back to four in five - said that it beat at least one of the real human reviewers that they had look at their papers. So that's really interesting. The second bit of research was about using ChatGPT to help with academic writing. So that isn't help in the sense of getting it to rewrite things for you - or sorry, write things for you originally.
The kind of help that I mean, which is: help, I've got a blank page - being able to find roles for ChatGPT to help make writing more effective. Really interesting research, because he talked about six key roles. A chunk stylist - you know, help me rewrite this bit of it. A bullet-to-paragraph stylist - here's the five bullet points, now turn this into text. A talk textualizer, a research buddy, a polisher, and a rephraser. All really useful. He includes the examples and he includes the prompts. So, if you want it to do that kind of stuff, that paper is really good for that. There's one with a really long-winded title called Considerations for Adapting Higher Education Technology Course for AI Large Language Models: A Critical Review of the Impact of ChatGPT. Now, this has come out as a pre-proof. It's a really good example, unfortunately, of how academic publishing cannot keep up with the rate of technology change, because these four academics from the University of Prince Mugrin wrote it and submitted it on the 31st of May, and it has only just been accepted into the journal Machine Learning with Applications, right? Yeah. So, they spent 13 of the 24 pages detailing assessment questions and which ones ChatGPT got right or wrong. Now, I retested it on some of those sample questions and it got them nearly all correct. Now, the other thing they did, they tested AI detectors. What do we both know about AI detectors, Dan? No, they don't work. Yeah. But one thing that I thought was useful is they looked across the top 15 universities to see if they had AI policies. I'd actually say that research is a warning about some of the research we're seeing. You really need to check carefully if the conclusions it made are still valid. Like, did they test it with the current OpenAI model or did they use a previous model? I think the way to think about it when people evaluate whether AI can do something is not yes they can or no they can't.
I actually think it's like we do with students: not mastered it yet. That's kind of my feeling. Some other papers had a similar challenge with delayed publishing: a strengths and weaknesses analysis of ChatGPT in the medical literature. Probably useful because they looked at 160 papers. So if you want to know about ChatGPT and medicine, then there's 160 papers linked in it, but a lot of the results are out of date. There was some work done around academic integrity in the age of ChatGPT and generative AI. The paper was written in August, so not completely out of date. It references 37 papers, but probably the interesting thing is they looked at the top 20 QS ranking universities for their policies around academic integrity and AI, and they created a nice simple model they called the three E model. They said that when it comes to generative AI and academic integrity, think about three Es: enforcing integrity, educating faculty and students about responsible use, and encouraging the exploration. I think that's really good. Enforcing? Yes. Educating? Yes. Encouraging the exploration? Yeah. Absolutely. Yes. So, are you keeping track, Dan, of the exam papers that ChatGPT has passed? I was talking to a customer yesterday, actually. Yes. It can be a financial analyst. It can be a stockbroker. It can pass the medical tests. Everybody talks about the bar exam. Yeah. So, the latest thing, according to the research, is it's now a linguist. An expert linguist. So, these researchers from universities in Zurich and Dortmund came to the conclusion that yes, ChatGPT can pass exams in linguistics, and their conclusion is: overall, ChatGPT reaches human-level competence and performance without any specific training for the task and has performed similarly to the student cohort of that year on a first-year linguistics exam. Correct. And I'm going to give the researchers bonus points. A lot of the research is very dry and inaccessible.
But this one, they were testing understanding of a text about Luke Skywalker and unmapped galaxies. Fun for you, Dan. Okay. So, I left the most important research paper to last. The paper is called Math Education with Large Language Models: Peril or Promise. So, from that, Dan, you know it comes from which country? Math. Math. America. The US, of course. Exactly. So, actually it comes from Canada. Yeah. You're looking ahead at the notes. The research is from the University of Toronto and Microsoft Research. Microsoft Research - I remember when I used to work with them. It's a bunch of people that are academic researchers. They just happen to work for Microsoft, but they do some amazing blue-sky research, and this is the largest bit of research I think so far - a large-scale, pre-registered controlled experiment using GPT-4, looking at it in the context of maths education. So basically, yeah, they were looking at: can a large language model be a personal tutor? And they did some proper A/B testing to understand, if we dealt with students this way and this way and this way, what are the differences between them. So some students were not given any help from a tutor, some students were given help from an AI tutor after they had tackled some of the challenges, and some were given help from an AI tutor before they tackled the challenges. And then what they did was they gave all of these students another test. And the really interesting thing is they got a one to three standard deviation improvement in the test results using just standard GPT-4, and then they tried a customized GPT-4 and they got one and a half to four standard deviation improvements. So in test results, basically, the students were getting 50% before they got help from the AI tutor and 75% after that. There was a message in there for me.
A lot of people talk about, oh, we've got to fine-tune the large language models and we've got to have a special flavor of it, but we can actually get huge leaps using the everyday one that is on your phone and mine and on our laptops. It's a really good paper. I think of all the papers we've talked about over the last few months, it's the one that I would recommend people read. So it's called Math Education with Large Language Models: Peril or Promise, and at the end of it they share all of the prompts that they use. So if you want to do math tutoring, or maths tutoring, then this is the paper to read. Go and steal the prompts. It's a really excellent paper. That's it for the research, Dan. Yeah, that's phenomenal. There's some really good stuff there. I'm really passionate about the math side of this and I need to unpick that one, because obviously, with the way the prompts and the ChatGPT kind of tools are now handling images and some of the maths equations, as an ex-maths teacher at high school I'm really interested to see how some of the image recognition is working as well, and where the kind of blurred line is between maths formulas and prompting for other differentiated purposes and stuff. So that's going to be an exciting one to read. But there's great things there, and I like the way that a lot of these researchers are giving advice at the end of the research - like those three Es sound good, and the roles that you mentioned about the chunking analyst or whatever else. They really sound as if they come to life, adding a bit more - things that we can do and actions at the end of the research, rather than, you know, 76% of people are improved by using ChatGPT. So what else has been in the news on generative AI in education? Have you seen anything else? So I saw some research from the US that said one in five US teens have used ChatGPT for schoolwork, and it went up by the time you got to 11th and 12th grade.
It was a higher proportion than lower down, but it's American data and it's slightly old. One in five have used it. I think that might actually be a lot higher now. But let me tell you one thing. Yes. Can I suggest it's another nail in the coffin for AI detectors? Because if you're not detecting that one in five of your bits of work is generated by ChatGPT, then it's another example of why the AI detectors don't work. Yeah, absolutely. And that's a great data point. That's one thing that's come out. But I suppose the things that I looked at recently - last week or this week, that's how quickly these podcast episodes are being published now - this week, the UK government published two research reports. One about generative AI and one about technology in schools. And I know you've got an interesting point on the technology in schools one, but the generative AI one was the call for evidence that happened. They had about 560 responses from all around the education system in the UK, and it's informing the future policy design there. We put the link in the show notes, but there were a couple of interesting data points in there. One data point right at the end of the report was that about 78% of people said they or their institution use generative AI in an education setting. So the usage is high there. There were some really good qualitative points as well that were picked up, talking about lesson planning and the fact that it was making lesson planning really quick. One of the directors of teaching and learning mentioned that, and they were thinking about idea generation for teaching and learning and rejigging lessons. One high school principal said in the call for evidence that there was a massive impact already in his school, and it marked coursework that would typically take 8 to 13 hours in 30 minutes and gave feedback to students.
So there were a lot of use cases appearing in the report. They were talking about automated marking, providing feedback, supporting students with special educational needs and EAL. So there were some really good things that it qualitatively brought out, even though it was lacking a little bit of quantitative detail. There were some really good responses that talked through some of the benefits, but it also picked up some broad challenges around skills and user understanding, which is one of the major things - people felt they needed to know more about using AI and prompt engineering. The performance of the tools was picked up too - obviously teachers really worry about hallucinations inside the generative AI world. There was a big discussion around attitudes within the workplace, which is quite interesting because it starts to push on the administrative use of these tools and technologies in schools, rather than just in the classroom. Obviously the classic things around access to the tools, and then managing student use of them, and then data protection. So a really, really interesting report. Yeah, I thought it was interesting that a concern they were picking up already was the risk to students and teachers if you become overly reliant on generative AI - do your fundamental skills go down? And I think about that a lot in the context of: we could and should be using generative AI to save time, especially for teachers who are under this incredible time pressure. But we need to make sure that we're not time-saving on the things that make a difference. So, if I think about the planning process, there are steps in the planning process that are really important because they force you to think about things, and then there are steps that are really dull and don't have value, like writing up the notes afterwards. Yeah, very true.
The second report, which was released at the same time, and I don't know if they did it purposely, but it was about, and we've been here before, the UK government doing a technology in schools survey. So it gives updated information about how schools in England specifically are set up for using technology. I suppose it is useful to give context when people are using generative AI and other technologies, but it was another report that landed. Have you got any thoughts on that, Dan? I remember 20 years ago when I was at RM I would have been all over a report like this, because it was, you know, let's count how many network switches are in schools and whether they're managed, and let's count the laptops. There is some stuff in there that is useful, but too much of it is about counting bits and bytes, and even the part that talked about schools having a strategy document: the strategy wasn't about teaching and learning. The strategy was about how do you manage your wires and cables and stuff like that. You're so true.
So to summarise those two things from the UK: generative AI is being used by early adopters, who say they're saving time and have come up with really interesting uses, but there are risks to manage, and there's huge optimism from educators generally. And then on schools and their technology, they're increasingly getting strategic about their tech use, but there's still a way to go before there's a kind of minimum tech standard happening in the UK. But now I'm going to take it to one thing that I spotted linked across both reports. In the generative AI report, teachers are worried about big tech. Now, you won't want to talk about this, Dan, because you're part of the big tech world, so I will. They're worried that big tech might exercise undue power, things like misaligned incentives, like it's all about the money and can we sweep up all of this data, versus the incentives for people, students and teachers, to be able to learn more. That was interesting because, having been in the world of big tech, I don't see the world like that: the incentive isn't all about money and power and sweeping up student data. Often the discussions about more data are about how we can get more data in order to provide more value back to learners. But let me jump across to the technology report. One of the things they asked teachers was: where do you get input that you value for choosing technologies? And the top answers were other teachers, other schools, research bodies, and leading practitioners, which I think means the edu influencers on Twitter and LinkedIn. Big tech was not in the list. I imagine if they'd been asked, they might have put big tech down at the bottom of the list, but the other thing they actually put at the bottom of the list was their own leadership. So they didn't tend to look to leadership within the education system; they tended to look to their peers for good advice. Wow, that's a really interesting insight.
I saw a bit of research about Australian university students and how they're using ChatGPT. A really nice bit of research done down in Melbourne by Jemma from Deakin and Natasha Ziebell from Melbourne Uni. They had done a survey of university students in semester 1 and semester 2. By the end of semester 2, 82%, hey Dan, there's that four in five again, were using generative AI, with 25% using it in the context of university learning and 28% using it for assessment. Wow, that's great. Okay, now if that's the number of students that are using it, what about academics? What they found was that just 14% of academics were using it in semester 1 and 16% in semester 2. So something like a quarter of the academics are using it compared to the students. Can I ask a question about that? Just generally, what's the split of casual academics versus full-time academics in unis? I know that's a sort of wide question, but are there statistics on that? I'm just wondering, because they've got a lot of casual academics in the system, unlike a school, whether a lot of casual academics might not get as much professional development. They might be subject knowledge experts, like accountants teaching on accounting courses. I'm just wondering what PD they get. Yeah, great theory, Dan, and my theory was going in the other direction. So about two-thirds of academics, I think, from the research I've seen over the years, are casual. Now here's my supposition: the casual academics use AI more, and the reason I'm saying that is because they are out in the commercial world two to three days a week and then teaching at university a couple of days a week. I remember that's what the pattern was for my daughter's courses. And so I'd be willing to bet that they're more likely to be using it in their professional practice and then bringing it into their academic practice, rather than missing out on some academic training.
We'll have to find somebody who might have some actual data rather than theories on that. It's interesting. And then finally, I suppose, something that landed on our desks this morning, the 1st of December, and I think this is worth unpacking. I know we've looked at some of the drafts already, but the AI framework for schools in Australia was released today, and there are some really interesting guiding principles in there: teaching and learning, human and social wellbeing, transparency, fairness, accountability and privacy. And this is where it was really interesting: when we were looking at AI detectors in the past, and as I was looking through some of the policies here, the transparency and accountability elements were quite evident, which is really nice to see, and it also puts a lot of responsibility on schools as well. If you are going to be using AI detectors and picking up Ray's geography homework for possible uses of generative AI, then you have a responsibility to let people challenge that, and for the student to say, well, hey, I used generative AI in this particular way. So I think it's a very mature document. I need to look at the devil in the detail around this in the next couple of hours today, but I'm glad it's been released. Okay. I've not seen the final version, so I'll go and have a read of it. We'll put the link in the show notes, but in a couple of weeks' time, let's find 10 or 15 minutes to talk through it, and let's see if we can find somebody smarter than both you and I put together to talk about it. Let's see if we can find somebody involved in the drafting of it. And yeah, let's dig down deeper into it. But, as you say, there's some really specific advice and guidance for schools about what they need to do about responsible AI usage. Yeah, can't wait. Well, what a week, Dan. Can we have the news just slow down? Because we aim to do this in 20 minutes every week, and here we are.
I think we've overrun again. So, next week, Dan, the podcast is another interview from the AI in Education Conference. We've got the longer interview with Matt Esterman on the podcast. Back in two weeks' time with more news. See you soon. Bye, Dan. Brilliant. That research was phenomenal. Holy smokes.
Nov 24, 2023 • 32min

Am-AI-zing Educator Interviews from Sydney's AI in Education Conference

This episode is one to listen to and treasure - and certainly bookmark to share with colleagues now and in the future. No matter where you are on your journey with using generative AI in education, there's something in this episode for you to apply in the classroom or leading others in the use of AI. There are many people to thank for making this episode possible, including the extraordinary guests: Matt Esterman - Director of Innovation & Partnerships at Our Lady of Mercy College Parramatta. An educational leader who's making things happen with AI in education in Australia, Matt created and ran the conference where these interviews happened. He emphasises the importance of passionate educators coming together to improve education for students. He shares his main takeaways from the conference and the need to rethink educational practices for the success of students. Follow Matt on Twitter and LinkedIn Roshan Da Silva - Dean of Digital Learning and Innovation at The King's School - shares his experience of using AI in both administration and teaching. He discusses the evolution of AI in education and how it has advanced from simple question-response interactions to more sophisticated prompts and research assistance. Roshan emphasises the importance of teaching students how to use AI effectively and proper sourcing of information. Follow Roshan on Twitter Siobhan James - Teacher Librarian at Epping Boys High School - introduces her journey of exploring AI in education. She shares her personal experimentation with AI tools and services, striving to find innovative ways to engage students and enhance learning. Siobhan shares her excitement about the potential of AI beyond traditional written subjects and its application in other areas. Follow Siobhan on LinkedIn Mark Liddell - Head of Learning and Innovation from St Luke's Grammar School - highlights the importance of supporting teachers on their AI journey. 
He explains the need to differentiate learning opportunities for teachers and address their fears and misconceptions. Mark shares his insights on personalised education, assessment, and the role AI can play in enhancing both. Follow Mark on Twitter and LinkedIn Anthony England - Director of Innovative Learning Technologies at Pymble Ladies College - discusses his extensive experimentation with AI in education. He emphasises the need to challenge traditional assessments and embrace AI's ability to provide valuable feedback and support students' growth and mastery. Anthony also explains the importance of inspiring curiosity and passion in students, rather than focusing solely on grades. And we're not sure which is our favourite quote from the interviews, but Anthony's "Haters gonna hate, cheaters gonna cheat" is up there with his "Pushing students into beige" Follow Anthony on Twitter and LinkedIn Special thanks to Jo Dunbar and the team at Western Sydney University's Education Knowledge Network who hosted the conference, and provided Dan and I with a special space to create our temporary podcast studio for the day ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 7 Episode: 4 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Welcome to the AI in Education podcast. Now, we've got something pretty special for you over the next three episodes, because we're going to hear from a group of really smart people. So, that's not you and I, Dan. There was a great group of people at the AI in Education conference that was run and created by Matt Esterman and hosted by the education team at Western Sydney University. It was such a high-energy event. It had about 130 teachers and education leaders spending all day hearing about and talking about the pedagogical aspects of AI in education. So they talked about the educational implications.
They didn't really spend that much time talking about all the twists and turns in technology that we were talking about in last week's podcast. It was more about what would happen in the classroom. Yeah, absolutely. So I think our listeners should settle in, because we're going to bring a lot of the series together in some shorter interviews from different schools in this episode. Next week we'll have a longer interview with the brains behind the conference, Mr Matt Esterman himself. But this week we're going to hear from several of the speakers from K-12, and the week after, I think, Ray, we're going to put some bits together around higher education as well. We've got a lot to get through today. Let's crack on. Yeah, lots of voices to hear from schools. So we've got Anthony England from Pymble Ladies College, Siobhan James from Epping Boys High School, Roshan Da Silva from The King's School, and Mark Liddell from St Luke's. But Dan, we should start with the man of the moment, the conference convenor extraordinaire, Matt Esterman from Our Lady of Mercy College. Hi Matt, welcome to the podcast today. How are you doing? Yeah, really well, thanks. Really well. What an amazing event. It's been so good, and we've had so many great people from different areas representing different parts of the sector. What were your main takeaways? Because you were in a lot of the panels and everything. Yeah. Oh, look, my main takeaway is that when you get a bunch of interested, passionate people together, you can talk about pretty much anything, go anywhere, but everyone's here for the same purpose, which is to just do better by our kids. And just because today was themed on AI, I think a lot of the conversation came back to success for students and working with young people and rethinking what we're doing for them. So, yeah, I walked away really inspired. It's been great. Did you learn anything new? I'm sure there's lots of things, right? Yeah.
I guess perspectives, like what we talked about earlier, you know, that idea of hearing from people in, say, primary or higher education and what they're thinking about, because I guess my echo chamber is secondary education, and Australian secondary education at that. So it was really good to hear from other states and from those other sectors as well. Yeah. I thought it was also interesting picking up on the variation in adoption. So we had some people that were all in, like you. And then other people that, even though this is an AI in education conference, didn't really use it that much. It was very new to them and they were in sponge mode; they were just absorbing what other people were doing. And then in some of the conversations, people were very cautious about what their colleagues might think. You know, they were happy using ChatGPT to do things, but they knew that their colleagues wouldn't be. And that kind of human aspect of the change came out a lot for me, and I think a lot of people worried about things, perhaps about perceptions rather than reality as well. People worried about what their boss might think, or worried about what systems might think, whereas actually they haven't thought about it yet. And whether people are using it on their personal devices to test things out and really explore, but then in their work world having to be very, very cautious, or applying a self-perception that they have to be cautious, whereas actually if they turned around to the person next to them and said, hey, here's something I tried, they'd have a really cool conversation about that. So it's interesting. Yeah. And also I picked up on people saying, "Oh, wouldn't it be good if the department did this for us?" So, for things like report writing, there was obviously a demand to have their lives made easier, but looking above them for organisations to take that pain away.
But I don't think we've ever had something that was as individually driven as this, because most learning technologies, or technologies in schools, are given to you, right? Like, you're given a device, you're given a room that has particular equipment, or you're given particular software to apply. With this, you can be at any school, in any context, doing any job, and pick up your own smartphone and just try some stuff that relates to that job, but nobody at work needs to know about it. So it's a totally different environment we're working in. Yeah. Well, this has been phenomenal today, and this podcast episode is a testament to your, you know, tenacity in bringing all these people together, because if we really want to change, I do think Australia has the opportunity to do it. As I was saying to you earlier, I think we're big enough that we can make a difference and an impact, but small enough, and agile enough, and smart enough, that we can actually do it. Wow, Dan. Small enough, agile enough, smart enough. Hey Dan, two out of three isn't bad. Matt's extraordinarily wise, isn't he, Dan? I saw that at the conference, and there's a lot more to come in the full-length interview next week. But who else did we speak to? Well, let me introduce you next to Roshan Da Silva from The King's School. He started to use AI in his work from both an admin side and a teaching side. And like Matt, he's from the humanities; he's a history teacher. So let's roll the VT. Hi Roshan from The King's School. How are you doing today? I'm very good, thanks Dan. Thank you very much. So with your kind of background in education and technology, this AI, what are your thoughts on it? Look, it was quite exciting when it first came about, because obviously this process has actually been around a long time, but it allowed people like an ordinary classroom teacher and a student, for example, to put in some information and get some information back.
At the start it was a bit fraught, I guess, in the education sense, because obviously students were using it for a number of different reasons, mostly just to put in a question and get an answer back that they would submit as their own work. I think staff and students have become a lot cleverer in the use of it, especially in terms of writing prompts now, and I think a lot of schools are teaching students how to write proper prompts so they can actually get a much better response in return, and then take that information and edit it for their own use. Have you had to do lots of staff training around that? Is there a bit of a gap appearing? Look, there is a huge gap, and I think teachers are still a little bit concerned about the use of AI. I still think there's this whole idea of catching students out, and I don't think that's what we should be looking at. It's actually helping them use the tool, like when a computer or a calculator came out a long time ago. Yeah, we've been using those tools for a long time, so this is just the next step. Yeah. Well, what excites you most? Have you used any of the tools out there? What kind of ones capture your imagination the most, or have you seen teachers using in the classroom? Yeah, I can split that answer into two, actually. So, probably in my area, where we're writing a lot of policies and huge documentation, we're actually reading large amounts of PDFs that are sometimes 195 pages long. Yeah. So that takes time, right? The whole speed and processing power of the AI tools in summarising information for us quickly, and then letting us reuse that information in a meaningful way, has saved us a lot of time, which has made us a lot more productive in terms of writing policies and procedures. I'm a history teacher by trade, so we would now start the process of saying to students, let's put this question in and see what the AI spits out. Yeah.
And let's look at the sources that we can use to either prove or disprove some of this information. And that's interesting from a history point of view. I know Matt Esterman, he's a history teacher by trade, and a geography teacher, and he's done quite a lot of interesting work on historical references and uses of generative technology, which is quite cool. But then also, on the other side of it, I think science and history are among the subjects that do a lot with their own sources. Correct. So what would your thoughts be, then, if students are using work from ChatGPT and other technology similar to that using AI? What's the best way to source things like that? Look, I think sourcing is a difficult thing, right? I think what AI is actually doing is giving students a starting point. So we all know that not every student is equal in terms of their understanding and the way they're able to formulate an answer. Yeah. So I would say the whole AI process is providing students who aren't as gifted or talented with an opportunity to start the whole process at a point, and that's making it a lot easier for boys and girls, I guess, to formulate an answer, because they're being provided with a prompt, and then they can go away and check that prompt. That's linking into their research skills, and not just in history; that's linked into any subject, and then proving or disproving those answers. So I think the whole idea that AI is a tool for cheating has now moved on, because we're now realising that actually it's speeding up our whole work process, students and staff at the same time. Yeah, absolutely. That's such a thoughtful and mature way to be thinking about this. I hope lots of teachers are listening in to that, because that's a wonderful way to think about it. Thanks Roshan. Thank you for having me. Take care. Wow. It's been interesting to hear the perspectives of people who are on the journey and doing their experimentation. Now, it's not people like you and I, Dan. Phew.
We live and breathe technology and AI every day. But I love hearing from people like Matt and Roshan, who are primarily teachers, and they're taking the pragmatic approach of: this is just another tool in my teaching toolkit. Yeah. And like every tool, you have to learn how to use it. So next we've got Siobhan from Epping Boys. She's going to talk about a voyage of discovery with AI, and the different apps and services that she's tried in the classroom and with her teachers. It's really fascinating to listen to what she's learned on the way. Hi Siobhan, how are you? Good. How are you, Dan? I'm very well, thanks. So you're from Epping Boys High School? I started last year. Awesome. So I was teaching, and then I moved to the teacher librarian role this year. You do so many things in that role. What's kind of capturing your attention at the minute around this generative AI conversation? Probably the apprehension, mostly: how a lot of people feel that it's this new scary thing, versus the actual potential of what it can do. Are you getting any PD or professional development around this area at all, or have you just got to find it all yourself? Find it all yourself. Basically, this has been like a little pet project of mine since the end of last year, when ChatGPT kind of started to be released to the public. Yeah. What kind of things have you been doing with it in your school? Well, I've been playing around to see; I'm trying to test it to its limits. So, for me, if it can't write a whole response, I'll see how well it can mimic my own writing style. Wow. So I've been having fun playing around with that. Yeah. And following a lot of different social media platforms to see how they've been testing and playing with the technology. So, some people have been using it to write what's called VBA code to craft PowerPoint presentations with pre-filled information, playing around with your voice.
There are a few different AIs which you can train for about 10 minutes, and then they can record dialogue in your style. Yeah. And in your voice. Wow. That's fantastic, isn't it? Yeah. The technology is moving very, very quickly. Yeah. Are any of the teachers in your school utilising anything specifically? Have you got any examples of what some of the teachers have been doing? Not really. It's mainly been me playing around with it and then having casual conversations with other teachers to see what they want to use it for. Yeah. But basically, next year there's going to be a bigger push for how we're going to deal with AI as a whole school, especially with the department. I believe it was announced recently that they're going to stop blocking ChatGPT on school servers, because other schools have it freely available but department schools don't. Yeah. Okay, that's interesting. And are there any other tools that you might have been using in the classroom and in school which you've been playing with that have been quite fun? Have you done any image stuff? I've been playing around with the image stuff myself, just for fun, and I've been using it mainly for my own practice. Yeah. I've had conversations with students doing individual research projects about how to best utilise it and be aware of the limitations that AI has, but I haven't had the chance to execute it officially in full school lessons yet, because we're still trying to learn how we're going to deal with it as a school, as well as in general. I find with teacher librarians, especially in New South Wales where I reside, there's a lot of innovation that comes out of the teacher librarian side. When STEM became popular, teacher librarians were running the STEM programs in schools. So it's great to see that you're bringing that generative AI stuff together.
Is there something that you're looking forward to learning more about? Are there areas that you're interested in? Probably seeing how it's applied in other subjects, because a lot of the time most people think, with AI, oh, it's specifically for the written subjects. It's usually, oh, how are we using it in the English classroom, the history classroom. I'm curious to see how it can be used for other subject areas that may not necessarily be based in writing. Yes, because writing is the biggest obvious one. But of course, with Midjourney and DALL-E and all those visual AI tools, it'd be interesting to see how students could adapt and use those kinds of image technology within their own individual projects. Yeah, definitely. I've seen some great stuff in some schools I've been in recently, where even some of the English teachers have been showing images and getting kids to reverse engineer the prompts in English to try to recreate the image. It's been fantastic. So thanks again, Siobhan. Thank you. That was fascinating. I love the freedom to experiment that Siobhan obviously feels; that's something I talk about all the time. I ask the question, have you used it? What tasks have you given it? And then I talk about the wide range of tasks that I've used it for, because let's be honest, we don't really know what it's capable of, do we? No. True. And that's what I found interesting when you were talking with Mark Liddell from St Luke's Grammar School. I listened to that, and we went from learning about the technology to learning about the application of it with teachers, and there are some really, really good insights in this interview about the different approaches and different staff attitudes. Hi Mark, how are you? Hi Dan, great to see you today. Yeah, you too. Thanks for joining us on this podcast episode. We're at the AI in Education event, obviously. What have you found this morning so far? It's been wonderful.
So it's been great to be able to hear some different perspectives on how it is that AI can have an impact, what's been happening so far, and then looking at where we are headed. What are some of the ways that we can help our teachers and our students to get prepared for these next steps with AI? Yeah. And what are you currently doing at the minute at your school? So, right now we're just asking lots of questions. We have been able to provide some opt-in AI professional learning sessions for our teachers. We've been able to develop a student learning continuum and a teacher learning continuum, and now we're looking at our next few years of being able to say, right, in what ways do we want to differentiate the way that we'll support teachers? We've also got innovators within the school who we're working closely with. So right now I'm working with this really wonderful maths teacher, and he's asking the question: how is it that I can help to improve the behaviour, the effort and the meaning of each lesson for my students? And it turns out the way he's been solving those different problems is by having AI write code that's developed a dashboard for him, bringing together effort, academic progress and disposition reflection. So it's just been really fun to go on that journey with this particular teacher. And isn't that interesting from a technology perspective? I know you've got a rich history of technology and innovation in your background, but one thing that's jumped out to me is there are a couple of maths lecturers here today. Mathematics education has been something that technology has often left behind, while other subjects picked it up more. So that's fantastic to hear the maths department has picked this up. Yeah. Well, it's a matter of saying we've got our maths curriculum, right? And we're doing quadratic equations. Okay, we can't really shift a whole lot within that if we're learning quadratics. However, we get to set this up for our learners.
We get to provide the landscape of how it's going to apply. We also get to describe to students what success looks like. And we would want to say that success doesn't just look like "I can understand quadratic equations and solve them". We also want to articulate and say, wait a second, your ability to persevere, your ability to develop reasoning, your ability to collaborate, all of these things are part of that success criteria. So if all you can do by the end of the lesson is solve for two values of x, we haven't done our job properly. There's a whole lot more to regular classroom learning than just ticking that curriculum box. That's phenomenal, isn't it? Did you learn anything specific from today? I know one of the quotes that jumped out to me was around the beehive. Yeah, right. So that quote that you're referring to was just saying that the purpose of the beehive is not to produce honey. It's that these bees are constructing this healthy hive, and it turns out a byproduct of that is the honey. And so I guess the thing that's jumped out for me in today's session is that we've all come together around AI, but really most of the conversation has been about HI, which is human intelligence, or EI, like emotional intelligence, that will actually go alongside that AI work. So I feel as though we shouldn't really be just coming together to discuss AI. We should be looking at the whole person, the whole student, and saying, right, we already know all of these things are required for our students to be successful in the long run, and now we've just got this additional tool. I feel that for some experienced teachers, for some teachers that aren't into that change, just having that conversation always about AI is actually going to repel them. And if we start using more holistic language, where we are talking about the relational growth of our students, we are talking about the growth in reasoning of our students.
And then alongside that, we've got these other tools that help to refine some of the learning process for our students. I think that's going to allow for better buy-in from some of those different teachers, who will be able to say, wait a second, I can support my student with their academic learning, and also with the tools that will support them with revision that might connect with AI. That's such a good way to look at it. When you reflect yourself on where things are going with AI now, what do you see coming next which you're excited about? And then also, how are you managing that digital divide in your school? Is everybody on the same page, or are varying degrees of staff PD happening? Sure. So where are we headed next? Firstly, we're headed into a very exciting time, because when staff are equipped to understand and use AI well, it does amplify the progress of our students. It helps them to be able to see, just like we've had with other technologies, how is it that I can harness this for good use? So I'm reading and I'm listening and I'm getting involved with lots of different conversations, which is helping me to know that whatever those different problems are, we're going to solve them alongside students to make school a better place. So that's number one. Number two, how is it that we best help our teachers? Well, in the same way that we differentiate learning for our students, we have to do the same for our teachers. And just in the last week, some of the different conversations, some of the opportunities, some of the fears that our teachers have: there's lots of misinformation, there's lots of misunderstanding, because it's just moving at 190 km per second. A lot of people will say, "Right, I'm jumping on board. This is going to be the ride of my life." Other people are like, "I'm not even going to the station. If you tell me something now, it's going to be out of date, and then I'm going to have to listen again."
So, I think we've got to empathize well and say, what is going to provide that entry step for this group of teachers? What's going to provide that comfort and that kind of handholding for a different group of teachers? Because we can't just put our heads in the sand and think this is going away, because that is not going to happen. We need to be able to make sure that all of our different teachers can see the possibilities. They can see the need, and then we need to be able to care for them well and provide those learning and development opportunities that will help them to take their next steps. Well, that's fantastic. Thanks for joining us today on the podcast, Mark from St. Luke's. Really appreciate it. Thanks so much, Dan. That was fab. Hearing that really unlocked good ideas for moving from experimentation to actually supporting teachers on their journey. Let's be honest, between Matt, Rashan, Siobhan, and Mark, we've had some amazing insights to help other school, let's call them middle leaders, the people that are leading this and navigating the complexity and the ambiguity. So, how do we bring it home, Dan? Who have you saved for last? Oh, we've got Anthony England from PLC. Let's roll the VT. Welcome, Anthony England, to the podcast studio. Absolutely. It's a good looking studio. It is, isn't it? How have you enjoyed today? Yeah. Good. Look, I love talking AI. I actually think I like cruising the uncertain edge, and I think that's what AI is. Yeah, absolutely. And I was really interested in the questions you posed in the panel discussion, because it really teased out some of the key components with some of the panelists and the audience, in terms of the things that you were doing with your school. I saw a LinkedIn post the other day where you'd built a personalized tutor. You're really pushing the boundaries at the minute, right? Yeah. Look, I think, a bit like COVID, there is no best practice with the experiment.
And so I'm one to experiment and happy to fail, but certainly happy to kick the tires to see what works, what doesn't, and then if I love something, I will tell people. So I'm going to flip the question. What hasn't worked is the old assessments. Wow. Because in a world where, typically, what we do at the moment is get a student to make something to prove that they've mastered the content, that mode no longer lets us say with certainty that they were the maker. I love that AI is pushing assessments to a different place. I love that if you know what you want to say, then you've got this savant assistant that, with clear purpose, you can get to produce something that you're happy with. And it's so polite at taking feedback: "I'm so sorry, of course 2 plus 2 isn't 5, what was I thinking as an AI." It's so good, isn't it? How about the image generation side of those things, have you seen a lot of use of that? Look, interesting. I reckon I've learned more about art and art criticism given an AI tool than ever before from any art theory or museum that I went to, because I know what I love, but I need to now know how to describe what I love to a machine, so we can create something that I value. So I've given it tasks to provide feedback on user interface design. I've given it tasks to generate logos and images that give meaning to some concept I'm trying to convey. I've had it, you know, remove backgrounds and generate new bits into images really quickly. I feel more artistic than I have since I was a 5-year-old. You know, you don't find a 5-year-old that doesn't feel like they're an artist. But as a 50-year-old, you don't find many people who feel like they're artists. And for the first time in ages, I feel like, hey, I'm kind of being creative visually. And for me, that is exciting, playful. Yeah. And I love it when teachers are playing with these things, because it just boggles my mind.
I saw somebody the other day, where they were asking teachers, you know, what have you done? A literacy teacher had created an image with quite a complex prompt, and in the class it was about the kids trying to copy that image with descriptive writing. Beautiful. Ah, it was a beautiful task. Yeah, I can't even describe how good it was. And sometimes you see some of those lessons and you think, wow. And the ideas the teachers come up with just keep getting better and better and better. Another thing I think is working well is feedback. I think the nirvana of personalized education that's catered for you as a learner was a bridge too far. But AI makes it possible. I have got ChatGPT-4. So I would upload a rubric, upload a handwritten piece of work, ask it to generate some comments about how it's gone according to the rubric. Then I got it to suggest an improvement on what it would change. Then I got it to critique the original and the revised version, and identify what elements it changed and why. It could justify that, paragraph by paragraph. It was providing amazing feedback to a student who's wanting to improve their writing. Yeah, absolutely. It's that Sal Khan two sigma problem that they talked about earlier on as well, isn't it? The two things that have always been sort of out of the grasp of teachers have been personalization and changing assessment, and I think we're on a cusp of being able to change those. Absolutely. If you've got a grade as the motivator, you're creating an assessment that's asking to be gamed. Yes. Because what's the outcome? The best thing is an A, and that's what they hope for. But if it's about some intrinsically valuable thing that they want to improve and grow in, when you speak to students, they don't want to cheat that process. They want to improve themselves. And so it actually nudges assessments to look at what's intrinsically valuable. Yeah. To the learner, not just grade hunting.
One thing you mentioned earlier on which I'd never heard before, and I thought it was phenomenal, was where you talked about pushing students into beige. Mhm. And that's really made me think today about the top-end students and the negative elements, that if you are chasing that grade we could bring down the top end. Do you want to explain to us a bit about that beige? Yeah. Yeah. So the idea is that you're going to compress to the middle. The obvious one is that the lower-end student is going to submit work that's better than they naturally might have done without AI. Yeah. And so they've gone further to the middle. On the other end, the threat is, will AI, say with its image creation, with its amazing ability to generate pretty impressive output, will it make those top-end students go, well, could I be bothered? It's a lot of hard work to get the skills required to be able to produce this. So, eh, and so they don't bother. They'll lower their effort, and then lower their growth, in the face of AI. And so the threat is, will students then all compress to the middle, and everyone just becomes beige? Yeah. And I think the missing piece in that question is the joy of mastery. When you find something that you love, cooking the perfect steak, making the perfect loaf of bread, painting that sunset, nobody wants to cheat that joy. In fact, there's a whole games industry about those micro moments of the joy of mastery: yes, I failed last time, but I got it this time. And that's an addictive gaming strategy, the gamification of things, those micro moments of joy, the dopamine hit that I'm on the right path, I've made that next level. That is what we're forgetting. People don't go beige. People want to improve. And you talk to students today: cheaters gonna cheat. Taylor Swift would say that, you know, haters gonna hate, cheaters gonna cheat. But there are other students who don't want to cheat. They want to be their best self.
They're worried because they don't want to be seen as short-changing their own growth. Yes. And so they're worried: if I use this, am I diminishing myself? And nobody wants to do that. And I think that's gold. I really do. I think you're definitely on to something there. I mentioned something in the panel earlier on, where they were talking generally about AI and how it was going to impact your brain function, and my analogy was using Google Maps: you forget where everything is, you just follow the sat nav these days, and you don't even know where the cricket ground is, which you've been to 20 times this term, but you can't remember where it is. So you get a cognitive amputation, as Travis Smith from Microsoft calls it. But if you are intrinsically interested and want to master a particular thing, then that doesn't even come into play. But if there's something that's boring for the kids, it's about maybe the teachers lighting that fire and bringing learning opportunities that the students really want to do. So the mastery is there. Yeah, every teacher wants to light the spark of curiosity, of finding a passion. No one decides, hey, I want to be a teacher because I want to help achieve a minimum standard, tick a box, I want to be able to give people Bs and As. They want to see people grow. And so lighting that spark, that's what teachers want to do. That's why we got into the game: to inspire the future generation to be their best self. It's inspiring to see them grow and be better. That's what teachers love. Absolutely. Well, thank you so much for joining us today. Absolutely. Lovely to chat with you, Dan. Cheers. Bye. Well, Ray, that was another excellent interview to end the podcast today. You know, I find that Anthony, with some of his thoughts around education, thinks so deeply about the implications for his staff and, more importantly, the students, and the way that this technology moves. He's always thinking ahead of the game, like the Wayne Gretzky quote about playing where the puck is going to be. Just absolutely phenomenal. And his thoughts on art, and the way he's developed his own thinking around this technology, just blow my mind. This podcast is going to be the one I keep coming back to and referring other people to, to get into the understanding of what people are doing and what might happen in the future. Yes, it's just, I mean, just amazing voices that we got to hear there. Absolutely. And I'm really looking forward to the next episode, where we'll look at some of these in a little bit more detail and have some extended interviews with some of the characters you heard from today. Yeah, next time round it's a longer interview with Matt Esterman, just understanding in a bit more detail what he's doing, and then beyond that we've got some interviews we did with people from higher education that will be in an episode in the future. Thanks to everybody who took the time out to be on our podcast during the event a couple of weeks ago. Thanks Dan. See you next week. Bye.
Nov 19, 2023 • 27min

Rapid Rundown - Another gigantic news week for AI in Education

Rapid Rundown - Series 7 Episode 3 All the key news since our episode on 6th November - including new research on AI in education, and a big tech news week! It's okay to write research papers with Generative AI - but not to review them! The publishing arm of the American Association for the Advancement of Science (they publish 6 science journals, including the "Science" journal) says authors can use "AI-assisted technologies as components of their research study or as aids in the writing or presentation of the manuscript" as long as their use is noted. But they've banned AI-generated images and other multimedia "without explicit permission from the editors". And they won't allow the use of AI by reviewers, because this "could breach the confidentiality of the manuscript". A number of other publishers have made announcements recently, including the International Committee of Medical Journal Editors, the World Association of Medical Editors and the Council of Science Editors. https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models Learning From Mistakes Makes LLM Better Reasoner https://arxiv.org/abs/2310.20689 News Article: https://venturebeat.com/ai/microsoft-unveils-lema-a-revolutionary-ai-learning-method-mirroring-human-problem-solving Researchers from Microsoft Research Asia, Peking University, and Xi'an Jiaotong University have developed a new technique to improve large language models' (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn. The researchers have revealed a pioneering strategy, Learning from Mistakes (LeMa), which trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to a research paper published this week. The researchers first had models like LLaMA-2 generate flawed reasoning paths for math word problems. GPT-4 then identified errors in the reasoning, explained them and provided corrected reasoning paths.
The researchers used the corrected data to further train the original models. Role of AI chatbots in education: systematic literature review International Journal of Educational Technology in Higher Education https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-023-00426-1#Sec8 Looks at chatbots from the perspective of students and educators, and the benefits and concerns raised in the 67 research papers they studied: "We found that students primarily gain from AI-powered chatbots in three key areas: homework and study assistance, a personalized learning experience, and the development of various skills. For educators, the main advantages are the time-saving assistance and improved pedagogy. However, our research also emphasizes significant challenges and critical factors that educators need to handle diligently. These include concerns related to AI applications such as reliability, accuracy, and ethical considerations." Also, a fantastic list of references for papers discussing chatbots in education, many from this year. More Robots are Coming: Large Multimodal Models (ChatGPT) can Solve Visually Diverse Images of Parsons Problems https://arxiv.org/abs/2311.04926 https://arxiv.org/pdf/2311.04926.pdf Parsons problems are a type of programming puzzle where learners are given jumbled code snippets and must arrange them in the correct logical sequence rather than producing the code from scratch. "While some scholars have advocated for the integration of visual problems as a safeguard against the capabilities of language models, new multimodal language models now have vision and language capabilities that may allow them to analyze and solve visual problems. … Our results show that GPT-4V solved 96.7% of these visual problems." The research's findings have significant implications for computing education.
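To make the puzzle format concrete, here's a tiny, hypothetical Parsons problem in Python; the function, snippets, and checker are our own illustration, not from the paper. The learner receives the shuffled lines and must reorder them into a working function.

```python
# A minimal, hypothetical Parsons problem: the learner is shown these
# lines out of order and must arrange them into a working function.

# Shuffled snippets as handed to the learner (indentation preserved):
shuffled = [
    "        total += n",
    "def sum_list(numbers):",
    "    return total",
    "    for n in numbers:",
    "    total = 0",
]

# The correct ordering the learner must reconstruct:
solution = [
    "def sum_list(numbers):",
    "    total = 0",
    "    for n in numbers:",
    "        total += n",
    "    return total",
]

def check(ordering):
    """Run the learner's ordering and test the resulting function."""
    namespace = {}
    exec("\n".join(ordering), namespace)
    return namespace["sum_list"]([1, 2, 3]) == 6

print(check(solution))  # → True
```

The point of the research is that a screenshot of exactly this kind of jumbled listing, once thought AI-resistant because it is visual, is now solvable by multimodal models like GPT-4V almost all of the time.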
The high success rate of GPT-4V in solving visually diverse Parsons Problems suggests that relying solely on visual complexity in coding assignments might not effectively challenge students or assess their true understanding in the era of advanced AI tools. This raises questions about the effectiveness of traditional assessment methods in programming education and the need for innovative approaches that can more accurately evaluate a student's coding skills and understanding. Interesting to note some research earlier in the year found that LLMs could only solve half the problems - so things have moved very fast! The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4 https://arxiv.org/pdf/2311.07361.pdf By Microsoft Research and Microsoft Azure Quantum researchers "Our preliminary exploration indicates that GPT-4 exhibits promising potential for a variety of scientific applications, demonstrating its aptitude for handling complex problem-solving and knowledge integration tasks" The study explores the impact of GPT-4 in advancing scientific discovery across various domains. It investigates its use in drug discovery, biology, computational chemistry, materials design, and solving Partial Differential Equations (PDEs). The study primarily uses qualitative assessments and some quantitative measures to evaluate GPT-4's understanding of complex scientific concepts and problem-solving abilities. While GPT-4 shows remarkable potential and understanding in these areas, particularly in drug discovery and biology, it faces limitations in precise calculations and processing complex data formats. The research underscores GPT-4's strengths in integrating knowledge, predicting properties, and aiding interdisciplinary research. An Interdisciplinary Outlook on Large Language Models for Scientific Research https://arxiv.org/abs/2311.04929 Overall, the paper presents LLMs as powerful tools that can significantly enhance scientific research. 
They offer the promise of faster, more efficient research processes, but this comes with the responsibility to use them well and critically, ensuring the integrity and ethical standards of scientific inquiry. It discusses how they are being used effectively in eight areas of science, and deals with issues like hallucinations - but, as it points out, even in Engineering where there's low tolerance for mistakes, GPT-4 can pass critical exams. This research is a good source of focus for researchers thinking about how it may help or change their research areas, and help with scientific communication and collaboration. With ChatGPT, do we have to rewrite our learning objectives -- CASE study in Cybersecurity https://arxiv.org/abs/2311.06261 This paper examines how AI tools like ChatGPT can change the way cybersecurity is taught in universities. It uses a method called "Understanding by Design" to look at learning objectives in cybersecurity courses. The study suggests that ChatGPT can help students achieve these objectives more quickly and understand complex concepts better. However, it also raises questions about how much students should rely on AI tools. The paper argues that while AI can assist in learning, it's crucial for students to understand fundamental concepts from the ground up. The study provides examples of how ChatGPT could be integrated into a cybersecurity curriculum, proposing a balance between traditional learning and AI-assisted education. "We hypothesize that ChatGPT will allow us to accelerate some of our existing LOs, given the tool's capabilities… From this exercise, we have learned two things in particular that we believe we will need to be further examined by all educators. First, our experiences with ChatGPT suggest that the tool can provide a powerful means to allow learners to generate pieces of their work quickly…. 
Second, we will need to consider how to teach concepts that need to be experienced from "first-principle" learning approaches and learn how to motivate students to perform some rudimentary exercises that "the tool" can easily do for me." A Step Closer to Comprehensive Answers: Constrained Multi-Stage Question Decomposition with Large Language Models https://arxiv.org/abs/2311.07491 What this means is that AI is continuing to get better, and people are finding ways to make it even better, at passing exams and multiple-choice questions. Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study https://arxiv.org/abs/2311.07387 Good news for me though - I still have a skill that can't be replaced by a robot. It seems that AI might be great at playing Go, and Chess, and seemingly everything else. BUT it turns out it can't play Minesweeper as well as a person. So my leisure time is safe! DEMASQ: Unmasking the ChatGPT Wordsmith https://arxiv.org/abs/2311.05019 Finally, I'll mention this research, where the researchers have proposed a new method of ChatGPT detection, where they're assessing the 'energy' of the writing. It might be a step forward, but tbh it took me a while to find the thing I'm always looking for with detectors, which is the False Positive rate - i.e. how many students in a class of 100 it will accuse of writing something with ChatGPT when they actually wrote it themselves. And the answer is it has a 4% false positive rate on research abstracts published on arXiv - but apparently it's 100% accurate on Reddit. Not sure that's really good enough for education use, where students are more likely to be using academic style than Reddit style!
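The false-positive arithmetic above is worth making explicit. Here's a back-of-the-envelope sketch (our numbers for the class sizes; the 4% rate is the one DEMASQ reported on arXiv abstracts):

```python
# Back-of-the-envelope arithmetic for AI-detector false positives.
# The 4% rate is DEMASQ's reported figure on arXiv abstracts; the
# class sizes are illustrative assumptions of ours.

def expected_false_accusations(class_size, false_positive_rate):
    """Expected number of honest students wrongly flagged per assignment."""
    return class_size * false_positive_rate

# A 4% false positive rate applied to a class of 100 honest students:
print(expected_false_accusations(100, 0.04))  # → 4.0

# Even a 'very low' 1% rate in a class of 30 gives 0.3 expected wrongful
# flags per assignment, i.e. one roughly every three assignment rounds.
print(expected_false_accusations(30, 0.01))
```

This is why a headline accuracy number on its own tells you very little about whether a detector is safe to use on real student work.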
I'll leave you to read the research if you want to know more, and learn about the battle between AI writers and AI detectors. Harvard's AI Pedagogy Project And outside of research, it's worth taking a look at work from the metaLAB at Harvard called "Creative and critical engagement with AI in education". It's a collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi. It includes an AI starter, an LLM tutorial, lots of resources, and a set of assignments. https://aipedagogy.org/ Microsoft Ignite Book of News There's way too much to fit into the shownotes, so just head straight to the Book of News for all the huge AI announcements from Microsoft's big developer conference. Link: Microsoft Ignite 2023 Book of News ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 7 Episode: 3 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Hi everybody. Welcome to the AI in Education podcast and the news episode. You know it went so well last time, Ray. We thought we'd better keep this going, right? Well, do you remember, Dan? I said there's so much that's happened this week that it'll be difficult to keep up. Well, there's so much that's happened this week, it'll be difficult to keep up, Dan. So, as you know, I've been reading the research papers, and my goodness, there has been another massive batch of research papers coming out. So, here's my rundown. This is like Top of the Pops in the UK, you know, like a top ten. Here's my rundown of the interesting research papers this year. So, interestingly, there's some news out that apparently it is okay to write research papers with generative AI. So, the publishing arm of the American Association for the Advancement of Science, now that is a mouthful. Fortunately, their top journal is called Science, which is not a mouthful.
So, they've said authors can use AI-assisted technologies as components of their research study or as aids in writing or presentation of the manuscript. So, you're allowed to use ChatGPT to help you write a paper, as long as you note the use of it. The other interesting thing, however, is they have banned AI-generated images or other multimedia unless there's explicit permission. So that's interesting, because some of the other publishers are saying they'll allow you to create the charts using AI. And they also have said you cannot use AI as a reviewer to review the manuscript. Their worry is you'll be uploading it into a public AI service and it could breach the confidentiality. Now that was a big one, because Science is a big proper journal. But a bunch of other academic journals, also big proper journals, have come out with the same. The International Committee of Medical Journal Editors came out with a policy in May, and the World Association of Medical Editors and the Council of Science Editors have all come out with policies. So it would appear that although there are many, many schools that won't let you write anything with AI, the official journals will, as long as you're declaring it. And maybe that's a good policy. There's a link in the show notes to that. So whizzing on to the other research: apparently, learning from its own mistakes makes a large language model a better reasoner. This is interesting. This is research from the Microsoft Research Asia team, Peking University and Xi'an Jiaotong University. They developed a technique where they generate some flawed reasoning using LLaMA-2, and then they get GPT-4 to correct it. And what they're finding is that learning from those mistakes makes the large language model a better reasoner at solving difficult problems. That is really interesting.
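For listeners who want the shape of that LeMa pipeline, here's a rough sketch of the data flow as described in the paper, with the model calls stubbed out; the function names and data layout are our own assumptions, not the paper's code, and in the actual work LLaMA-2 produced the flawed attempts while GPT-4 produced the corrections.

```python
# A rough sketch of a LeMa-style ("Learning from Mistakes") pipeline.
# Both model calls below are stand-in stubs, not real APIs: in the
# paper, a student model (e.g. LLaMA-2) generated flawed reasoning and
# GPT-4 identified errors, explained them, and gave corrected paths.

def generate_reasoning(problem):
    """Stub for the student model attempting a math word problem."""
    return f"flawed reasoning for: {problem}"

def correct_reasoning(problem, flawed):
    """Stub for the corrector model: identify the error, explain it,
    and provide a corrected reasoning path."""
    return {
        "error": "mis-stated a step",
        "explanation": "why the step was wrong",
        "corrected": f"corrected reasoning for: {problem}",
    }

def build_lema_dataset(problems):
    """Pair each flawed attempt with its correction; this mistake-plus-
    correction data is then used to fine-tune the original model."""
    dataset = []
    for problem in problems:
        flawed = generate_reasoning(problem)
        fix = correct_reasoning(problem, flawed)
        dataset.append({"problem": problem, "flawed": flawed, **fix})
    return dataset

data = build_lema_dataset(["What is 17 * 24?"])
print(data[0]["corrected"])
```

The key design idea is that the fine-tuning data pairs the mistake with its explanation and correction, rather than only showing the model correct solutions.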
Another bit of research, a really good one: the title is The Role of AI Chatbots in Education: Systematic Literature Review. What that means is somebody has spent their time reading all the other papers about the use of AI chatbots in education. They read 67 papers and they pulled out both the benefits and the concerns. So the benefits for students, you know: helping with homework, helping with study, personalizing the learning experience, and development of new skills. They also found that there's benefit for teachers, so time saving and improved pedagogy. Are you a "ped-a-goggy" or a "ped-a-gojy" person then? Pedagogy. Okay. And they also then pulled out that there are challenges, things like reliability, accuracy and ethical considerations. So none of that should be a surprise to people. The paper is a good summary of all the research. It's also got a fantastic list of these 67 other papers, many of which have come out this year. So, good paper. Really good if you're faced with colleagues who are going, I don't understand what this is all about, I don't understand why this would be good for teachers or students. Give it to them. The next paper was titled More Robots are Coming: Large Multimodal Models can Solve Visually Diverse Images. You're picking all of the research articles with really long titles here. No, Dan, that is just the title. That isn't the abstract or the whole paper. So, Parsons problems. Do you know what Parsons problems are, Dan? Okay. Parsons problems are what they did in computer science, which was basically give you a bunch of code and then jumble it up, and you have to try and work out what's wrong and where it should be, in the right place. Can you imagine that? It's like me returning to code I wrote when I was 16. Got no idea what order it should be in.
So, that's a way that they think of giving students interesting challenges, where it's more of a visual challenge, you know, structuring the code. Unfortunately, they thought that was a way to defeat large language models, and it isn't, because large language model makers, as you know, have worked on developing large visual models. So, they can actually look at code, work out what's wrong, and tell you how to do it. Statistics time: they can do it 96.7% of the time. That's using GPT-4 with vision. So, significant implications for computing education, because it's really good at solving Parsons problems. And it's a multimodal effect, using images. When you were talking through that, I think, okay, you know, you're analyzing text or code, but it's actually using the pictures to organize that. That's fascinating. And it's been moving fast, because halfway through this year it could only solve half the problems; now, towards the end of the year, it's 96.7%. That's hugely significant. Did you ever get 96.7% on any of your exams? No. Okay, next paper. I promise you I'll only read the title: The Impact of Large Language Models on Scientific Discovery: A Preliminary Study. I know this one as well. This one's really exciting. Yeah. So basically what they did was look at how good it is at helping in scientific discovery across a bunch of scientific domains: drug discovery, biology, computational chemistry, materials design, and solving partial differential equations. I don't know what that is; it's like a differential equation, but only a part of it, presumably. Okay. Well, I'll never have to do one in my life, because ChatGPT can do it for me. So, what they found is it's really good at tackling tough problems in those areas. They say that the research underscores the fact that these models can bring different domains of knowledge together, predict things, and help with interdisciplinary research,
where they were talking about materials and compounds that they'd found in a matter of weeks rather than nine months or so. So, you know, obviously in materials science this is going to have a profound impact. So, that's quite interesting. The next one has got an interdisciplinary title; it's an, oh blimey Dan, An Interdisciplinary Outlook on Large Language Models for Scientific Research. I need a translator for the titles. Basically, it talks about how large language models can do scientific research. So just like the last paper, it talks about how things are going to be faster, they're going to be more efficient, but it's looking at the research processes themselves. It also talks about the downsides, things like integrity and ethical standards and how you manage that, and deals with things like hallucinations. But it points out that even in something like engineering, where there really is not that much tolerance for mistakes, it can pass the exams. So it's great for researchers that need to think about how these models can help them in their own research, and help them with communication. I built a GPT to rewrite a scientific paper for the reading age of a 16-year-old. And the reason I did that is that, honestly, I find many of the papers quite inaccessible. So helping out with scientific communication could well be about building a wider audience. Okay. Let's whiz through some others now really fast, as fast as I can read the titles. A paper called With ChatGPT, Do We Have to Rewrite Our Learning Objectives? A Case Study in Cybersecurity. Basically they looked at how it changes the way that we both teach and learn about cybersecurity. The great thing that they found was that ChatGPT working alongside the student helps them to learn more quickly and to understand complex concepts much better. But it then raises some questions: if ChatGPT or AI can do the early study stuff, will students just skip past it?
And so the question was, how do we keep them engaged in the simple to-do things that they can then build upon as they go further through? Really good paper. I think it applies to other areas of learning as well. The next paper: A Step Closer to Comprehensive Answers. Oh sorry, that wasn't the whole title. The rest of the title was Constrained Multi-Stage Question Decomposition with Large Language Models. So the whole paper, boiled down to a sentence, says AI is getting better, and people are finding ways to make it even better at passing exams. There seems to be a lot of that now, doesn't there? There's several you've just quoted there, all talking about the pass rates and the way they're actually getting more accurate. Excellent. Yeah. And also the question about how do we change assessment, and I know we've got an interview coming up with Matt Esterman where we'll talk about some of the assessment stuff. Okay. So other things. Next paper: Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study. Okay, so Dan, I know that you were playing Minecraft when you were a kid. I was a Minesweeper kid. Good news. I have a skill that cannot be replaced by a robot. It seems that AI is great at playing Go and chess and every other game, but apparently it can't play Minesweeper as well as me. I've got a unique, not-to-be-displaced-by-robots skill. Now, there were two different papers. I'm only going to reference one. One's called DEMASQ: Unmasking the ChatGPT Wordsmith. Now, that's quite a Reddit-friendly title for the paper. But basically, they proposed a completely new way of being able to do AI detection. And what do we think about detection? They do not work. So, DEMASQ was demasked. One day, they said this can detect things. The next day, somebody proved that it couldn't. There was another paper that came out that said, "Oh, we've got a great way of detecting things and it detects everything."
I am ever so suspicious of this research whenever it doesn't talk about false positives. A false positive is where it says this was written with ChatGPT and it wasn't. And unless the false positive rate is super, super low, a teacher is going to be accusing a student of cheating when they have not. We're always talking low percentages, but say it's got a false positive rate of 1%, which is very low. Very low. That means if you've got a class of 30, every three assignments you're going to be telling a student that they cheated when they absolutely did not. So look, pretty much we can be sure that they do not work. And then the last thing, this wasn't a paper, but something I thought I'd mention. Harvard have got a really, really good website called aipedagogy.org, about creative and critical engagement with AI in education. There's some really good stuff for the humanities: there are syllabi, there are activities, assignments you can give to students. It's worth watching that as it develops. Thanks for sharing those. What about the tech world, Dan? Because it was Microsoft Ignite last week, and I know that on this podcast you do not officially represent the voice of Microsoft, despite the fact that that's who you work for day and night. But Dan, you'll have been watching Ignite. Tell me what's exciting. AI infused through everything, as we know. And I do think there was quite a nice narrative to this. It started off with the hardware side of it: the partnerships that companies, okay, in this context Microsoft, were doing with Nvidia, but also the first-party chips that were being created. So there's an Azure Maia chip that's now been created, and an Azure Cobalt CPU. So there are several different interesting pushes and architectures, which are all meant to support all these AI workloads in the cloud. So I think there was a lot of coverage in that section. 
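That back-of-the-envelope false-positive arithmetic can be sketched in a few lines. This is just an illustration of the maths in the conversation (the 1% rate and the class of 30 are the figures Ray quotes, not numbers from any specific detector), treating each submission as an independent chance of a false flag:

```python
# Expected false cheating accusations from an AI detector's false positive rate.
# Each submission is treated as an independent chance of a false positive.
def expected_false_accusations(class_size: int, assignments: int, fp_rate: float) -> float:
    return class_size * assignments * fp_rate

# A class of 30 and a "very low" 1% false positive rate:
per_assignment = expected_false_accusations(30, 1, 0.01)  # 0.3 per assignment round
per_three = expected_false_accusations(30, 3, 0.01)       # about 0.9, i.e. roughly
# one wrongly accused student every three assignments, as discussed above.
```

Even a rate that sounds tiny compounds quickly across a whole school year of submissions, which is why the false positive rate, not just the detection rate, is the number to look for in these papers.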
You know, everybody was mentioned: Intel, Microsoft's own in-house stuff, Nvidia. Some of the Nvidia software, which is interesting, is also running in Azure now as well. So I think it's very much bringing lots of the hardware acceleration together, and I thought that was a good opening for Satya. So it's not just new data centers being built around the world, there are new data centers with new computing capacity. Yes, that's right. And even interconnection capabilities, down to the level of the individual fibers: there was a hollow core fiber that was introduced as well, where the light travels through a hollow air-filled core rather than through solid glass. It's always interesting to know the things that are going on in these data centers. So very, very interesting technology on the hardware side. But obviously then spinning to the software side, there were a lot of things which came out. Some of the big notable things for the podcast listeners: Azure AI Studio is now in public preview. That brings together a lot of the pre-built services and models, prompt flow, and the security and responsible AI tools; it brings it all together in one studio to manage. A lot of that's based on the Power Platform, if people have been playing with that in the past, so there are a lot of drag-and-drop interfaces going on to help you automate a lot of this prompt generation, which, for the technically minded people on the podcast who have been playing with bots for quite a lot of time with those kinds of tools, is good to see emerging out of the keynotes. So look out for Azure AI Studio; it's in public preview and definitely worth having a play with. 
There was an extra announcement around the copyright commitment, which might not sound that interesting, but it is: if you're a legal firm or a commercial firm and you use copilots to generate content for you, the copyright commitment has just been expanded to include the Azure OpenAI Service, which means that Microsoft will cover the legal costs if any copyright claims should be brought by third parties around that. I love that, Dan, because I know that it's been there in the Microsoft Copilot, but I love that the announcement is now extended out to the Azure OpenAI Service, and the reason I'm excited about that is because that's what we build on in my world. We're building on top of the Azure OpenAI services, so being able to pass on that copyright protection is really important for organizations. Hey Dan, before they mentioned the CCC, the Copilot Copyright Commitment, they also mentioned the Azure AI Content Safety thing, which is what was used in South Australia. I remember reading the case study about that, which was about helping to protect students. Yeah, that's right, so that's a good call-out. There are so many things here. Azure AI Content Safety is available inside Azure AI Studio, and that allows you to evaluate models in one platform, rather than having to go out and check it elsewhere. There's also the preview of the features which identify and prevent attempted jailbreaks on your models. So exactly, for the South Australia case study they were using that quite a lot, but now it's prime time: it's available to people who are developing those models, which is great. Lots of announcements around DALL-E 3 being available in Azure OpenAI, which is the image generation tool. There are lots of different models now: GPT-4 Turbo in preview, GPT-3.5 Turbo. So there's a lot of stuff which is now coming up in GA as well. 
So there's lots on the model front as well, including GPT-4 Turbo with Vision. Yeah, I like that Turbo thing because that seemed to add more capabilities, a bit like the OpenAI announcements. They mentioned the Turbo stuff, but just like the OpenAI announcement it was also better and faster, at price parity with the OpenAI costs that were announced at their Dev Day. There was also a lot around developer productivity. So the stuff which was announced in GitHub: GitHub Copilot Chat, and then GitHub Copilot Enterprise, will be available from February next year. So for devs there were a lot of things; have a look at the Book of News, we've put that in the show notes. One of the announcements I was more excited about was that Microsoft Fabric is now generally available, and I know that doesn't relate technically to generative AI, but it's really good for a lot of my customers that are using data warehousing as one of their first steps into AI analytics, and then all of the generative AI elements on top of that: Copilot in Fabric, Copilot in Power BI. Lots of announcements there, including things around governance and the expansion of Purview, so that was really exciting. But then we went into the really exciting bits around the productivity platform. So then we talked about Microsoft Copilot. One of the first things to think about is that it has been a bit complicated with Bing Chat Enterprise and the Bing Chat tools. They're now going to be renamed Microsoft Copilot, essentially. So that's the Copilot that you'll get, which will be inside any of your browsers, Safari or whatever, and also inside the sidebar in Edge. So Copilot is going to be the new name for Bing Chat Enterprise. They're trying to make this a bit easier for people to understand. 
And then the good thing as well is that they've now announced Copilot Studio, which brings together all of these generative AI plugins and custom GPTs. I'm sure that's something that you're going to be working with quite a bit. Right, so that's going to enable you to customize your copilot, and build copilots within your Microsoft 365 organization. If you're an enterprise customer, you can create your own copilots: build them, test them, publish them, and customize your own GPTs in your own organization. So that'll be really exciting. I'm also excited by the fact that I can't always remember all the names. I remember there being Viva, and Yammer, which I love, has been renamed into something else, but now I only need to remember the product name Copilot, because there's Microsoft 365 Copilot, Windows Copilot, Bing Chat's been renamed Copilot, Power Apps Copilot. All I need to do is think of a product name and add Copilot on. That's exactly right. Yeah, there are a lot of other new and interesting copilots that were announced as well, around new Teams meeting experiences with Copilot, with collaborative notes. I've been using quite a lot of these internally recently, and lots of the intelligent recap stuff is really good as well. So there are a lot of copilot announcements you can get lost in the weeds with, around PowerPoint, Outlook and all of those tools. But really, really good integrations, and I suppose we're going to see a lot more of that. The other interesting element is that Windows AI Studio is available in preview, coming soon as well. So that's the other thing I'm sure you'll be working on, Ray, which is being able to develop copilots and Windows elements for your copilot ecosystem. You'll be able to deploy SLMs, small language models, inside the Windows ecosystem, to be able to do things offline as well. So there's going to be a big model catalog in there, which will be quite interesting. 
So you've got the copilot stuff, and you've got the Windows AI Studio tools as well for devs. That'll be quite interesting. Great, so everything's in the cloud and everything's got a copilot. Exactly. There's lots of copilot stuff included for security as well, and I've been playing with Security Copilot. That's essentially your generative AI for security: if an incident happens in your environment, and there might be a ransomware attack called, I don't know, Northwind 568 or whatever it might be (that's probably something that exists, isn't it?), it'll then tell you where the origin of that ransomware might be from and give you information about what it actually does. So it's like a guide for security teams, and that'll be really, really interesting when it comes into GA, because it does get quite complex in the security area. There was a lot around the tools in Dynamics and things like that: Copilot for Service, Copilot for Sales, for the more enterprise customers who might be using Dynamics. There was a whole heap of copilot automation around the Power Platform, which is the citizen development platform that Microsoft releases. In Power Automate there's a whole heap of things about generating scripts and generating documentation for governance. There's a whole raft of features now available around your supporting tools inside app development, but also in the way you can use Copilot to create things for you. So there's a lot of stuff in the Power Platform which is quite exciting. There were so many announcements; we've put the Book of News in the show notes here, but it's very, very exciting, right from the hardware up to citizen development. So, you know, I'm looking forward to seeing these coming. So if I'm in the IT team, I should go and read the Book of News. If I'm outside of the IT team, I should just add the word copilot onto anything I'm talking about. 
Okay. So, Dan, we've just done the whole two weeks of news about the research and the Ignite stuff and all the developments there. We've been talking for about 20 minutes, so we just need to go and check the internet, because there has been one other piece of news going on, which is that Sam Altman may or may not be CEO, or chair, or not CEO of OpenAI, I mean, in the last 20 minutes. Fascinating. The thing that really intrigued me, and it's made me think, because obviously there's been a lot happening in this space and I'd like your thoughts on it, is the board of OpenAI, I suppose the actual structure of it. The board of OpenAI is a not-for-profit board. It's a 501(c), I think they call it in the US, which is your kind of non-profit entity, and it feels like there's some tension going on with that not-for-profit entity, but nobody really knows; there are so many things going on online about this. The interesting thing for me was that there are six people on the board, and even just doing some research and trying to understand who those six people were and how that all works was quite interesting. What does that mean? Well, it's interesting because I've been doing the directors' courses recently, and it's all about the strategic way of thinking as a director, which has tended to stay detached from the execution, so that you're setting strategy. So I find it quite interesting that the board have done something that gets really, really close to execution. Normally they're working on much longer timescales, but perhaps they're not working on longer timescales, because just before we restarted the recording I saw the news story that they might be talking to Sam about coming back as CEO. I had thought of it as a Steve Jobs kind of thing, when he left Apple and then came back, what is it, a decade later, and saved the business. It's kind of like that. I hadn't expected it to be over the course of one weekend, though. 
So we've got to try and get this out on Monday, just so that we're vaguely up to date. I don't think I'd realised that OpenAI is a not-for-profit that is focused on how to achieve AGI. So achieving that general intelligence is what they're going for as the non-profit. I don't think I'd really understood that everything they're doing at the moment is a step towards reaching that artificial general intelligence position. That's what it's for, Dan. Oh my goodness. We'll have to go back and edit it so we don't look stupid. But, you know, that is what OpenAI is all about: how do they reach that general intelligence level? I think this is a little road bump on the way, for everybody, right? Because it doesn't matter how big you are as a company; when things are moving so quickly, whether you're a school or a university or a commercial customer or a large not-for-profit, you have to be very careful about the direction, like you're saying. I suppose there are things in place, like you're saying from the courses you've done, where you do stay strategically at arm's length so you can make long-term decisions, and there's a lot going on very, very quickly with OpenAI specifically. I can't think of any company that has propelled itself so quickly and had such an impact. So these things do happen, and they do send ripple effects down through the communities, but it does give us a thought-provoking pause to think, okay, where are we going with this technology? Would you ever have believed that news like the CEO of an AI company being sacked would be the number three or number four story on BBC News or the Guardian? For that to make it into mainstream news is really fascinating, in such a short period of time. Dan, there has been so much news. We have been covering two weeks' worth of news; that's why it's taken us so long. 
But my goodness, we'd better stop, because this is supposed to be a quick snap of the news. But the key for everybody would be: find the links, find the papers, find the news in the show notes. We definitely won't put anything in about OpenAI; go and open your favourite news website to find out what's happening on that, because you'll be more up to date than us.
Nov 10, 2023 • 16min

Rapid Rundown : A summary of the week of AI in education and research

This week's rapid rundown of AI in education includes topics such as false AI-generated allegations, UK DfE guidance on generative AI, Claude's undetectable AI writing, the contrast between old and new worlds of AI, Open AI's exciting announcements, specialization and research bots, GPT4 updates, and gender bias in AI education.
Nov 1, 2023 • 29min

Regeneration: Human Centred Educational AI

After 72 episodes, and six series, we have some exciting news. The AI in Education podcast is returning to its roots, with the original co-hosts Dan Bowen and Ray Fleming. Dan and Ray started this podcast over 4 years ago, and during that time Dan's always been here, rotating through co-hosts Ray, Beth and Lee, and now we're back to the original dynamic duo and a reset of the conversation. Without doubt, 2023 has been the year that AI hit the mainstream, so it's time to expand our thinking right out. Also, New Series Alert! We're starting Series 7 - In this episode of the AI podcast, Dan and Ray discuss the rapid advancements in AI and the impact on various industries. They explore the concept of generative AI and its implications. The conversation shifts to the challenges and opportunities of implementing AI in business and education settings. The hosts highlight the importance of a human-centered approach to AI and the need for a mindset shift in organizations. They also touch on topics such as bias in AI, the role of AI in education, and the potential benefits and risks of AI technology. Throughout the discussion, they emphasize the need for continuous learning, collaboration, and understanding of AI across different industries. ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 7 Episode: 1 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Welcome to the AI podcast. I'm Dan, and look who's beside me. Hey Dan, it's Ray. You'll remember me from when we first set up the podcast together in 2019. I sure do. This is an exciting time. This is a podcast reboot, right? This is like getting the band back together, Dan. It is. 
So, what we're thinking is, as I've alluded to in a couple of episodes previously, because AI is moving so quickly, and the technology space is really driving a lot of change and transformation, specifically around generative AI now, which is one aspect of the entire debate around AI, we can really start to focus in on looking at some of these new trends, because it's moving so quickly, right? Oh gosh, it is moving so fast. I think about generative AI from the moment I wake up to the moment I go to bed, because the business I'm involved in is all about generative AI. You go to bed sometimes? But the really fascinating thing is, despite the fact I think about it 24 hours a day, it's still moving faster than I can cope with. And I'm only trying to stay ahead of that one piece of technology, because things are moving so rapidly. And it's not just the technology that's moving rapidly, it's the ways that we can use it. Ethan Mollick described it really well. He talked about it being like the battlements of a castle: some areas are inside the battlements and some areas are outside the battlements of what we can use it for. And we still don't know where that jagged edge sits, because every day there's a new use case scenario that just genuinely makes me smile about what it can do. And then we also find some things that it's really lousy at, that we thought it could do. And so I think this whole new world of human-centred AI rather than technology-centred AI, which is how I think about generative AI, about the human interface, the way that we think and do things, is fundamentally different from what we started talking about four years ago, which was machine learning and the binary ones-and-zeros version of AI. Yeah. And there has been a blur in that area, hasn't there? Because the science of AI has been around for quite some time, and we've talked about the history of it a lot, with Lee and with yourself. 
We've explored where it came from and the kind of journey around AI itself. But I think why we are doing this podcast, this new series that we're going to move forward with, is to also take some opposing views, I reckon, because the conversations you're having around the business side of AI, the outcomes conversations, and the conversations I am having around the technology, the implementation of that, the governance and the security element, are often set against each other, right? And there's a friction in businesses and schools and universities, where the outcomes of the students, the outcomes of the teachers, the real business processes that can be changed, are kind of at loggerheads with the speed of that change and the way that technology is implemented. Yeah, it's interesting, because it's AI, and so a technological discussion, I think, is the starting point, Ray, yes? Well, no, Dan, I'm not sure it is, because something I wrote recently was very much around the idea that the decisions about this are going to be made in the boardroom, not in the server room. It's actually about a fundamental process change that's possible. If you go and read the white papers from the researchers and the management consultants and all the government organisations, they're talking about 40% productivity improvements. And so the potential is to change the way that we do things and the way that we run organisations, not how do we make a technological change. And that's why I find it so relatable, because it's about business processes. It's about the things that change as a result of it, not about how do we make a small change with data. Yeah, I do feel passionately about that as well. But in the conversations I'm having, you have to also tread carefully with this new technology, because you don't want to inadvertently expose information that you shouldn't have given general exposure to, if the security and the governance elements are not in place. 
And we've got this tension between the speed of getting something out, and actually the tension of waiting to get something out and making sure it's 100% bulletproof, right? Yeah. I think it's about elevation. It's elevation of the role of, for example, the IT team or the CIO in an organisation, up to the boardroom, because that's still not the case in every organisation. Yeah. But it's also about elevation of the conversation. So one of the things I've been doing recently, Dan, is going through the Australian Institute of Company Directors' foundations course for directors. And one of the things that keeps coming back, that we keep being hit around the head with by the facilitators of the course, is thinking with a director's mindset, not an executive's mindset. The director's mindset is about strategy and direction; it isn't about implementation. And so if we're thinking about elevating the conversation and the role of the CIO, that is also about strategy and direction, not just about the day-to-day. And I think most CIOs would say, "Yeah, but I do think about that long-term strategy and direction." There's still a gap, I think, for many people, between their responsibilities in a technology world and their responsibilities in a business enablement world. Yeah. And that's also coming around, and is quite evident, when you look at the way the digital divide is opening. And I'm seeing this more and more. Where are we now? We're in November. So even back in January, some of the school systems were banning technologies like ChatGPT, some were more embracing of it, and some were being more thoughtful. So where do you see that sitting at the minute, between that "ban it" kind of mode and this digital divide? You know, banning only works on the bits you can control. I was talking to a major university; in May, 20,000 of the users on campus had used ChatGPT on campus. 20,000. Yes. So imagine if you banned that. 
How many would be using it at home anyway, or on their phone when they're on campus? So I think putting the lid on it is really difficult to do. If you look back: do you remember when we banned Google search in schools, because people could just look up the answer? And then we banned Wikipedia, and then we banned YouTube. Yeah. The three things that are probably the biggest learning platforms in the world were initially banned. And it didn't stop people using them; it just meant that people were using them in different ways. So if you think about it, if you stopped the use of ChatGPT in the classroom or on the campus, it just means students and teachers will go and use it at home, with no controls and no guard rails. And you then open up the possibility that some students have access to it when others don't. We were talking just before this episode was recorded, just chatting about this, and you mentioned the kind of autonomous vehicle problem, and I think that's evident in this debate as well, isn't it? Because when we're thinking about a digital divide, and people banning and not banning these things, I think there's a danger, and I think in episode one, almost, we talked about the human parity of technological systems, the technology already surpassing human parity, so there's almost a need for IT leaders to think, well, we need it to be 100%. So do you want to explain that autonomous vehicle problem? Because I think that's really evident. Yeah, I think if we go back through the history of the way that we've done things in technology, we've tended to use a gold standard, which is: is this perfect? You know, go and look at this data, interpret it all, is it perfect? And the easiest way to understand that is that most AI projects historically have probably burnt 80% of their time on cleaning up the data in order to be able to use it. Yeah. 
The self-driving car problem that I talk about is about that difference between whether perfect is what we're striving for, or better than humans is what we're striving for. And in the self-driving case the difference there is massive. The data says that self-driving cars are safer than human-driven cars. The data says self-driving cars are better than human-driven cars, but 85% of people in North America wouldn't trust a self-driving car. Now, I think part of the problem is that most drivers are above average, or at least they'll tell you that they're above average. But the reality is a self-driving car is safer. People hesitate around that because it's like, yeah, but it's not 100% safe. But what, a million people a year die on the roads. So the current human standard isn't perfect either. And in technology projects, we've often not measured against the current human standard; we've measured against some ideal. And that's evident when people roll out new versions of software, isn't it? They'll sometimes wait, with a Mac operating system or a Windows operating system, you know, six months after it comes out, sometimes years after the first version comes out, so, in quotes, you can iron out any of the teething troubles with the software. So that's something that IT pros are sort of used to, I think. Yeah. And I think that's where a mindset change is going to come in. Is 100% right in two years' time, once we've cleaned all the data, a good outcome? Is 95% right instantly a good outcome? Think about, for example, feedback on essays. We know that generative AI, or AI generally, can mark essays more consistently than humans, but we still probably don't trust it. And we probably want to check everything that it's going to say to a student, to check that it's 100% accurate. Humans aren't accurate either. I mean, I read some research recently. 
If you are submitting a homework assignment or an exam assessment, you want to get it marked first in the pile rather than 10th in the pile or 30th in the pile, because the earlier in the pile it gets marked, the more generous the humans are in the marks they give you. Yeah. So, you know, humans aren't perfect. So can we get to the mindset which says actually good enough is good enough, and let's move forward on the process? So what if you could give good-enough feedback to your students the minute they finish the essay, rather than in a week's time, when you've had time to go through and read and review it all and give them some feedback? That's an interesting question that I think is going to come back again and again. And that personalisation element: we've always talked about that with NAPLAN results arriving six months after the NAPLAN exams happen, and what is the validity of that? The longer you leave it, the less valid that feedback is. The business models have changed with this as well, haven't they? You're talking about feedback there, but look at companies that have been doing plagiarism checking, and schools thinking about assessment. I really think there's going to be a breakthrough in that area at some stage, because things can't keep moving as they are. The actual underlying processes in some really key aspects of universities and schools are going to have to change, because there are no two ways about it: AI is already impacting those areas, especially around assessment. I mean, when you say plagiarism checkers, I still see people saying that they're using AI detectors, and that takes us back to the accessibility thing. AI detectors do not work. Full stop. There are papers, there are lots of other things you can go and read, but go and read the things coming from the people that are experienced in this stuff, people like Ethan Mollick on Twitter or LinkedIn. It's very clear, the research is there: AI detectors don't work. 
And if you think they do, what you're actually doing is disadvantaging certain groups of students, because what a detector will do is pick up people for whom English is a second language and say that their work has been written by AI, when it hasn't. It's been written by people, but the writing style they use tends to set off an AI detector. And there is an underlying sentiment around fairness and reliability and trust, which is a secondary conversation to it, because obviously there is an element, in certain aspects of utilising AI, where you might want to put invisible watermarking on images and things like that. But I think that reliability, security and trust element of the argument, which is very important and which tech companies are working on at the minute, is very separate to the assessment, plagiarism and AI-checking one, and they get lumped into the same conversation sometimes. Yes. And the other thing that comes in is the bias piece. Well, it displays some bias, and in fact I saw an example last night where somebody had asked it to draw an image of a great teacher, and all four images were male. Now, the interesting thing is you can spot those biases and you can fix them in the system, and I've seen that ChatGPT's bias, for example, has been changing all the time, in order to start to actively remove the biases. But think about how we remove human biases, because there are a lot of human biases. For example, if you're an education system with 100,000 teachers, and I told you what I said earlier, which is that papers marked first get a better mark than papers marked 10th or 20th, and you wanted 100,000 teachers to change their habits to remove that bias, imagine how long that would take. I mean, first of all, you've got to convince them it's true, then you've got to convince them to change it, and then you've got to keep reinforcing it. 
Whereas if you've got that kind of bias in a computer system, an AI system, you build a rule and suddenly it fixes it. I asked ChatGPT to create for me a list of 10 doctors' names in February or March this year, and all 10 names were male. And if you go into it now, you get a broad mix. Now, the reason it gave me all male names was because the top 10 doctors' names out of the US surveys are all male, but now it's been programmed to remove that bias, and it's doing a better job of it. So it's actually overriding human bias. It might also be overriding human reality, which is that many doctors are male, and that's what shows up in the data. Yeah. So yes, there are these problems, but I believe that they're probably more manageable. And let's go right back to the beginning: this is an emerging technology. It's amazing how fast these things are being dealt with and managed. Yeah. And the impact that it's having, I think, is evident. Even though you call it an emerging technology there, I'm still staggered, and I don't want to keep going around in circles with my narratives around this, but I'm staggered at the number of applications that I'm seeing teachers come out with. This week alone I was working with a school diocese, actually, who were looking at creating texts for students to read which are one reading level above their current reading level. This diocese is working on literacy really heavily and going back to basics, which is fantastic. So that is a perfect application for generative AI, and they can do that. So you are talking about personalisation happening really, really quickly, and if you can do that and solve those business problems really effectively, and, like you're saying, with 95% accuracy, then let's do it, because we actually have an impact in the classroom today rather than in a year's time. Yeah, that's absolutely right. 
And sure, we need to be aware of all of these other issues, but fundamentally we can improve some of the processes that we're doing. We can improve the support we provide for students. We can improve the way that we engage with students or with parents. So many of the things that involve interaction can be improved, and we need to jump onto the use cases and the benefits of those use cases and test those things out, rather than the old-world approach of "well, we can do that once we've fixed all these other things". We can do that thing about predicting which students are going to drop out once we've cleaned all the data in five years' time. So there is a question of: is good enough now better than perfect in six or twelve months' time? I always argue about some of this kind of stuff. When I was a governor of a school in the UK, and when I was doing Ofsted school inspection work, the school budget, however controversial this might be, was for that year. So when people were saving up the school budget over the long term for, say, a minibus, there was always this tension in a governors' meeting of saying, well, that $150,000 we're storing up for the minibus will come to fruition in three years' time when we've got enough money to buy it. However, it could be used as a reading recovery program for a Year 3 student now. So there is a genuine need to get impact now rather than thinking about these things too deeply, I suppose. And there's an interesting element, which we were talking about previously, around China. Even though they've got their own interests and social norms around technology, they've got a different take on the way they utilize this, right? Yeah. There are some new regulations coming out in China. So think about the social norms and what is and isn't acceptable. 
They're talking about consumer-facing chatbots and things like that. One of the things providers have to do is test scenarios. I think they've mandated a minimum number of tests: you must ask it 4,000 inappropriate questions, you must ask it 4,000 appropriate questions, and then you have to manage the responses. But what's interesting is they're not saying it shouldn't answer any inappropriate questions. What they're saying is it should refuse to answer 95% of inappropriate questions, but equally it should answer 95% of appropriate questions. So what they're trying to say is: we recognize it's not going to be perfect, but we don't want to make it so "perfect" that it won't do the job we want it to do. And that's interesting, because think about that in the context of education. Let's say you build a chatbot and put it on your school's website; somebody will go and get it to have a bonkers conversation that is inappropriate. What they're saying is: we recognize there is a risk of that happening, and we're going to mitigate most of those scenarios, but we're not expecting everything to be 100% accurate, because if we go for that, we're going to lose all the upside benefit. And the upside benefit in that scenario of a chatbot on a school website is perhaps you're making information more accessible to parents or students, or they can get help on their assignment in the middle of the night, quickly, rather than waiting for somebody to be at the other end to support them with tutoring or whatever it might be. Give them an LLM-generated worked example of the maths question they're stuck on, immediately, which could solve 70% of all the queries that come through, maybe more. And that's why, now we've got the old band back together, Dan, it's that reset point, because we're going to have a conversation going forward about how we help staff to improve the processes going on in education. 
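The two-sided threshold test described above is simple enough to sketch. This is a toy illustration only: the refusal-detection heuristic and function names are invented, and a real evaluation would need a much more robust refusal classifier than string matching.

```python
# Toy sketch of the regime described in the episode: run a chatbot over
# batches of inappropriate and appropriate questions, then check whether
# its behaviour clears the two thresholds (refuse enough bad questions
# AND answer enough good ones). The refusal markers are invented.

REFUSAL_MARKERS = ("i can't help with that", "i won't answer")

def is_refusal(response: str) -> bool:
    """Crude stand-in for detecting a refusal in a model's response."""
    return response.lower().startswith(REFUSAL_MARKERS)

def passes_thresholds(inappropriate_responses, appropriate_responses,
                      refuse_target=0.95, answer_target=0.95):
    """True only if the bot refuses >= 95% of inappropriate questions
    while still answering >= 95% of appropriate ones."""
    refused = sum(is_refusal(r) for r in inappropriate_responses)
    answered = sum(not is_refusal(r) for r in appropriate_responses)
    refuse_rate = refused / len(inappropriate_responses)
    answer_rate = answered / len(appropriate_responses)
    return refuse_rate >= refuse_target and answer_rate >= answer_target
```

The point of the two-sided check is exactly the one made in the episode: a bot that refuses everything trivially passes a safety-only test but fails this one.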
What can it do? What can't it do? But it's going to be very different from where we first started off, when it was a lot of technology conversation. Yes, this is a human-centered conversation about how technology helps, rather than a technology-centric conversation. And I think that's the fundamental difference, because I spend almost all my time now not talking to IT people, but talking to leaders of organizations about the way the organization can be transformed, or the processes can be transformed, not about the technology piece. We can have a human-centric conversation, because when you're talking about generative AI you can show things that everybody can relate to: you show a real conversation, you show it interpreting real information. It's not a bits-and-bytes-and-widgets conversation; it's about genuinely transforming a process. But I think as well, and this is why this is going to go in a really different direction, because generative AI is moving things forward, we do need to have a goal in mind with this podcast as we walk through: to make sure that people listening understand where the different types of AI fit, because there is confusion. There is a divide happening at the minute, and we want to bring everybody along on this journey, to make sure it's equitable for all. So there's the generative style of AI, you've got the data analytics AI, you've got cognitive services, you've got all these AIs that can read documents, and then it's the use cases that are the key: where does this fit in? Is that in the generative space, or is it actually a data analytics problem, which is where we kind of focused in season one? I suppose it was that data and AI element, the cognitive services, the machine learning. But now it's really ramped up and moved into a completely different service of its own, I suppose. 
Well, the other thing we have to add into that is the blend of consumer services and enterprise services, because actually many of the scenarios now you can test with consumer services. So imagine a scenario around the reading levels, for example, that you were talking about. You can test that the scenario works with ChatGPT or one of the other models and know that your scenario is going to work, but then you go and build it in enterprise services. You'll go and build it in Azure OpenAI, but you can test it with a consumer-level service. And that opens up many more opportunities. It also opens up a whole load of conversations about what happens when students or teachers are accessing consumer services. Is that okay? In which scenarios is it okay and not okay? And where do you provide the guidance? And the thing I'm thinking about, that we're going to get to over the next however many episodes, is how we get a blend of different voices. I don't mean literally your voice, but three different voices. One would be the practitioners. They're not AI experts, they're not technology experts, but they can see a process, an opportunity where AI could help. The second will be the generative AI experts, and by that I mean the people who understand the potential to transform something, but who see it from the perspective of what this human-centered AI allows. And the third voice will be the technology voice: the CIOs and the IT teams, who are going to have a perspective built out of their legacy and history. I often used to say the main role of a CIO is to keep the head teacher off the front page of the newspaper. Yeah. It's a facile example, but there's a legacy that comes from that about what you do about risk, what you do about accuracy and all that kind of stuff. 
We need to blend those voices. Yeah, absolutely. And I was also reading a blog post recently by one of our interviewees, Nick Woods, and the health team from Microsoft. Health is another example. We always look at health and say you could take a teacher out of a school today and put them in a school 100 years ago and they could do the same thing: they'd know the board at the front, the sage-on-the-stage kind of mentality. And I know that's being facetious, and teachers are much more technologically advanced these days. But take a doctor into a surgery from even ten years ago and they wouldn't understand the robots and the use of that technology. I think the health sector is always a good litmus test for me, because it's really, really innovative and actually has a massive impact today. They don't just think about what's coming in the next ten years; well, they do, but you can see some of this AI technology already impacting patient care. So I think, from my perspective, it'll be good to bring in people from other industries and see that speed, to make sure schools and education innovate as quickly. Because the post, I think it might have been Simon Kos or Nick Woods, one of the health executives, was a really well-thought-out post saying we need to grasp this opportunity now and move really quickly in the health sector with AI. Yeah. And health is interesting because it's a highly regulated industry, more regulated than education, but somehow it's able to innovate at system level a little bit faster than other regulated industries. Banking is highly regulated, but they're using it in banking. So yes, I think you're right. There are regulated industries we can get some examples from. Then add in the commercial world. 
I'm speaking in a few weeks' time at an event with somebody from Penfolds about how they're using generative AI. Oh, we have to get that into the conversation then. Brad will be happy. But there are a lot of scenarios being used in other industries, and yeah, let's get those examples in as well. One of the benefits I've got now is I'm working alongside people in those other industries. Let's hear what's happening in retail. Let's hear how it's going to disrupt global logistics. Let's hear how it's going to disrupt the wine industry, because out of those stories I think we will find interesting parallels that might excite some ideas in education. Brilliant. And that goes right back to the start here, to that human-centered approach, which is different to any other kind of era we've been in before. This is really driven by end users, isn't it? So get those end users on the podcast and get that conversation moving. Yeah. What's exciting to me, Dan, is that I've always had difficulty engaging my children in what I do, because they didn't have that much interest in technology, or at least they pretended not to. And now there are some really fascinating conversations going on with my kids, because of the potential of changing things, not in a technology way. I think we need to get my daughter on as well, because I was in the car yesterday, actually, and she's asking her Snapchat AI for quizzes on princesses all the time. "Give me ten questions on Disney princesses", because she went to Disneyland. It's interesting, the conversations she's having with it, and it's interesting just listening to the way she interacts with AI. So I think getting those different perspectives would be excellent. So we should interview some students. The other thing we should do, Dan, yes, is we should interview AI. Oh wow, that's a great idea. 
We should do an interview with ChatGPT and make that a whole episode. Do you remember when we had a bot join one of our podcasts, in series one I think it was? And it was a very robotic voice. Let's have a crack at having a podcast with AI as well. Yeah, bring on this season. I can't wait. Thanks again for rejoining the podcast. And if Lee's out there listening as well, Lee's got a new gig supporting a legal organization in Asia, supporting the AI conversations there. So, if you're out there listening, Lee, thanks for holding the fort; we'll catch you in another episode and see what's happening in the world of AI, literally. Brilliant. I am very excited to get this going and to find some interesting people to talk to. Let's do it.
Sep 19, 2023 • 35min

Dr Nick Jackson - Student agency: Into my 'AI'rms

Dr Nick Jackson, expert educator, and leader of student agency, talks about AI's impact on assessment, the importance of AI in education, the intersection of AI and music, and the power of technology in young students' hands.
Aug 11, 2023 • 45min

AI - The fuel that drives insight in K12 with Travis Smith

In this episode Dan talks to Travis Smith about many aspects of generative AI and data in education. Is AI the fuel that drives insight? Can we really personalise education? We also look at examples of how AI is currently being used in education. ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 6 Episode: 2 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Welcome to the AI in Education podcast. This is where we talk to great minds of the day about AI and its impact, in education and other contexts. And today we are super lucky, because we've got the Australian edu legend that is Travis Smith. He is an ex-school executive, the K12 industry leader at Microsoft Australia, one of the best edu humans in the world, and an above-average golfer, I think you'd say. Trav, do you reckon? Is this about golf? Because if it is, I'm very excited. It can be about whatever you want. Great. How are you doing, mate? Alright? I'm very good, thank you for having me, Dan. I look forward to a chat about AI in edu. In your role as the K12 lead, you've been doing a lot on this recently, haven't you? Can you explain a little more about what you've been doing? Yeah, we have. We've been doing a lot on it since the start of the year, really. But there's a lot of hype about it, a lot of talk about it, and people trying to work out what's real, what's not, what they need to harness, what they don't, what they should ignore, and then, on top of that, all the fear and the worry about AI more generally. And it's interesting, because it's a topic that has never really been as mainstream as it is now. 
You know, everyone you talk to in the street has an idea about what generative AI is and has an opinion on it, and that's not been the case before. We know that AI has been a part of everything we've done for a long time, whether it's education or otherwise, but this seems to have really captured the imagination of everyone. So we're having lots of good conversations and good thinking about it. How does it actually land with executives? Because I see a lot of stuff on LinkedIn with people going, "hey, I've just found this new AI tool", "I've used this in this particular context". But, like you said, there's the hype; some people are wondering whether this is hype or whether it's real, and then also how they actually implement it in a school or a school system. I know you're in conversations around that. What are your thoughts? The first thing is that it relates really clearly to their data strategy, because we've been talking about data in schools deeply for the best part of ten years: about having a data platform, making sure your ducks are in a row, and being able to get information out of systems. Data is the fuel that's going to power AI, basically. The flip side of that, though, is that AI is going to fuel greater insights from their data than they've ever had before. So the ability, for example, and let's cast our minds a couple of years forward, for a teacher to write a natural-language statement into some kind of data tool and have a dashboard built for them about the kids sitting in front of them is pretty powerful. So the data stuff is not going to go away, and that's definitely one of the conversations we've been having. 
The other main one is that you can actually use these large language models, like the GPT models, in your own environment. A lot of the fears around this stuff are actually because the models are out in the wild, out on the internet, and people are worried about that, as they probably should be, because teachers might be accidentally putting personally identifiable information out there, or there are certain use cases in education that are a little more worrying than others. And so we talk to the executives and help them understand that there is a way for them to bring those models inside their own environment, which means they can secure it and put some privacy and security around it, just like they do with everything in their network. No one gets completely unfiltered access to everything on the internet in education; there are always some things in place, and this is no different. There are definitely a few of those conversations happening around the country, for sure. Do you think there is a bit of hype around this? Like, where are we? 
You know, you get a hype cycle where everybody's all frenzied up with it all. And I'm really trying not to be jaded about this, but, like I mentioned in a previous podcast about COVID and how I thought that was going to help move things along, schools ticked back into the norm. I'm just wondering: do you think personalization, and changing assessment, is really going to make a difference this time? I think we've got the biggest opportunity we've ever had to do some real good in education. That data example I described before is one of them, where you give access to the data to the people who want to ask the questions, without requiring any technical knowledge of them. That could be profoundly impactful. I think there's a huge possibility for personalization. I think that teachers now, and kids now, have the ability, provided it's done safely, securely and everything else, to have information personalized for them, or to personalize information for their kids, in a way that's never been possible before. I was having a chat with a textbook manufacturer who makes e-textbooks, and I was talking to them about the idea that the technology now exists that means every one of your 25 kids doesn't have to see the same page. They might see the same sentiment, but if I'm reading three years lower than the other kid in the class, why are we reading the same stuff? Why aren't I getting a simplified version of it? The simple answer to that in the past was: because it doesn't scale. You cannot do it at scale. But the technology can now do that. Imagine, you know, I went into my Bowen and Company e-textbook, I've just invented a company for you. Yay. And I put in my Lexile reading level, which my teachers told me: you're a level 800, Trev. And then the whole textbook language is simplified. 
You know, that's the possibility of this. So, whilst I think there's some initial hype about it, I don't know whether we are, but we should be thinking really big about the way this could potentially change education forever. And I know there's work that Sal Khan and others are doing on creating personal AI tutors for kids. The UAE Ministry of Education are doing a similar sort of thing. And I think, provided it's done safely and securely, and it's designed by educators for the purpose of educating, it's a great idea, but you've just got to make sure that all the mechanisms for controlling it are in place. So I think that, whilst there is some initial hype, the way we should be thinking about this now in education, especially at the system level, which is where I spend a lot of my conversations with departments of education and Catholic dioceses, is: what might we create that we can now do at scale that we could never do before? And I think there are some huge possibilities in that space. Oh, there are, yeah, that's very true. And there's an ecosystem of people that can help with this, isn't there? I think schools are going to get an abundance of people doing whatever.AI, and that's a concern, but it's also a really good opportunity where people are looking at their products and going, how can I embed some of that AI stuff, like Nurture AI? I saw your webinars this week with Dylan Wiliam and the Nurture team, where they put generative AI into the feedback process in their work, which is brilliant. It's about thinking outside the box, really, and thinking, okay, how can I add that to my tool to make it more effective? Correct. 
And I think there are ways that school systems could start thinking about how to add this to their tools to make them more effective. So, think about internal platforms like student information systems, learning management systems, whatever. One way they're going to innovate is that the manufacturer, the designer of that proprietary product, is going to build AI in at the front end. But there's nothing to stop schools building artificial intelligence in their local environment which leverages all of that stuff, changing the way their users interact with it through a Power App or whatever other way, and starting to think about how to apply artificial intelligence around the systems they've got in their environment. I think one of the biggest opportunities with AI, generally speaking, and people have a kind of limited view, which is completely understandable if you're not thinking about it every day, is this. What this generative AI stuff means is that a human can write a sentence and the AI can respond in a humanlike way, bringing to it all of this knowledge, or you can ask it to create something in graphical form and it can understand what that is and create a graphic. But I think the real power of this is going to be less about that stuff and more about the fact that, fundamentally, this means we forever change the way humans interact with computers, because now we should assume the computer understands what we're talking about. I don't have to press 16 buttons in a certain sequence to make something happen. I just tell it what I want it to do, and it knows how to do it. And that's like the Copilot stuff that Microsoft are trialling at the moment in Office, where you just tell it what you want it to create and let it create it. 
And I think, if you extrapolate that idea out, this fundamentally is the era where humans interact with machines in a very different way than before. And so if you put a large language model, like a GPT model, inside your cloud environment, inside Azure, let's say, in your tenant, when you go into the playground interface, where you can actually start to shape the model's behaviour if you like, and that sounds more technical than it is, there's an interface with what's called the system message. The system message is basically where you type sentences about how you want this thing to behave and what you want it to do. So you can say to it: you are going to act as an aid for teachers, to help them design curriculum based around the Australian Curriculum in whatever state. You can say: if you do not know the answer, then say you do not know the answer. Never respond with blah. You're always going to put in hyperlinks to the point in the document where you found it. And there's no coding in that. You're just telling it how you want it to behave. And then you can do stuff like upload sample responses: here's an example of the way I want you to respond. And you can connect it to your own data sources in your environment. 
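As a rough sketch of what that "no coding" configuration amounts to under the hood: a system message is just plain-language rules sent ahead of the conversation, and sample responses are extra example turns in the same payload. The rules and the helper below are illustrative assumptions, not the actual Azure OpenAI playground internals.

```python
# Assemble a chat payload the way the playground does: behaviour rules
# as the system message, optional sample question/answer pairs as
# few-shot turns, then the teacher's actual question last.

BEHAVIOUR_RULES = [
    "You act as an aid for teachers designing curriculum around the Australian Curriculum.",
    "If you do not know the answer, say you do not know the answer.",
    "Always include a hyperlink to the point in the source document where you found the answer.",
]

def build_messages(question, sample_exchanges=()):
    """Return the list of role/content messages for one request."""
    messages = [{"role": "system", "content": " ".join(BEHAVIOUR_RULES)}]
    for q, a in sample_exchanges:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages
```

The resulting list is the shape chat-style APIs generally accept; the point is that every "rule" here is just a sentence, exactly as described in the episode.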
And that, fundamentally, is the big shift, which I don't think we've understood at scale yet: these large language models are going to mean that computers can understand our text-based, sentence-based, or even voice-based inputs and do stuff for us, whereas we had to do 55 clicks to get stuff happening in the past. Yes, it's so true, isn't it? It's the educational user interface 2.0, I suppose. Was it Sharon Oviatt's 'The Future of Educational Interfaces'? I suppose it's kind of the next paradigm along from that, the inking kind of stuff, and it's the natural input and conversations you can have with interfaces. And it's not just edu; it's going to happen to our cars, our mobile phones. It'll be interesting to see what the next iterations of those operating systems will have in them as well. It will, and the surface of the possibilities is only just being scratched at the moment. I think once we work it out there'll be some really profound changes to the way we do it. And look, back to your personalization question: this is an opportunity to really level the playing field. I mean, we're never going to get around, at the moment, the tyranny of distance, the tyranny of access to the internet or a mobile phone or a computer or a tablet or whatever; that is a lasting challenge. But think about the possibility of every learner in this country, regardless of where they are, if they've got an internet-connected device, having a personal tutor for their learning. That could be pretty profound. Not as good as having a human tutor, not as good as having another teacher or a one-to-one support person sitting beside you, but we know that that doesn't scale and can't scale. So these systems need to be designed by educators, so there's rigor behind them. 
And then, to your point, if the interface is right, we could come up with something pretty cool. Imagine a student had, I don't know, a tool, a dashboard, a thing, where they said: what do I want to do? Here are the top 15 ways that AI can help me as a learner. "Rephrase this paragraph because I don't get it": press that button, paste it in, done. "Simplify this language." "Give me an alternative example of this." Whatever these inputs are, if we could find some way of making it easy enough for kids to use, and we could put the protection around it, which we know we can do, then I think there are some real possibilities for us using this stuff. Yeah. And you know what I was thinking as you were talking there? The other interesting element to this, in terms of these large language models, is the way you can connect with multiple disciplines. I was just thinking, as you were talking, that say ten years ago, or probably even today, there are people making a living out of training teachers in schools on apps for the iPad. There are literacy apps, there are maths apps, there's this, that and the other. Whereas with a large language model like ChatGPT, people are asking it to come up with maths lessons, geography lessons, science lessons. It's not so specialized; it's like one interface for everything. You know, I had a chat with a maths teacher the other day who was talking to their faculty about mathematics education, and they were looking at the maths capability embedded inside Bing Chat, and it's got connections into various kinds of graphics. 
So the mathematical visualizations of things, using those particular open formats and stuff. And I think ChatGPT's got its connections into Wolfram Alpha and all that kind of stuff. So one interface is almost the gold here for teachers too, because they haven't got to ask "what app have I got to use for planning my maths lesson?" You can do everything in one place, I suppose. Yeah, I think that's true. And I think, when we think about interface too, it doesn't always have to be something that I type into. I remember we were doing some work with Goodstart Early Learning, who run childcare centres around the country. And we were sort of hacking, if you like, the ideas around the challenges for educators in their centres. They use some great tools at the moment to communicate with parents about their child's learning while they're at the centre for the day, and the sort of stuff they're doing. But the reality is that, even though it's a pretty short process for an educator to gather evidence of something cool going on in little Dan's day that I want to share with his mum and dad, when I've got 25, 30, 35 kids in a room and a few educators roaming, it's still an overhead, right? 
And so think about the interface. Imagine I could just say to my phone, "capture a learning moment for Dan", or "record a video of Dan playing with the frog next to the pond", and it just knows what that means, starts my camera, I record it, and then it does what it does with it. That way educators can stay in the flow while feeding that information back to parents about what's going on in learning. Maybe it's about learning four, five or six voice commands to document learning, which somehow automatically get stitched together to provide that really important bit of communication back to the family. So, yeah, as I say, we're at the tip of the iceberg in thinking about it just with text, and I think the possibilities are much broader than that. Yeah, I agree. I was speaking to a Catholic diocese the other day who were thinking about it in terms of triaging their call centre. It was about how they can more effectively triage a parent ringing into the school system, so they're not putting people in a call queue, or in a service ticket, for something which is quite straightforward. Bots have been around for a while for that, but what they were also trying to do is analyze sentiment around the conversations people are having, so they can get sentiment analysis and actually prioritize things. So that's a different look at the entire business of schools, and I know we've talked about this for a while: the entire business of schools can utilize AI to smarten up those processes as well and share those insights. You've got the two kinds of elements, I suppose, that come together, I think. 
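The "four, five or six voice commands" idea above is, at its core, intent matching: a fixed set of spoken phrases mapped to actions plus a child's name. A deliberately tiny sketch, with invented command patterns and names, assuming speech has already been transcribed to text:

```python
import re

# Patterns for a small, fixed vocabulary of educator voice commands.
# The intent names and phrasings are invented for illustration.
INTENTS = {
    "capture_moment": re.compile(r"capture a learning moment for (\w+)", re.I),
    "record_video": re.compile(r"record a video of (\w+)", re.I),
}

def parse_command(utterance):
    """Return (intent, child_name) for a recognised command, else None."""
    for intent, pattern in INTENTS.items():
        match = pattern.search(utterance)
        if match:
            return intent, match.group(1)
    return None
```

A real system would hand the unrecognised phrases to a language model rather than return None, but the fixed-command version shows why a handful of memorised phrases is enough to keep educators in the flow.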
And there are other tools being created at the moment around process automation within large systems like school systems. There's actually artificial intelligence being designed to look across the data within your school system, say, and find the workflows people are doing that are annoyingly slow. If a thousand times someone gets something from here, puts it over there, saves it as something else, and then adds it to this system with a comment, then the system can find that and say, this is a process that can be expedited, made easier for people — and eventually the AI may even be able to create a solution to the problem, create a workflow for it. Yeah, that's so clever. And connecting that together, what I'm thinking is that some of the stuff we've got that you're using from a sales point of view would be gold from a school point of view. When I was teaching, I remember how many kids came through when you did a parents' evening. I had a parents' evening the other day and it's even shorter now — it feels like a five-minutes-in-and-out conversation. At least it's more equitable, because people are doing it via Teams and things like that, whereas when you had to go into the school, not everybody got there. But if I was the teacher in that meeting — and I know this is bad practice — sometimes you'll be halfway through the conversation before you even realize who the student is. Because you teach so many kids, it takes a while: "Oh yeah, I remember teaching David," and you're looking at your notes thinking, who's David again?
Which David is it? I teach 50 Davids this year. So it takes a while, but imagine having that personalized context before you even get to a parent-teacher meeting: "This is David's background. This is David's family circumstance — you might be speaking to his mom because his dad's passed away. He's just dropped out of his soccer class, and this is his latest information." In my recent parent-teacher interviews, the teachers obviously had a markbook — a traditional markbook, some might have been using Excel — and they basically read me three marks: this is how your kids performed in the last three assessments, with an average kind of statement about them. Some had lots of really good insights, but I just imagine if they had more information, it would be even better. Yeah. There's research showing there are over 60 places in your average school where data is stored about kids. Wow. That's obviously teachers' markbooks, LMSs, student information systems — all the traditional things — but it's also the sign-up sheet in the school hall for the production, the year 8 volleyball team list, the instrumental music class lists, all of that. The first part is about triangulating that data to work out the full picture of little Mary, or whoever. But the second part, as I alluded to before, is shifting the ownership of that data to the people who actually want to ask questions of it.
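The "triangulating data" step has a simple shape: gather whatever each of those 60-odd systems knows about one student into a single profile. The source names, IDs and fields below are made up for the example.

```python
# Illustrative sketch of triangulating one student's records from many
# systems. Source names, student IDs and fields are invented.

def triangulate(student_id: str, sources: dict) -> dict:
    """Merge every record held about one student into a single profile."""
    profile = {"student_id": student_id}
    for source_name, records in sources.items():
        if student_id in records:
            profile[source_name] = records[student_id]
    return profile

sources = {
    "markbook": {"s042": {"English": "B+"}},
    "volleyball_roster": {"s042": "Year 8 team"},
    "music_classes": {"s099": "violin"},
}
print(triangulate("s042", sources))
```

The hard part in practice is not the merge but the matching — the volleyball sign-up sheet rarely carries the same identifier as the student information system.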
So one of the things we currently see — and this is where the natural language input of GPT models and the like is going to help — is data-literate people, data scientists or switched-on people with data, creating dashboards that don't quite land with the end user if the end user isn't a data native. They take it on face value, but they can't do any other manipulation of it, drill down, or look at different variables. So you've got to get the dashboard absolutely right, which is really hard in the complexity of a school. But flip that on its head and allow the teacher to write a statement about what they want to find out before the parent-teacher interview — "Show me Dan's learning across the last six months across all subjects including mine, and include truancy data" — and then bang, up pops this rich report of information. And taking it to the next level: as a parent, if I got access to that, I wouldn't need a parent-teacher meeting. I'm not trying to take the humanity out of teaching — there's more to the education system than that context. But to be fair, the reports I've just had for my kids — NAPLAN, for instance — we can pick those up and put them straight in the bin, really, because they're so old and outdated by the time they arrive. They give you a bit of a litmus test — that's my personal opinion, I know they've got their own kind of value. And the general reports have certain tick boxes: does my daughter do dance? Yes, she does. What extracurricular things is she doing? What is she doing in English?
If I've got that information to hand all the time and I can ask for that data myself — "How is Megan doing this week?" — then rather than the text message of "your son's been late three times in the last month, he's now in a detention," you could correlate it all together and say, this term he's tracking like this. With that kind of tracking, you wouldn't even need those parent-teacher meetings. Yeah — or the conversation would be very different, because it's not about information sharing, right? It's a discussion about how they're going. The other thing to say is that even the written content of a report is fairly contrived, because it has to be — you have to be sensitive to everyone's needs. You can't say this, you can't say that, let's try and find a positive way of saying everything. And sometimes we need to have a good honest conversation, which happens in parent meetings at schools all the time. But I do agree with the idea: what happens if any parent in a school had access to a set of data that they could ask any question of about their own child? So, this idea of data being the fuel that runs AI — I think it's equally true that AI could be the fuel that drives insight out of data, because it serves different audiences who are not data literate per se. Like myself as a teacher: I taught psychology, a bit of English, history, geography. I wasn't a maths or science teacher who was big into data and stats and numbers, so if I was presented with something dense, it was a bit confusing — I couldn't get my way through it.
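The "ask questions of the data" idea might look something like this toy version, where a few phrases in a teacher's (or parent's) request are matched against the fields of a fictional student record. A real system would have a language model translate the request into structured queries; the record, field names and phrase matching here are all invented.

```python
# Toy sketch of natural-language questions over student data.
# The student record and the phrase matching are purely illustrative.

STUDENT = {
    "name": "Dan",
    "grades": {"English": "B+", "Maths": "A-"},
    "attendance": "3 late arrivals this term",
    "extracurricular": ["soccer", "drama"],
}

def build_report(request: str, record: dict) -> list[str]:
    """Assemble report lines for whichever topics the request mentions."""
    req = request.lower()
    lines = [f"Report for {record['name']}"]
    if "grade" in req or "learning" in req:
        lines += [f"{subj}: {mark}" for subj, mark in record["grades"].items()]
    if "attendance" in req or "truancy" in req:
        lines.append(record["attendance"])
    return lines

for line in build_report("Show me Dan's learning and truancy data", STUDENT):
    print(line)
```

Swapping the `if`-chain for an LLM that emits the query is exactly where the natural-language input of GPT models comes in.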
So yeah, we've got a possibility, I think, to think about not only the continuing importance of data, but the way that people digest data. And there's absolutely no reason in my mind why kids shouldn't have access to their own data either. Yeah, that's true — glad you mentioned that. Because if students can do that, then they can improve their own learning as they go. Looking at my kids at the minute, they're using GPT tools to help them. My current example — I think I mentioned it in the last podcast episode as well — is that my son was given The Handmaid's Tale to read for English, and it's pretty hard going, The Handmaid's Tale, even for an adult. He was not going to read that book — 100%, I could see it in his eyes. So he used ChatGPT to summarize it a chapter at a time, so he could get the gist of the book and understand its context. Now, that doesn't change the fact that he wasn't reading, and the point of the reading is to get used to reading, contextualizing and comprehending text — but he did get an idea and a better handle on the book and its themes, the tensions between women and men and so on, much more effectively using ChatGPT's summarization. But also, we are in a bubble — our own context bubble — and we assume every teacher can use these things, and they can't. There are going to be teachers out there who just type something into ChatGPT or Bing Chat — "give me a lesson plan for a science lesson on volcanoes" — it gives them some junk, and they go, "Well, that's rubbish then," and move on. So I think student agency is really good. I'm going to try to speak to Dr
Nick Jackson, who's down in South Australia doing a lot with student agency and getting students involved in AI and how that works. But when we step back from all of this — when you're speaking to these executives and departments of education and dioceses, what should they be looking at? If you had a couple of simple things for them to do, if they're listening to this podcast, to help them manage AI and this age of AI going forward? Well, I think the first one is to keep that data journey going, because we know that's going to be even more important once tools are infused with AI. The second one is to have discussions with all parts of the organization about what they could do with it to help them. I know this happened in Melbourne at a Catholic archdiocese: they pulled some teachers together and had an amazing conversation about what teachers are doing, could be doing, might do with AI to save themselves time. We know we've got a massive crisis in the industry at the moment — people leaving the profession. Teaching is hard work, it always has been, and everything is just piling up. There are real possibilities for us to fix this, and we need to have good conversations with people about how to use AI safely and securely to save themselves time, to take a load off — to get a first draft of something. I saw a teacher who had a gazillion things on a to-do list, and one of them was to write an email to a student who had cheated in a year 12 assessment and had to come to a meeting with the assistant principal — a Victorian Curriculum and Assessment Authority approved process.
And they were just like, it's not that I can't do it — it's that I've got to sit down in front of a blank email and craft this thing. But with Bing Chat, they were able to put in what they needed — obviously no personally identifiable information — and get a first draft of that email in about two minutes. Yeah. And then with another three or four minutes of re-editing and changing it, because it wasn't quite right, they were able to send it. We've got to have conversations at every level: what are the things we could do with this tool that will save time? What are we worried about? What should we be protecting? What are the non-negotiables we shouldn't be doing? All of those conversations need to occur, because there's a gazillion use cases out there at every level of an organization like a department of education or a Catholic diocese — whether it's the marketing team at the central office of a diocese or of a large private school, or the teachers in the classroom, or kids, or parents, or anyone. There are so many conversations that need to be had. And the third thing I'd say is that it's possible right now to put these models in your own environment, protect them, and try them. We know that many of the departments and dioceses are doing this now: they're taking the large language model, putting it in their own environment where it's protected and only accessible with certain permissions, and then feeding it their own data, so that it can respond based on the knowledge base of that organization, not the knowledge base of the internet. Yeah — some of which is rubbish.
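That "own environment, own data" pattern is essentially retrieval-augmented generation: find the most relevant internal document first, then hand it to the model as context. A crude word-overlap retriever, with invented documents and an invented question, shows the shape; real deployments would use embeddings and a vector index rather than raw word counting.

```python
# Minimal sketch of grounding answers in an organization's own documents.
# The documents and question are invented; a real system would use
# embedding-based search, not word overlap.

DOCS = [
    "Assessment policy: resubmissions must be approved by the assistant principal.",
    "Excursion policy: consent forms are due five school days before departure.",
]

def retrieve(question: str, docs=DOCS) -> str:
    """Return the document sharing the most words with the question."""
    q = set(question.lower().split())
    return max(docs, key=lambda d: len(q & set(d.lower().split())))

context = retrieve("When are excursion consent forms due?")
# In a real deployment, this prompt would go to the protected model:
prompt = f"Answer using only this context:\n{context}\nQuestion: When are excursion consent forms due?"
print(context)
```

Because the model only sees retrieved internal documents as context, the answers reflect the organization's knowledge base rather than the open internet.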
So get started with something small and think about what a use case is. But there's also a conversation to be had about not only holding discussions and meetings with a cross-section of the community, but also starting some basic training: encouraging the IT teams in your organization to do some fundamental certifications, trying to get ahead of the curve. There are teacher courses; there's stuff for students, like the Imagine Cup Junior program we run, where kids can start thinking about artificial intelligence; there are courses for IT people. There are a lot of entry points into this, and knowledge is power in this space — to have an informed discussion, you need to get your head around what it is and what it isn't, what the threats are and what the opportunities are. Yeah, so true. And on how to disseminate it — one good practice I've seen is from one of my ex-colleagues back in the UK, Chris Goodall, who's posting a lot about AI at the minute on his LinkedIn feed. What he does with his staff, at that next level down from the executives, is a session every week, and he posts it on LinkedIn too. He basically splits it into three things. There's "try this" — he'll put in a prompt as an example, something they can try this week. Then there's something to watch, and something to read — something short, something different, because there are so many tools out there for different contexts.
So you'll say, "OK, try this prompt, but adapt it for your lesson; then watch this video" — which might be something Khan Academy is doing, or something Microsoft's released recently — "and here's something to read as well," which might be around ethics and AI. So you cater to the different modalities of teachers too: some will read something, some would rather go and try it, some want to watch something. So — have you seen any tools recently, or got any examples of anything you've seen in edu that people have utilized? There are lots of them. To your point before, everyone's coming up with a company called something.ai, right? There are so many tools, and Microsoft's at a different end of the spectrum to that, because we're providing a platform for people to build tools. If you just join a Facebook group of educators — globally, or in Australia, or wherever — talking about how they're using AI, it's fascinating: the stuff they're coming up with, the way they're discussing topics like plagiarism, or whether it's a bit icky for teachers to have auto-generated report comments — does that take the teacher out of the loop? They're having really good conversations, and they're also sharing an awful lot of good tools. Now, obviously, once all of this gets out in the wild, you've got to know the privacy and the efficacy and the ethics behind what is happening with your data — all that stuff again.
And that's why the larger departments in Australia and New Zealand are thinking about bringing this into their own environment, like South Australia have done: they've set up the OpenAI large language model in their own environment, so people can go crazy with it, because they know it's safe, protected, with no data leakage — and now they're exploring the use cases from that. So there's a whole range of something.ai tools, from summarizing large PDFs to whatever else. And I should say too that plagiarism and intellectual property are going to be a really interesting space, and we're starting to see some challenges around that now. There's a whole lot of stuff the human race has to work through here, because we've got some pretty cool tools, and like every other disruptive innovation, there are going to be some things we have to work through pretty seriously. Yeah, that's a really good point, because it's going to affect everything, isn't it? I don't know how deep it's going to go into golf to help you out, but there'll probably be something that appeals at some stage. What do you reckon? No AI could help my golf, mate — it's beyond support. Well, I'd say yes then — of course AI is going to help everything. Something will definitely happen. So, thanks for joining us today on this podcast. Before we leave — are there one or two resources you'd share that would be useful for these executive teams to pick up on, to move forward with AI in their schools?
Yeah, I think it's well worth getting involved in communities who are discussing this stuff, whether on LinkedIn or in Facebook groups with educators in them — there are lots and lots of conversations happening at the moment. There's a course we've run for educators — "AI for Educators," at an aka.ms short link — a training course where teachers starting on this journey can understand a little bit about what AI is. A good resource, but there's a whole myriad of places they could go. If you're working in an IT capacity, there's a huge range of training: there are the AI Fundamentals and Data Fundamentals certifications, and each cloud provider has their own fundamentals training as well, because obviously Google and others are doing stuff too. And I suppose the worry is — without ending on a bit of a downer — my worry, and I was speaking to a diocese yesterday about this, is the old adage, right? And this isn't an AI thing — it happened years ago when I was working in a school myself. There was a website called ratemyteachers.co.uk, and kids were going in, ranking teachers and commenting about them; they could post anonymously about teachers as well. It covered the entire UK education system, and there was a global site too. It felt like the sky was falling: "What are we going to do about this?" The same thing happened with the internet. The same thing with calculators. The same thing with pens. Now we say it about AI. The conversation yesterday was about the worry: what happens if somebody puts the face of the CEO onto a naked picture, for example — like they've done with the deepfakes of Trump and Barack Obama?
And there is a limit to what you can do, right? The cat's kind of out of the bag. But there are going to be cases where a company builds a brilliant tool on, say, GPT-4, they'll call it something, teachers will use it, they'll leak credentials into it because they need to log in — and then Russian hacking groups, or state actors, or whoever, will just mine credentials from teachers. So this isn't a new conundrum, and there are tools and security practices — like you said about bringing things in-house — to manage these applications when they're getting pushed out. But it's going to be a matter of time before something visible happens, because you can't stop this — kids can go home and do it at home, right? That's right. And that's why it's so important to start thinking seriously about how you can do this in an enterprise way — you know what that means. So, you're right.
I mean, it's no different to any of the myriad of tools I've ever signed up for with a personal email address — websites for this, shopping sites for that. That's not a new problem, but this is possibly an exacerbating factor for it, and it's a responsibility for everybody. I remember a comment somebody made to me in a school once: IT people tend to get it in the neck for all the cyber stuff, the security things, because of the policies and the technical implications, but an IT person said to me, "If a student brought a knife into school, you wouldn't take them to the woodwork class." Security is everybody's responsibility, including in the use of these tools. So sometimes the responsibility does have to land at the teacher's door: if people are going to use these technologies, you can't just close them off. That's all we could do with the Rate My Teachers site — we said, "Well, we can't block this; we can block access in school, but they're going to go home and put ratings of the teachers on there. Let's actually try to embrace it." Some teachers would put in the link themselves: "Did you enjoy my lesson? Give me feedback, post on there." You've got to kind of embrace it. And I think some teachers are already embracing tools like Bing Chat and the like, because they're sharing what they've done — or even turning the tools over to the kids. I saw an English teacher the other day using the image tools — Midjourney and Bing Image Creator — in reverse: using prompt engineering to develop kids' English and descriptive writing.
The challenge was who could come up with the best image — you have to reverse-engineer what prompt would produce it. She started showing images — say, a cityscape in the dark with a cat in it — and the kids had to try to duplicate that with English narrative: "Show me a picture of a cat in the dark with neon lights saying café, in the style of" whatever painter. It's really interesting the way people can embrace these technologies. Yeah, and the creativity of educators is just forever amazing and unlimited — there are very creative ways teachers will think about using this stuff. Another example I saw — this was months and months ago, when the plagiarism discussion was really on fire — was for when a student is asked to write an essay or a response at home. If the teacher gets that electronically, what they would do is pop it into GPT and ask it to create seven comprehension questions based on that text. Then when the kids came into class, they'd sit down and be given the seven comprehension questions as half their marks. Wow. So if you didn't write the essay, you can't answer the questions — but if you did, you're fine. That's clever. There are really interesting ways, and I'm sure that's the tip of the iceberg in terms of how teachers are thinking about how assessment might change, and about some of the impacts of this. But anyway, although we're going through the Gartner hype cycle — this is going to be the biggest thing ever, and then we head into the "trough of disillusionment," as it's called (I love the emotive language) — where everyone's sort of thinking, oh, what about this and what about that?
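The comprehension-question trick described a moment ago boils down to wrapping the submitted essay in a prompt and sending it to the model. A minimal sketch, with illustrative prompt wording:

```python
# Sketch of the comprehension-question assessment idea: wrap a submitted
# essay in a prompt asking for questions only its author could answer.
# The prompt wording and example essay are illustrative only.

def comprehension_prompt(essay: str, n: int = 7) -> str:
    return (
        f"Read the following student essay and write {n} comprehension "
        "questions that can only be answered by someone familiar with "
        f"its specific arguments and examples.\n\nEssay:\n{essay}"
    )

prompt = comprehension_prompt("The green light in Gatsby symbolises hope.")
print(prompt.splitlines()[0])
```

The resulting questions are asked in class, unseen, so the marks test familiarity with the submitted text rather than the ability to produce it.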
I do think we've got some profound opportunities if we involve educators — who are the specialists in learning — in the way these tools are crafted to help kids understand more, and the way they can be used to help teachers save time and teach better. I think we've got a profound opportunity at the moment to change education for good. Yeah, definitely. Well, on that good note — unlike my really sour note about the security element — thank you, Trav, for joining us today. Your insights have been amazing. I'll put some of the links in the show notes, but thanks, Travy — it's phenomenal. Keep up the good work you're doing in edu and supporting these systems, because we're certainly in for a ride over the next couple of years. Yeah, thanks, Dan. Appreciate it. No problem.
