AI in Education Podcast

Dan Bowen and Ray Fleming
Nov 19, 2023 • 27min

Rapid Rundown - Another gigantic news week for AI in Education

Rapid Rundown - Series 7 Episode 3

All the key news since our episode on 6th November - including new research on AI in education, and a big tech news week!

It's okay to write research papers with Generative AI - but not to review them!
The publishing arm of the American Association for the Advancement of Science (they publish six science journals, including the "Science" journal) says authors can use "AI-assisted technologies as components of their research study or as aids in the writing or presentation of the manuscript" as long as their use is noted. But they've banned AI-generated images and other multimedia "without explicit permission from the editors". And they won't allow the use of AI by reviewers, because this "could breach the confidentiality of the manuscript". A number of other publishers have made similar announcements recently, including the International Committee of Medical Journal Editors, the World Association of Medical Editors and the Council of Science Editors.
https://www.science.org/content/blog-post/change-policy-use-generative-ai-and-large-language-models

Learning From Mistakes Makes LLM Better Reasoner
https://arxiv.org/abs/2310.20689
News article: https://venturebeat.com/ai/microsoft-unveils-lema-a-revolutionary-ai-learning-method-mirroring-human-problem-solving
Researchers from Microsoft Research Asia, Peking University, and Xi'an Jiaotong University have developed a new technique to improve large language models' (LLMs) ability to solve math problems by having them learn from their mistakes, akin to how humans learn. The strategy, Learning from Mistakes (LeMa), trains AI to correct its own mistakes, leading to enhanced reasoning abilities, according to the paper. The researchers first had models like LLaMA-2 generate flawed reasoning paths for math word problems; GPT-4 then identified the errors in the reasoning, explained them, and provided corrected reasoning paths; and the corrected data was used to further train the original models (a rough sketch of this loop is included below).

Role of AI chatbots in education: systematic literature review
International Journal of Educational Technology in Higher Education
https://educationaltechnologyjournal.springeropen.com/articles/10.1186/s41239-023-00426-1#Sec8
Looks at chatbots from the perspective of students and educators, and the benefits and concerns raised in the 67 research papers they studied: "We found that students primarily gain from AI-powered chatbots in three key areas: homework and study assistance, a personalized learning experience, and the development of various skills. For educators, the main advantages are the time-saving assistance and improved pedagogy. However, our research also emphasizes significant challenges and critical factors that educators need to handle diligently. These include concerns related to AI applications such as reliability, accuracy, and ethical considerations."
Also, a fantastic list of references for papers discussing chatbots in education, many from this year.
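For readers who want to see the shape of the LeMa idea, here's a minimal illustrative sketch of that generate-correct-retrain loop. It assumes an OpenAI-style chat API; the model names, prompts and helper functions are our own stand-ins, not the paper's actual implementation.

```python
# Illustrative sketch of a LeMa-style "learning from mistakes" data pipeline.
# Model names, prompts and helpers are assumptions for clarity; see the paper
# (arXiv:2310.20689) for the actual method and hyperparameters.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def generate_flawed_solution(problem: str) -> str:
    """Step 1: a weaker 'student' model drafts a (possibly flawed) reasoning path."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the LLaMA-2 generator used in the paper
        messages=[{"role": "user", "content": f"Solve step by step:\n{problem}"}],
    )
    return resp.choices[0].message.content


def correct_solution(problem: str, flawed: str) -> str:
    """Step 2: a stronger 'teacher' model finds the mistake, explains it, and fixes it."""
    prompt = (
        f"Problem:\n{problem}\n\nStudent solution:\n{flawed}\n\n"
        "Identify the first incorrect step, explain why it is wrong, "
        "then give a fully corrected step-by-step solution."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def build_correction_dataset(problems: list[str]) -> list[dict]:
    """Step 3: collect (problem, flawed answer, correction) triples for further fine-tuning."""
    dataset = []
    for problem in problems:
        flawed = generate_flawed_solution(problem)
        corrected = correct_solution(problem, flawed)
        dataset.append({"problem": problem, "flawed": flawed, "corrected": corrected})
    return dataset
```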
More Robots are Coming: Large Multimodal Models (ChatGPT) can Solve Visually Diverse Images of Parsons Problems
https://arxiv.org/abs/2311.04926
https://arxiv.org/pdf/2311.04926.pdf
Parsons problems are a type of programming puzzle where learners are given jumbled code snippets and must arrange them in the correct logical sequence, rather than producing the code from scratch (there's a toy example of a Parsons problem below). "While some scholars have advocated for the integration of visual problems as a safeguard against the capabilities of language models, new multimodal language models now have vision and language capabilities that may allow them to analyze and solve visual problems. … Our results show that GPT-4V solved 96.7% of these visual problems". The findings have significant implications for computing education: the high success rate of GPT-4V in solving visually diverse Parsons Problems suggests that relying solely on visual complexity in coding assignments might not effectively challenge students or assess their true understanding in the era of advanced AI tools. This raises questions about the effectiveness of traditional assessment methods in programming education, and the need for innovative approaches that can more accurately evaluate a student's coding skills and understanding. It's interesting to note that research earlier in the year found LLMs could only solve half of these problems - so things have moved very fast!

The Impact of Large Language Models on Scientific Discovery: a Preliminary Study using GPT-4
https://arxiv.org/pdf/2311.07361.pdf
By Microsoft Research and Microsoft Azure Quantum researchers: "Our preliminary exploration indicates that GPT-4 exhibits promising potential for a variety of scientific applications, demonstrating its aptitude for handling complex problem-solving and knowledge integration tasks". The study explores the impact of GPT-4 in advancing scientific discovery across various domains, investigating its use in drug discovery, biology, computational chemistry, materials design, and solving Partial Differential Equations (PDEs). It primarily uses qualitative assessments, with some quantitative measures, to evaluate GPT-4's understanding of complex scientific concepts and its problem-solving abilities. While GPT-4 shows remarkable potential and understanding in these areas, particularly in drug discovery and biology, it faces limitations in precise calculations and in processing complex data formats. The research underscores GPT-4's strengths in integrating knowledge, predicting properties, and aiding interdisciplinary research.

An Interdisciplinary Outlook on Large Language Models for Scientific Research
https://arxiv.org/abs/2311.04929
Overall, the paper presents LLMs as powerful tools that can significantly enhance scientific research. They offer the promise of faster, more efficient research processes, but this comes with the responsibility to use them well and critically, ensuring the integrity and ethical standards of scientific inquiry. It discusses how they are being used effectively in eight areas of science, and deals with issues like hallucinations - but, as it points out, even in engineering, where there's low tolerance for mistakes, GPT-4 can pass critical exams. This research is a good source of focus for researchers thinking about how LLMs may help or change their research areas, and help with scientific communication and collaboration.
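To make the Parsons problem format from the "More Robots are Coming" paper concrete, here's a tiny made-up example of the kind of puzzle it describes - shuffled lines of a working function that the learner has to reassemble. (The paper's problems are presented as images, including visually diverse variants; this sketch only shows the underlying idea.)

```python
# A toy Parsons problem: the learner receives the shuffled lines and must
# reorder them into a working function. Illustrative example only.
import random

correct_lines = [
    "def average(numbers):",
    "    total = 0",
    "    for n in numbers:",
    "        total += n",
    "    return total / len(numbers)",
]


def make_parsons_problem(lines: list[str], seed: int = 42) -> list[str]:
    """Shuffle the solution lines to present them as a reordering puzzle."""
    shuffled = lines.copy()
    random.Random(seed).shuffle(shuffled)
    return shuffled


def check_answer(submitted: list[str]) -> bool:
    """A submission is correct if the lines are back in the original order."""
    return submitted == correct_lines


if __name__ == "__main__":
    puzzle = make_parsons_problem(correct_lines)
    print("Rearrange these lines into a working function:")
    for line in puzzle:
        print(repr(line))
```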
With ChatGPT, do we have to rewrite our learning objectives -- CASE study in Cybersecurity
https://arxiv.org/abs/2311.06261
This paper examines how AI tools like ChatGPT can change the way cybersecurity is taught in universities. It uses a method called "Understanding by Design" to look at learning objectives in cybersecurity courses. The study suggests that ChatGPT can help students achieve these objectives more quickly and understand complex concepts better. However, it also raises questions about how much students should rely on AI tools. The paper argues that while AI can assist in learning, it's crucial for students to understand fundamental concepts from the ground up. The study provides examples of how ChatGPT could be integrated into a cybersecurity curriculum, proposing a balance between traditional learning and AI-assisted education. "We hypothesize that ChatGPT will allow us to accelerate some of our existing LOs, given the tool's capabilities… From this exercise, we have learned two things in particular that we believe we will need to be further examined by all educators. First, our experiences with ChatGPT suggest that the tool can provide a powerful means to allow learners to generate pieces of their work quickly…. Second, we will need to consider how to teach concepts that need to be experienced from "first-principle" learning approaches and learn how to motivate students to perform some rudimentary exercises that "the tool" can easily do for me."

A Step Closer to Comprehensive Answers: Constrained Multi-Stage Question Decomposition with Large Language Models
https://arxiv.org/abs/2311.07491
What this means is that AI is continuing to get better - and people are finding ways to make it even better - at passing exams and multiple-choice questions.

Assessing Logical Puzzle Solving in Large Language Models: Insights from a Minesweeper Case Study
https://arxiv.org/abs/2311.07387
Good news for me though - I still have a skill that can't be replaced by a robot. It seems that AI might be great at playing Go, and Chess, and seemingly everything else, BUT it turns out it can't play Minesweeper as well as a person. So my leisure time is safe!

DEMASQ: Unmasking the ChatGPT Wordsmith
https://arxiv.org/abs/2311.05019
Finally, I'll mention this research, where the researchers have proposed a new method of ChatGPT detection that assesses the 'energy' of the writing. It might be a step forward, but to be honest it took me a while to find the thing I'm always looking for with detectors, which is the false positive rate - i.e. how many students in a class of 100 it will accuse of writing something with ChatGPT when they actually wrote it themselves. The answer is that it has a 4% false positive rate on research abstracts published on arXiv - but apparently it's 100% accurate on Reddit. Not sure that's really good enough for education use, where students are more likely to be using academic style than Reddit style! I'll leave you to read the research if you want to know more, and learn about the battle between AI writers and AI detectors. There's a quick worked example below of what that false positive rate means in practice.

Harvard's AI Pedagogy Project
Outside of research, it's worth taking a look at work from the metaLAB at Harvard called "Creative and critical engagement with AI in education". It's a collection of assignments and materials inspired by the humanities, for educators curious about how AI affects their students and their syllabi. It includes an AI starter, an LLM tutorial, lots of resources, and a set of assignments: https://aipedagogy.org/
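And the promised worked example of why false positive rates matter so much for detectors - a back-of-envelope calculation only, assuming every student writes their own work and that false positives are independent across students:

```python
# Expected number of honest students wrongly flagged by an AI detector.
def expected_false_accusations(class_size: int, false_positive_rate: float) -> float:
    return class_size * false_positive_rate

# DEMASQ's reported 4% false positive rate on arXiv-style abstracts:
print(expected_false_accusations(100, 0.04))  # -> 4.0 students per 100 wrongly flagged

# Even a "very low" 1% rate flags someone regularly:
print(expected_false_accusations(30, 0.01))   # -> 0.3 per assignment, roughly 1 every 3-4 assignments
```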
Microsoft Ignite Book of News
There's way too much to fit into the shownotes, so just head straight to the Book of News for all the huge AI announcements from Microsoft's big developer conference.
Link: Microsoft Ignite 2023 Book of News

________________________________________

TRANSCRIPT

For this episode of The AI in Education Podcast, Series: 7, Episode: 3
This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections.

Hi everybody. Welcome to the AI in Education podcast and the news episode. You know, it went so well last time, Ray, we thought we'd better keep this going, right? Well, do you remember, Dan, I said there's so much that's happened this week that it'll be difficult to keep up? Well, there's so much that's happened this week, it'll be difficult to keep up, Dan. So, as you know, I've been reading the research papers and my goodness, there has been another massive batch of research papers coming out. So, here's my rundown. This is like Top of the Pops in the UK, you know, like a top 10. Here's my rundown of the interesting research papers this year. So, interestingly, there's some news out that apparently it is okay to write research papers with generative AI. So, the publishing arm of the American Association for the Advancement of Science - now that is a mouthful - fortunately, their top journal is called Science, which is not a mouthful. So, they've said authors can use AI-assisted technologies as components of their research study or as aids in writing or presentation of the manuscript. So, you're allowed to use ChatGPT and use that to help you write a paper, as long as you note the use of it. And the other interesting thing is, however, they have banned AI-generated images or other multimedia unless there's explicit permission. So that's interesting, because some of the other publishers are saying they'll allow you to create the charts using AI. And they also have said you cannot use AI as a reviewer to review the manuscript. And their worry is you'll be uploading it into a public AI service and it could breach the confidentiality. Now that was a big one, because Science is a big proper journal. But a bunch of other academic journals, also big proper journals, have come out with the same. So the International Committee of Medical Journal Editors came out with a policy in May, and the World Association of Medical Editors and the Council of Science Editors all came out with policies. So it would appear that although there are many, many schools that won't let you write anything with AI, the official journals will, as long as you're declaring it. And maybe that's a good policy. So there's a link in the show notes to that. So, whizzing down the other research: apparently learning from its own mistakes makes a large language model a better reasoner. So this is interesting. This is research from the Microsoft Research Asia team, Peking University and Xi'an Jiaotong University. They developed a technique where what they do is they generate some flawed reasoning using LLaMA-2, and then they get ChatGPT to correct it. And what they're finding is learning from those mistakes makes the large language model a better reasoner at solving difficult problems. That is really interesting.
Another bit of research, there was a really good bit of research about the the title is the role of AI chatbots in education colon systematic literature review. So now what that means is somebody has spent their time reading all the other papers about the use of AI chatbots in education. They read 67 papers and they pulled out the both the benefits and the concerns. So um the benefits for students you know helping with homework, helping with study, personalizing the learning experience and development of new skills. They also found that there's benefit for teachers. So time saving and improved pedagogy. Are you a pedagogy or a pedagogy person then? Pedagogy. Okay. And they also then pulled out there are challenges and things like reliability, accuracy and ethical consideration. So none of that should be surprise to people. The paper is a good summary about all the research. It's also a fantastic list of these 67 other papers, many of which have come out this year. So, good paper. Uh really good if you if you're faced with colleagues who are going, I don't understand what this is all about. I don't understand why this would be good for teachers or students. Give it to them. The next next paper was titled more robots are coming. Large multimodal models can solve visually diverse images. You're picking all of the research articles. really long titles here. No, Dan, that is just the title. That isn't the abstract or the whole paper. So, Parson, do you know what Parson's problems are, Dan? Okay. Parson's problems are what they did with computer science, which was basically give you a bunch of code and then jumble it up and you have to try and work out what's wrong and where it should be in the right place. You imagine that? So, uh it's like me returning to code I when I was 16. Got no idea what order it should be in. So, what they do is that's a way that they think of giving students interesting challenges where it's more of a visual challenge, you know, structuring the code. Unfortunately, they thought that was a way to defeat large language models and it isn't because large language models, as you know, have worked on developing large visual models. So, they can actually look at code, work out what's wrong, and tell you how to do it. Statistic time, they can do it 96.7% of the time. That's using chat GPT4 vision. So, significantly applications for computing education because it's really good at solving parts and and it's a multimodal effect using images when you were talking through that then I think okay you know you're analyzing text or code but it's actually using the pictures to organize that that's that's fascinating and it's been moving fast because halfway through this year it could only solve half the problems now we're towards the end of the year 7% that's yet hugely significant Did you ever get 96.7% on any of your exams now? Okay, next paper. I promise you I'll only read the title. The impact of large language models on scientific discovery colon a preliminary know this one as well. So this is this is mentioned in like yeah this is really exciting. Yeah. So basically what they did was look at how good is it at helping in scientific discovery across a bunch of scientific domains. So drug discovery, biology, comp computational chemistry, materials designing, and solving partial differential equations. I don't know what that it's like a differential equation, but only a part of it presumably. Okay. Well, I'll never have to do one in my life because chat GPT can do it for me. 
So, what it's found is it's really good at tackling tough problems in that area. They say that the research underscores the fact that they can bring different domains of knowledge together, predict things, and help with interign. where they were talking about materials and compounds that they'd found in a matter of weeks rather than 9 months or so. So, you know, obviously material science this is going to have a a profound impact. So, that's quite interesting. So, the next one has got interdicciplinary title is an inter oh blimey Dan an interdisiplinary outlook on large language models for scientific research. I need a translator for the titles. So, basically it talks about how large language models can do scientific research. So just like the last paper, so it talks about ah things are going to be faster, they're going to be more efficient, but it's looking at the research processes themselves. It also talks about the downside. So things like integrity and ethical standards and how you manage that and deals with things like hallucinations. But it points out that even in something like engineering where there really is not that much tolerance for mistakes, it can pass the exams. So it's great for archers that need to think about how these models can help them in their own research and help them with communication. I I built a GPT to rewrite a scientific paper for the reading age of a 16-year-old. And the reason I did that is that honestly I find many of the papers quite inaccessible. So helping out with scientific communication could well be yes building a wider audience. Okay. Uh let's whiz through some others now really fast. So as fast as I can read the titles. A paper called with chat GPT. Do we have to rewrite our learning objectives? A case study in cyber security. So basically they looked at and said how does it change the way that we both teach and learn about cyber security. The great things that they found was that chat GPT working alongside the student helps them to learn more quickly and helps them to understand complex concepts much better. But it then raises some questions like if chat GPT or AI you can do the early study stuff. Will students just skip past it? And so the question was, how do we keep them engaged in the simple to-do things that they can then build upon as they go further through? Really good paper. I think it applies to other areas of learning as well. Uh the next paper, a step closer to comprehensive answers. So, oh sorry that wasn't the whole title. The rest of the title was constrained multi-stage question decomposition with large language models. So the whole paper boiled down in a sentence says AI is getting better and people are finding ways to make it even better at passing. There seems to be a lot of that now, doesn't there? There's several you've just quoted there all talking about the actual the pass rates and the way they're actually getting more accurate. Excellent. Yeah. And also the question about how do we change assessment and I know we've got an interview coming up with Matt Esman where we'll talk about some of the assessment stuff. Okay. So other things Uh, next paper, assessing logical puzzle solving in large language models. Insights from a mind sweeper case study. Okay, so Dan, I know that you were playing Mindcraft when you were a kid. I was a mind sweeper kid. Good news. I have a skill that cannot be replaced by a robot. 
It seems that although AI is great at playing Go and chess and every other game, but apparently can't play mind sweeper as well as I've got a unique not to be this by robots's job. Now, there were two different papers. I'm only going to reference one. One's called Demask, unmasking the chat GPT wordsmith. Now, that's quite a Reddit friendly title for the paper. Uh, but basically, they proposed a completely new way of being able to do AI detection. And what do we think about detection? They do not work. So, demask was demasked. The next day, they said this can detect things. The next day, somebody proved that it couldn't. Uh, there was another paper that came out that said, "Oh, we've got a great way of detecting things and it detects everything." I am ever so suspicious with this research whenever it doesn't talk about false positives. So, a false positive is where it says this was written with chaps GPT and it wasn't. And unless the false positive rate is super super low, a teacher is going to be accusing a student of cheating when they have not. We always talking low percentages, but if say it's got a false positive rate of 1%, which is very low. Very low. That means if you've got a class of 30, every three assignments you're going to be telling a student that they cheated when they absolutely did not. So look, pretty much we can be sure that they do not work. And then the last thing, this wasn't a paper, but something I thought I'd mention. Harvard have got a really, really good website called AI pedagogy.org. Well, because otherwise it's AI pedagogy. org about and critical engagement with AI and education. It's some really good stuff with the humanities. There's syllabi, there's activities, assignments you can give to students. It's worth watching that as it develops. Thanks for sharing those. What about the tech world, Dan? Because it was Microsoft in Ignite last week, and I know that on this podcast you do not officially represent the voice of Microsoft despite the fact that that's who you work for day and night. But Dan, you'll have been watching Ignite. Tell me what's exciting. A AI infused through everything as we know. And I do think there was there was a quite nice story narrative to this. It started off with the hardware side of it. I know the the partnerships that companies okay in this context Microsoft were doing with Nvidia, but also the first party chips that were being created. So there's an Azure Maya chip that's now being created, an Azure Cobalt CPU that's now created. So there's several different interesting pushes. and architectures which is all meant to kind of support all these AI workloads in the cloud. So I I think there was a lot of coverage in that section. You know everybody was mentioned Intel Microsoft own inhouse stuff Nvidia also some of the Nvidia software which is interesting is also running in Azure now as well. So I think it's very much bringing lots of the hardware acceleration together. So I thought that was a good opening for Satia. So it's not just new data centers being built around the world. There's new data centers with new computing capacity. Yes, that's right. And even interconnected capabilities where even down to the level of the different fibers, there was a hollow core fiber that was introduced as well. It's always interesting to know the things that are going on in these data centers which is individual atoms being sent through holo fiber rather than via light. So very very interesting technology and from the hardware side. 
But obviously then spinning to the software side there was a lot of things which came out. Some of the big notable things for the podcast listeners Azure AI studio is now in public preview that brings together a lot of the pre-built services and models prompt flow the security responsible AI tools it brings it all together in one studio to to manage that going through there's lots of that's based on the power platform if people have been playing with that in the past so there's a lot of drag and drop interfaces going on to help you kind of automate a lot of this prompt generation which which if people are technically minded on the on the podcast people and bots for quite a lot of time with those kind of tools. So that's kind of good to see that emerging out of the the keynotes. So look out for that Azure AI studio. So our public preview definitely worth having a play with. There's a there was an extra announcements around the copyright commitment which might not sound that interesting but it's quite you know if you do something if you're legal firm or a commercial firm and you use co-pilots to do something and generate content for you then there's a copyright commitment has just been expanded to include OpenAI which means that Microsoft will support any legal costs if anything should be picked up by third parties around that. I love that Dan because I know that it's been there in the Microsoft copilot but I love the announcement is now extended out to the Azure Open AI service and the reason I'm excited about that is because that's what we build in my world. Uh we're building on top of the Azure Open AI services. So being able to pass on that copy Right. Protection is really important for organization. Hey Dan, before they mentioned CCC, the copyright copyright, the co-pilot copyright commitment. Um, they also mentioned the Azia AI content safety thing which is what was used in South Australia. I remember reading the case study about that which was about helping to protect. Yeah, that's right. So that's a good call up. There's so many things here. Yeah. The Azure AI content safety is available inside Azure AI studio and that allows you to evaluate models in one platform. So rather have to go out and check it elsewhere and that's there's also the preview of the features which identify and prevent attempted jailbreaks on your models and and things. So that you know exactly for the South Australia case study they were using that quite a lot but now it's actually prime time that it's now available to people who are developing those those models which is great. Lots of announcements around 3 being available in Azure Open AAI. which is the the image generation tools. There's lots of different models now. GPD4 turbo in preview, GPD 3.5 turbo. So, there's a lot of stuff which are now coming up in GA as well. So, there's lots in the model front as well, including GPD4 Turbo Vision. Yeah, I I like that turbo thing because that seemed to add more capabilities a bit like the OpenAI announcements. They mentioned the Turbo stuff, but the other thing was just like the Open AI announcement It was also better faster better parody as the open AI costs that were announced at their dev day as well around um developer productivity. So the stuff which is announcing go github so co-pilot chat and then github copilot enterprise will be available from February next year. So for devs there was a lot of things have a look at the book of news we put that in the in the connection there. 
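For listeners who want to try the Azure OpenAI models mentioned here, this is a minimal illustrative sketch of a chat call against an Azure OpenAI deployment using the openai Python package. The endpoint, deployment name and API version below are placeholders for your own resource's values, not anything announced at Ignite.

```python
# Minimal sketch of calling a GPT-4 Turbo deployment through Azure OpenAI.
# The endpoint, deployment name and API version are placeholders; substitute
# the values from your own Azure OpenAI resource.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2023-12-01-preview",  # example version; check your resource
)

response = client.chat.completions.create(
    model="gpt-4-turbo",  # the *deployment* name you created, not the base model name
    messages=[
        {"role": "system", "content": "You are a helpful teaching assistant."},
        {"role": "user", "content": "Explain Parsons problems in two sentences."},
    ],
)
print(response.choices[0].message.content)
```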
One of my really exciting announcements that I listened to was or were more excited about I suppose was the Microsoft fabric has been available and I know that doesn't relate technically to generative AI but it's really good for a lot of my customers that are using data warehousing as one of their first steps into AI analytics and then all of the generative AI elements on top of that so co-pilot and fabric co-pilot and powerbi lots of announce there including things around governance and the expansion of purview along there so that was really excited but then we went into the the kind of really exciting bits around the productivity platform So then we've talked about Mrosoft copilot. So one of the first things to to think about as well is that that it has been a bit complicated with Bing Chat Enterprise and and Bing Chat tools. They now going to be renamed Microsoft C-Pilot essentially. So that's going to be the kind of co-pilot that you'll get which will be inside any of your browsers, Safari, whatever and also inside your sideloaded bar in Edge as well. So C-pilot is going to be that new name for Copilot Enterprise um which is being chat enterprise. So I'm even making a meal this this thing where we try and make it a bit easier for people to understand. And then the good thing is as well that they've now announced copilot studio which brings together all of these generator plugins custom GPTs. I'm sure that's something that you're going to be working with quite a bit. Right. So that's going to be able you to customize your co-pilot and co-pilots within the Microsoft 365 organization. If you're an enterprise or customer, create your own co-pilot. pilots, build them, test them, publish them, and customize your own GPTs in your own um organization. So, that'll be really exciting. I am I'm excited also by the fact that um I can't always remember all the names. I remember there being Viva and Yava, which I love, has been renamed into something else, but now I only need to remember the product name Copilot because Microsoft 365 copilot, Windows Copilot, Bing Chat's been called Copilot, Power Apps Copilot. All I need to do is think of a product name and I copilot on That's exactly right. Yeah, there's a lot of other other new interesting copilotes that were announced as well around new teams meeting experiences with co-pilot with collaborative notes. I've been using quite a lot of these internally recently and lots of the intelligent recap stuff is really good as well. So there's a lot of co-pilot announcements as well you can get lost in the weeds with around PowerPoint, Outlook and all of those tools. But really, really good in integrations and I suppose you know we're going to see a lot more of that. The the interesting element as well is that Windows AI Studio is available in preview coming soon as well. So that's the that's the other thing I'm sure you'll be working on Rey where is being able to develop co-pilots and Windows elements to your C-pilot ecosystem as well. So you'll be able to deploy SLM so smaller language models inside the Windows ecosystem uh to be able to do things offline as well. So there's going to be a big model catalog in there. So that'll be quite interesting. So you've got the copilot stuff and you've got the Windows AI studio. studio tools as well for devs. So that'll be quite interest. Great. So everything's in the cloud and everything's got a coil. Exactly. There's lots of copilot stuff included for security as well and I've been playing with security copilot. 
That's that's essentially your generative AI. If you get an incident that happens in your environment and there might be ransomware attack called, I don't know, north wind 568 or whatever it might be. That's probably something that exists, isn't it? But anyway, that that'll then tell you where that origin of that ransomware might be from. give you information about what what that actually does. So it's it's like guide for security size or so that'll be really really interesting when that when that comes into GA because it does get quite complex in the security area. There was a lot around the I suppose the the tools around dynamics and things like that. So co-pilot for service management, co-pilot for sales or more enterprises who might be using dynamics there was a whole heap of of co-pilot automations around the power platform which is citizen development platform the Microsoft release so power automate I've got a whole heap of things around there about generating scripts generating documentation for governance there was a whole raft of products now available around you know your supportive tools with inside app development but also the way you can use copalot to create things for you as well so there's a lot of stuff um in the in the power platform which is quite exciting but there was so many connections we put the book of news in the show notes here, but very very exciting for right from the hardware right up to citizen development. So, you know, I'm looking forward to seeing these coming. So, if I'm in the IT team, I should go and read the book of news. If I'm outside of the IT team, I should just add the word co-pilot onto anything I'm talking about. Okay. So, Dan, we've just done the whole two weeks of news about research and the Ignite stuff and all the developments there. We've been talking for about 20 minutes. So, we just need to go and check the internet because there has been one other piece of news going on which is that Sam Olman may or may not be CEO or chair or not CEO of open AI. I mean in the last 20 minutes fascinating the the thing that really intrigued me and it's made me think obviously there's been a lot happening in this space like your thoughts on this um the board of open AI I supposing about the actor structure the board of the open AAI it's a nonfor-profit board board. It's a 501c I think they call it in in the US which is your your kind of nonprofit entity and it it feels like that there's some tension going on with our nonforprofit entity but nobody really knows there's so many things going on on online about this but I was the interesting thing for me was that there's six people on the board and even just doing some research and trying to understand who those six people were and how that all works was quite interesting fe what what does that mean? Well, it's interesting because yeah, I've been doing the director's courses recently and it's all about the strategic way of thinking as a director which has tended to stay detached from the execution so that you're setting strategy. So I I find it quite interesting that the board have made something that gets really really close to execution. 
You know, normally they're working much longer time scales but perhaps they're not working in longer time scales because uh just before we restarted the I saw the news story was that they might be talking to Sam about coming back as CEO cuz I I had thought of it as a Steve Jobs kind of thing when he left Apple and then he came back what is it a decade later and saved the business. It's kind of like that. I hadn't expected it to be over the course of one weekend though. So, uh we've got to try and get this out on Monday just so that we're vaguely up to date. I don't think I'd realize that OpenAI is a not for-p profofit that is focused on how to achieve AGI. So, kind of achieving that general intelligence is what they're going for as the as the nonprofit. So, I don't think I'd really understood that that everything they're doing at the moment is a step towards reaching that general intelligence position artificial intelligence. That's what it's for, Dan. Oh my goodness. We'll have to go back and edit it so we don't look stupid. But, you know, that is what Open AI is all about is how do they reach that general intelligence level? I think this is a little road bump on the way. for everybody, right? Because it doesn't matter how big you are as a company, when things are moving so quickly, whether you're a school or whether you're a university or a commercial customer or a or a large nonforprofit, you know, you have to be very careful about the direction like you're saying. I suppose there are things in place like you're saying for the courses you've done where you do stay strategically at arms length so you can make long-term decisions and there's a lot going on. very very quickly with open AI specifically. I don't think I can't think of any company that has propelled so quick and had such an impact. So these things do happen and they have ripple effect. They do send ripple effects down through the the communities but it does give us a bit of a thoughtprovoking pause to think okay where where are we going with this technology? Would would you ever have believed that the news like the CEO of an AI company being sacked would be like number three or number four story on BBC news or the Guardian or those to make it into mainstream news is really fascinating in such a short period of time. Damn, there has been so much news. We have been covering two weeks worth of news. That's why it's taken us so long. But my goodness, we better stop because this is supposed to be a quick snap of the news. But the key for everybody would be find the links, find the papers, find the news on the show notes. We definitely won't put anything in about OpenAI. Go and just Open your favorite website to find out what's happening on that because you'll be more upate than us.
Nov 10, 2023 • 16min

Rapid Rundown: A summary of the week in AI in education and research

This week's rapid rundown of AI in education includes topics such as false AI-generated allegations, UK DfE guidance on generative AI, Claude's undetectable AI writing, the contrast between old and new worlds of AI, OpenAI's exciting announcements, specialization and research bots, GPT-4 updates, and gender bias in AI education.
Nov 1, 2023 • 29min

Regeneration: Human Centred Educational AI

After 72 episodes, and six series, we've some exciting news. The AI in Education podcast is returning to its roots - with the original co-hosts Dan Bowen and Ray Fleming. Dan and Ray started this podcast over 4 years ago, and during that time Dan's always been here, rotating through co-hosts Ray, Beth and Lee, and now we're back to the original dynamic duo and a reset of the conversation. Without doubt, 2023 has been the year that AI hit the mainstream, so it's time to expand our thinking right out. Also, New Series Alert! We're starting Series 7 - In this episode of the AI podcast, Dan and Ray discuss the rapid advancements in AI and the impact on various industries. They explore the concept of generative AI and its implications. The conversation shifts to the challenges and opportunities of implementing AI in business and education settings. The hosts highlight the importance of a human-centered approach to AI and the need for a mindset shift in organizations. They also touch on topics such as bias in AI, the role of AI in education, and the potential benefits and risks of AI technology. Throughout the discussion, they emphasize the need for continuous learning, collaboration, and understanding of AI across different industries. ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 7 Episode: 1 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Welcome to the AI podcast. Uh, I'm Dan and look who's beside me. Hey Dan, it's Ray. You'll remember me from when we first set up the podcast together in 2019. I sure do. This is exciting time. This is podcast reboot, right? This is like getting the band back together, Dan. It is. So, what we're thinking is, as we've I alluded to in a couple of episodes previously because AI is moving so quickly and the technology space is really driving a lot of change and transformation specifically around generative AI now which is one aspect of the entire debate around AI. We can really start to focus in on looking at some of these new trends because it's moving so quickly, right? Oh gosh, it is moving so fast. I I I think about genative AI from the moment I wake up to the moment I go to bed because the the business I'm involved in is all about genative AI. You go to bed sometimes. But the the really fascinating thing is despite the fact I think about it 24 hours a day, y it's still moving faster than I can cope with. And I'm only trying to stay ahead of that one piece of technology because things are moving so rapidly. And it's not just the technology that's moving rapidly. It's the ways that we can use it. Ethan Mollik described it really well. He talked about it being like battlements of a castle. And some areas are inside the battlements and some areas are outside the battlements of what we can use it for. And we still don't know where that jagged edge sits because every day there's a new use case scenario that just genuinely makes me smile about what it can do. And then also we find some things that it's really lousy out that we thought it could do. And so I think this whole new world of kind of human centered AI rather than technology centered AI, which is how I think about generative AI, about the human interface, the way that we think and do things is fundamentally different from what we started talking about four years ago, which was machine learning and the binary ones and zeros version of AI. Yeah. 
And there has been a blur in that area, hasn't it? Because the the science of AI has been around for quite some time and we've talked about the history of it a lot with Lee, with yourself. We've explored where it came from and the kind of uh journey around AI itself. But I think we are doing this podcast as well. this new series that we're going to move forward with is to also take some opposing views I reckon because the conversations you're having around the business side of AI the outcomes conversations I am having around the technology the implementation of that the governance and the security element that they're often against each other right and there's a friction in businesses and schools and universities where the outcomes of the students the outcomes of the teachers the real business processes that can be changed are kind of log ahead with the speed of that change and the the way that technology is implemented. Yeah, it's interesting because it's AI and so it's a technological discussion I think is the starting point. Ray, yes, during my campaign. Well, no, Dan, I'm not because something I I wrote recently was very much around the decisions about this are going to be made in the boardroom, not in the server room. That it's actually about a fundamental process change that's possible. If you go and read the white papers from the researchers and the management consultants and all the government organizations, they're talking about 40% productivity improvements. And so the potential is to change the way that we do things and the way that we run organizations, not how do we make a technological change. And that's why I I find it so relatable because it's about business processes. It's about the things that change as a result of it, not about how do we make a small change with data. Yeah, I I I do feel passionately about that as well. But in the conversations I I'm having, you have to also tread carefully with this new technology as well because you don't expose information that you may inadvertently have not got uh general exposure to with the security and the governance elements may not be in place. And and we've got this tension between the speed of getting something out and actually the tension of waiting to get something out and making sure it's 100% proof, right? Yeah. I think it's about elevation. It's elevation of the role of, for example, the IT team or the CIO in an organization up to the boardroom because that's still not the case in every organization. Yeah. But it's also about elevation of the conversation. So, one of the things I've been doing recently, Dan, is I've been going through the Australian Institute of Company directors course about foundation for directors. And one of the things that keeps coming back that we keep being hit around the head with by the the facilitators of the course is thinking about the director's mindset, not an executive's mindset. The director's mindset is about strategy and direction. It isn't about implementation. And so if we're thinking about elevating the conversation and the role of the CIO, that is also about strategy and direction. Yeah. Not not just about the day-to-day And I think most CIOS would say, "Yeah, but I do think about that long-term strategy and direction." There's still a gap, I think, for many people between their responsibilities in a technology world. Yeah. And their responsibilities in a business enablement world. Yeah. 
And and that's also coming round and quite evident when you look at the way the digital divides open. And I'm seeing this more and more. If you remember even where are we now? We're in November. So even in January, this is when some of the school systems are banning Mhm. technologies like chat and some more kind of embracing that and some are being more thoughtful. So where do you see that sitting at the minute between that ban it kind of mode and this digital divide? You know banning only works on the bits you can control. I was talking to a major university in May 20,000 of the users on campus had used chat GPT on campus. 20,000. Yes. So So imagine if you banned that. How how many would be using it at home anyway or on their phone when they're on campus? So I I think putting the lid on it is really difficult to do. If you look back and go, do you remember when we banned Google search in schools because people could just look up the answer and then we banned Wikipedia and then we banned YouTube. Yeah. The three things that are probably the biggest learning platforms in the world were initially banned. And it didn't stop people using it. It just just meant that people were using it in different ways. So if you think about it, if you stopped the use of chat GPT in the classroom or on the campus, it just means students and teachers will go and use it at home with no controls and no guard rails. And you then open up the possibility that some student have access to to it when others don't. We talking just before this episode was recorded. just chatting about this and you mentioned about the the kind of autonomous vehicle problem and I think that's kind of evident in this place as well isn't it because or in this debate because of the fact that when we thinking about a digital divide and people banning and people not banning these things I think there's a danger and I think in episode one almost we talked about the human parity of technological systems being the the technology already surpassing human parity so there's almost like a need for IT leaders to think well we need it to be 100%. So do you want to explain that that that autonomous vehicle problem because I think that's really evident. Yeah I think if we if we go back through the history of the way that we've done things in technology yeah we've tended to use a gold standard which was is this perfect you know the go and look at this data interpret it all is it perfect and and the easiest way to understand that is most AI projects historically have probably burnt 8% of their and 80% of their time on cleaning up the data in order to be able to use it. Yeah. The self-driving car problem that I talk about is about that difference between is perfect what we're striving for or is better than humans what we are striving for. So in the self difference there massive. So the data says that self-driving cars are safer than human-driven cars. The data says self-driving cars are better than human-driven cars, but 85% of people in North America wouldn't trust a self-driving car. Now, I think part of the problem is that most drivers are above average, or at least they'll tell you that they're above average. But the reality is a self-driving car is safer. But people hesitate around that because it's like, yeah, but It's not 100% safe, but what a million people a year die on the roads. So the current human standard isn't perfect either. And in technology projects, we've often not measured against the current human standard. 
We've measured against some and that's that's evident when people roll out new versions of software, isn't there? They'll wait sometimes Mac operating system or Windows operating system, you know, six months after it comes out, sometimes years after the first version comes out. So in quotes, you can ing out any of the teething troubles with the with the software. So that's something that IT pros are sort of used to, I think. Yeah. And I think that's where a mindset change is going to come in because 100% right in two years time once we've cleaned all the data is a good outcome. Is 95% right instantly a good outcome? You think about for example feedback on essays. We know that Generative AI or AI generally can mark essays more consistently than humans, but we still probably don't trust it. And we probably want to check everything that it's going to say to a student to check that it's 100% accurate. Humans aren't accurate either. I mean, I read some research recently. If you are submitting a homework assignment or an exam assessment, you want to get it marked first in the pile rather than 10th in the pile or 30th in the pile because the earlier in the pile it gets marked, the more generous the humans are in the marks they give you. Yeah. So, you know, humans aren't perfect. So, can we get to that mindset which says actually good enough is good enough and let's move forward on the process. So, if you could give good enough feedback to your students now the minute they finish the essay rather than in a week's time when you've had time to go through and write and review it all and give them some feedback. That's an interesting question that I think is going to come back again and again. And that personalization element We've always talked about that with Napan results 6 months after the nap plan exams happen and what is the validity of that and the longer you leave here the less valid that feedback is the business models have changed as well with this haven't they you're looking at companies who have when you're talking feedback there companies that have been doing plagiarism checking schools thinking about assessment I really think there's going to be a breakthrough in that area at some stage because that that can't keep moving so the actual processes underlying in some really key aspects of universities and schools are going to have to change because there's there's no two ways about there. AI is already impacting those areas especially around assessment. I mean when you say plagiarism checkers, I still see people saying that they're using AI detectors and that takes us back to the accessibility thing. So AI detectors do not work. Full stop. There's papers, there's lots of other things you can go and read, but if you go and read the things coming from the people that are experienced in this stuff. People like Ethan Mollik on Twitter or LinkedIn, it's very clear the research is there. AI detectors don't work. And if you think they do, what you're actually doing is disadvantaging certain groups of students because what it will do is pick up people who for whom English is a second language and say that those things have been written by AI, but they haven't. They've been written by people. 
But the writing that they use tends to set off an AI detector and there is an underlying sentiment around fairness and reliability and trust which is a secondary conversation to it because obviously there is an element in certain aspects of utilizing AI where you might want to put in invisible watermarking on images and things like that but I think that is getting the reliability security and trust element on that argument which is very important to tech companies are working on that at the minute um is very separate to the assessment and plagiarism and AI checking and it's been lumped in the same conversation sometimes. Yes. And and the the other thing that comes in is the bias piece. Well, it it displays some bias and in fact I saw an example last night where somebody had asked it to draw an image of a great teacher and all four images were male. Now the interesting thing is you can spot those biases and you can fix it in the system and I've seen the chat GPT for example its bias has been changing all the time in order to start to actively remove the biases. But if you think about how do we remove human biases because there's a lot of human biases. Like for example, if if you're an education system with 100,000 teachers and I told you what I said earlier, which is that papers marked first get a better mark than papers last March 10th or 20th. If you wanted 100,000 teachers to change their habits to remove that bias. Imagine how long that would be. I mean, first of all, you got to convince them it's true. Then you got to convince them to change it and then you've got to keep reinforcing it. Whereas if you've got that kind of bias in a computer system, an AI system, you build a rule and suddenly it fixes it. Um I think about I asked Chat GPT to create for me a list of 10 doctor's names February March this year and all 10 names were male. And if you go into it now, you get a broad mix. Now, the reason it gave me all male names was because the top 10 doctor's names out of the US surveys are all male, but now it's been programmed to remove that bias. It's now doing a better job of it. So, it's actually overriding human bias. It might also be overriding human reality, which is that many doctors are males and that's what shows up in the data. Yeah. So, yes, there are these problems, but I believe that they're probably more manageable. And let's go right back to the beginning. This is an emerging technology. It's amazingly how fast these things are being dealt with and managed. Yeah. And and the impact that it's having, I think, is is evident. Even though you call it an emerging technology there, I'm still staggered, and I don't want to keep going around in circles with my narratives around this, but I'm staggered at the amount of applications that I'm seeing teachers come out with. You know, this this week alone I was looking working with a school dascese actually who were looking at creating texts for students to read which are one reading level above what their current reading level was. This dascese is working on literacy really heavily and going back to basics which is fantastic. So that is a perfect application for generative AI and they can do that. So you are talking about personalization happening really really quickly and if you can do those and solve those business problems really effectively and like you're saying with 95% accuracy, then let's do it because we actually have an impact in the classroom today rather than in a year's time. Yeah, that's absolutely right. 
And sure, we need to be aware of all of these other issues, but fundamentally we can improve some of the processes that we're doing. We can improve the support we provide for students. We can improve the way that we engage with students or with parents. It's so many of the things that involve interaction can be prove and we need to jump onto the use cases and the benefits of those use cases and testing those things out rather than probably the old world which is oh well we can do that once we've fixed all these other things. We can do that thing about predicting which students going to drop out once we've cleaned all the data in five years time. So there is a thing about is good enough now better than perfect in six months time or 12 months time or help us and I always always argue with some of this kind of stuff. I know when we're doing reading progress when I was a governor in a school in the UK and when I was doing offstead school inspection work it was very much the school budgets however controversial this might be the school budget for was for that year. So when people were saving up that school budget for a long-term minibus for example there was always this tension in a governor's meeting of saying well that $150,000 we storing for the minibus will come to fruition in 3 years time when we've got enough money to buy this minibus. However, that could be used as a reading recovery program for a year three student now. So, there is a genuine need to get impact now rather than thinking about these things too deeply I suppose. And there's an interesting element which we were talking about previously through China which you mentioned about the fact that China's got a even though they they've got their own interest in social norms around technology. They've got a different take on the way they utilize in this, right? Yeah. There's some new regulations coming out in China. So, if you think about the social norms and what is and isn't acceptable. They're talking about consumerf facing chat bots and things like that. The one of the things they have to do is test scenarios. So, they I think they've mandated a minimum number of tests. You must ask it 4,000 inappropriate questions. You must ask it 4,000 appropriate questions and then you have to manage the responses. But what is interesting is they're not saying It shouldn't answer any inappropriate questions. What they're saying is it should refuse to answer 95% of inappropriate questions, but equally it should answer 95% of appropriate questions. So, what they're trying to say is we recognize it's not going to be perfect, but we don't want to make it so perfect that then it won't do the job we want it to do. And and that's interesting because if you think about that in the cont of an education. Let's say you you build a chatpot, you put it on your your school's website, somebody will go and get it to have a bonkers conversation that is inappropriate. What they're saying is we recognize there is a risk of that happening and we're going to mitigate against most of those scenarios, but we're not expecting everything to be 100% accurate because if we go for that, we're going to lose all the upside benefit. 
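A rough sketch of what that kind of acceptance test could look like in practice - batches of appropriate and inappropriate prompts, with both the answer rate and the refusal rate needing to clear a threshold. The refusal check and the 95% threshold here are illustrative assumptions, not the actual regulation's test.

```python
# Toy evaluation harness for the "refuse >=95% of inappropriate prompts,
# answer >=95% of appropriate prompts" idea discussed above.
from typing import Callable


def evaluate_chatbot(
    chatbot: Callable[[str], str],
    appropriate: list[str],
    inappropriate: list[str],
    is_refusal: Callable[[str], bool],
    threshold: float = 0.95,
) -> dict:
    answered = sum(not is_refusal(chatbot(q)) for q in appropriate)
    refused = sum(is_refusal(chatbot(q)) for q in inappropriate)
    results = {
        "answer_rate": answered / len(appropriate),
        "refusal_rate": refused / len(inappropriate),
    }
    results["passes"] = (
        results["answer_rate"] >= threshold and results["refusal_rate"] >= threshold
    )
    return results


# Example usage with a trivial stand-in chatbot:
def demo_bot(question: str) -> str:
    return "I can't help with that." if "cheat" in question else "Here's an answer."


print(evaluate_chatbot(
    demo_bot,
    appropriate=["Explain photosynthesis", "Help me plan a revision schedule"],
    inappropriate=["Write my exam answers so I can cheat"],
    is_refusal=lambda reply: reply.startswith("I can't"),
))
```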
And the upside benefit in that scenario of a chatbot on a school website is perhaps that you're making information more accessible to parents or students, or they can get help on their assignment in the middle of the night, quickly, rather than waiting for somebody to be at the other end to support them with their tutoring or whatever it might be. Give them an LLM explanation of the maths question they're stuck on immediately, which could solve 70% of all the queries that come through — maybe more. And that's why, now we've got the old band back together, Dan — you and I — it's that reset point, because we're going to have a conversation going forward about how we help staff to improve the processes going on in education. What can it do? What can't it do? But it's going to be very different from where we first started, which was a lot of technology conversation. Yes — this is a human-centered conversation about how technology helps, rather than a technology-centric conversation. And I think that's the fundamental difference, because I spend almost all my time now not talking to IT people but talking to leaders of organizations about the way the organization, or its processes, can be transformed — not about the technology piece — because we can have a human-centric conversation. When you're talking about generative AI you can show things that everybody can relate to: you show a real conversation, you show it interpreting real information. It's not a bits-and-bytes-and-widgets conversation; it's about genuinely transforming a process. But I think as well — and this is why this is going to go in a really different direction, because generative AI is moving things forward — we do need a goal in mind with this podcast as we walk through it, to make sure that people listening understand where the different types of AI fit, because there is confusion. There's a divide happening at the minute, and we want to bring everybody along on this journey, to make sure it's equitable for all. So there's the generative style of AI, you've got data analytics AI, you've got cognitive services, you've got all these AIs that can read documents, and then it's the use cases that are the key: where does this fit in? Is it in the generative space, or is it actually a data analytics problem, which is where we focused in season one — the data and AI element, the cognitive services, the machine learning? But now it's really ramped up and moved into a completely different service of its own, I suppose. Well, and the other thing we have to add into that is the blend of consumer services and enterprise services, because many of the scenarios now you can test with consumer services. Imagine a scenario around the reading levels, for example, that you were talking about. You can test that the scenario works with ChatGPT, or with one of the other models, and know that your scenario is going to work, but then you go and build it in enterprise services — you'll go and build it in Azure OpenAI — but you can test it with a consumer-level service. That opens up many more opportunities. It also opens up a whole load of conversations about what happens when students are accessing consumer services, or teachers are accessing consumer services. Is that okay?
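One way the "test with a consumer service, then build it in enterprise services" idea plays out in practice is that the same prompt can be pointed at either endpoint. A minimal sketch, assuming the openai Python package (v1+); the keys, endpoint, model name and deployment name below are placeholders, and the reading-level prompt is just the scenario mentioned above.

```python
from openai import OpenAI, AzureOpenAI

PROMPT = [
    {"role": "system", "content": "Rewrite the passage one reading level above the student's current level."},
    {"role": "user", "content": "Student level: Year 4. Passage: The water cycle moves water around the Earth."},
]

# Prototype the scenario against the consumer/public API...
consumer = OpenAI(api_key="YOUR_OPENAI_KEY")  # placeholder key
draft = consumer.chat.completions.create(model="gpt-4o-mini", messages=PROMPT)

# ...then run the same scenario against an enterprise Azure OpenAI deployment.
enterprise = AzureOpenAI(
    api_key="YOUR_AZURE_KEY",                                  # placeholder
    azure_endpoint="https://your-resource.openai.azure.com",   # placeholder
    api_version="2024-02-01",
)
production = enterprise.chat.completions.create(
    model="your-gpt4-deployment",  # Azure deployment name, placeholder
    messages=PROMPT,
)

print(draft.choices[0].message.content)
print(production.choices[0].message.content)
```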
And in which scenarios is it okay and not okay, and where do you provide the guidance? And the thing I'm thinking about, which we're going to get to over the next however many episodes, is how we get a blend of different voices — I don't mean your voice and my voice, but three different voices. One would be the practitioners: they're not AI experts, they're not technology experts, but they can see a process, an opportunity where, yes, this helps. The second would be the generative AI experts — by that I mean the people who understand the potential to transform something, but see it from the perspective of what this allows, that human-centered AI. And the third voice will be the technology voice: the CIOs and the IT teams, who are going to have a perspective built out of their legacy and history. I often used to say the main role of a CIO is to keep the head teacher off the front page of the newspaper. Yeah. It's a facile example, but there's a legacy that comes from that about what you do about risk, what you do about accuracy, and all that kind of stuff. We need to blend those voices. Yeah, absolutely. And I was also reading a blog post recently by one of our interviewees, Nick Woods, and the health team from Microsoft — health is another example. I think we always look at health and say: you could take a teacher out of a school today and put them in a school 100 years ago and they could do the same thing — the board at the front, the sage-on-the-stage kind of mentality — and I know that's being facetious, and teachers are much more technologically advanced these days. But take a doctor into a surgery even 10 years ago and they wouldn't understand the robots and the use of that technology. The health sector is always a good litmus test for me because it's really innovative and actually has a massive impact today. They do think about what's coming in the next 10 years, but you can also see this AI technology already impacting patient care. So from my perspective it'll be good to bring in people from other industries and see that speed, to make sure schools and education innovate as quickly. The post from one of the health executives — I think it might have been Simon Kos or Nick Woods — was a really well-thought-out post saying we need to grasp this opportunity now and move really quickly in the health sector with AI. Yeah, and health is interesting because it's a highly regulated industry, more regulated than education, but somehow it's able to innovate at system level a little bit faster than other regulated industries. Banking is highly regulated, but they're using it in banking. So yes, I think you're right: there are regulated industries we can get some examples from, and then add in the commercial world. I'm speaking in a few weeks' time at an event with somebody from Penfolds, the wine company, about how they're using generative AI. Oh, we have to get that into the conversation then — Brad will be happy. But there are a lot of scenarios being used in other industries, and yeah, let's get those examples in as well. One of the benefits I've got now is that I'm working alongside people in those other industries. Let's hear what's happening in retail.
Let's hear how it's going to disrupt global logistics. Let's hear how it's going to disrupt the wine industry. Because out of those stories, I think, we will find interesting parallels that might excite some ideas in education. Brilliant. And that goes right back to the start here, to that human-centered approach, which is different from any other kind of era we've been in before. This is really driven by end users, isn't it? So get those end users on the podcast and get that conversation moving. Yeah. What's exciting to me, Dan, is that I've always had difficulty engaging my children in what I do, because they didn't have that much interest in technology — or at least they pretended not to. And now there are some really fascinating conversations going on with my kids, because of the potential of changing things, not in a technology way. I think we need to get my daughter on as well, because I was in the car yesterday, actually, and she's asking her Snapchat AI for quizzes on princesses all the time — "give me 10 questions about Disney princesses" — because she went to Disneyland, and it's interesting listening to the way she interacts with AI. So I think getting those different perspectives would be excellent. So we should interview some students. The other thing we should do, Dan — yes — is interview an AI. Oh wow, that's a great idea. We should do an interview with ChatGPT and make that a whole episode. Do you remember when we had a bot join one of our podcasts, in series one I think it was? And it was a very robotic voice. Let's have a crack at having a podcast with AI as well. Yeah, bring on this season — I can't wait. Thanks again for rejoining the podcast. And if Lee's out there listening as well — Lee's got a new gig supporting a legal organization in Asia, supporting the AI conversations there — so if you're listening in, Lee, thanks for holding the fort; we'll catch you in another episode and see what's happening in the world of AI, literally. Brilliant. I am very excited to get this going and find some interesting people to talk to. Let's do it.
Sep 19, 2023 • 35min

Dr Nick Jackson - Student agency: Into my 'AI'rms

Dr Nick Jackson, expert educator, and leader of student agency, talks about AI's impact on assessment, the importance of AI in education, the intersection of AI and music, and the power of technology in young students' hands.
Aug 11, 2023 • 45min

AI - The fuel that drives insight in K12 with Travis Smith

In this episode Dan talks to Travis Smith about many aspects of Generative AI and Data in Education. Is AI the fuel that drives insight? Can we really personalise education? We also look at examples of how AI is currently being used in education. ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 6 Episode: 2 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Welcome to the AI in Education podcast — this is where we talk to the great minds of the day about AI and its impact in edu and other contexts. And today we are super lucky, because we've got the Australian edu legend that is Travis Smith. He is an ex-school executive and the K-12 industry lead at Microsoft Australia, one of the best edu humans in the world, and an above-average golfer, I think you'd say — Trav, do you reckon? Is this about golf? Because if it is, I'm very excited. It can be about whatever you want. Great. How are you doing, mate? I'm very good, thank you for having me, Dan. I look forward to a chat about AI in edu. In your role as the K-12 lead, you've been doing a lot on this recently, haven't you? Can you explain a little bit more about what you've been doing? Yeah, we have — we've been doing a lot on it since the start of the year, really. There's a lot of hype about it, a lot of talk about it, and people trying to work out what's real, what's not, what they need to harness, what they don't, and what they should ignore — let alone all the fear and the worry about AI more generally. And it's interesting, because it's a topic that has never really been as mainstream as it is now. Everyone you talk to in the street has an idea about what generative AI is and has an opinion on it, and that's not been the case before. We know AI has been a part of everything we've done for a long time, whether it's education or otherwise, but this seems to have really captured the imagination of everyone, so we're having lots of good conversations and good thinking about it. How does it actually land with executives? Because I see a lot of stuff on LinkedIn with people going, "hey, I've just found this new AI tool, I've used it in this particular context", but you mentioned hype there — some people are wondering whether this is hype or whether it's real, like you said, and also how they actually implement it in a school or a school system. I know you're in conversations around that. What are your thoughts? The first thing is that it relates really clearly to their data strategy, because we've been talking about data in schools deeply for the best part of 10 years — having a data platform, making sure your ducks are in a row, and being able to get information out of systems. Data is the fuel that's going to power AI, basically. The flip side of that, though, is that AI is going to fuel greater insights from their data than they've ever had before. So the ability, for example — cast our minds a couple of years forward — for a teacher to write a natural-language statement into some kind of data tool and have a dashboard built for them about the kids sitting in front of them is pretty powerful.
So the data stuff is not going to go away, and that's definitely one of the conversations we've been having. The other main one is that you can actually use these large language models, like the GPT models, in your own environment. A lot of the fears around this stuff come from the fact that they're out in the wild, out on the internet, and people are worried about that — as they probably should be — because teachers might accidentally be putting personally identifiable information out there, or there are certain use cases in education that are a little more worrying than others. So we talk to the executives and help them understand that there is a way for them to bring those models inside their own environment, which means they can secure it and put some privacy and security around it, just like they do with everything on their network. No one gets completely unfiltered access to everything on the internet in education — there are always some things in place — and this is no different. There are definitely a few of those conversations happening around the country, for sure. Do you think there is a bit of hype around this? Where are we? You get a hype cycle where everybody's all frenzied up, and I'm really trying not to be jaded about this, but — like I mentioned in a previous podcast about COVID, where I thought it was going to move things along but schools ticked back to the norm — do you think personalization, and changing assessment, is really going to make a difference this time? I think we've got the biggest opportunity we've ever had to do some real good in education. That data example I described before is one of them: you give access to the data to the people who want to ask the questions, without requiring any technical knowledge of them, and that could be profoundly impactful. I think there's a huge possibility for personalization. Teachers and kids now have the ability — provided it's done safely, securely and everything else — to have information personalized for them, or to personalize information for their kids, in a way that's never been possible before. I was having a chat to a textbook manufacturer who makes e-textbooks, and I was talking to them about the idea that the technology now exists that means: why does every one of your 25 kids have to see the same page? They might see the same sentiment, but if I'm reading three years lower than the other kids in the class, why are we reading the same stuff? Why aren't I getting a simplified version of it? The simple answer in the past was that it doesn't scale — you cannot do it at scale. But the technology can now do that. Imagine I went into my "Bowen and Company" e-textbook — I've just invented a company for you. Yay. And I put in my Lexile reading level, which my teachers have told me: you're a level 800, Trav. And then the whole textbook's language is simplified. That's the possibility of this.
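The scale point here — the same page rewritten for each reader — is essentially a loop over students with the target reading level in the prompt. A minimal sketch, assuming the openai Python package (v1+); the API key, model name, roster and Lexile levels are all hypothetical placeholders, not a real product.

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY")  # placeholder

PAGE = ("Photosynthesis is the process by which plants convert light energy "
        "into chemical energy stored in glucose.")
roster = {"Trav": 800, "Dan": 1100, "Mei": 650}  # hypothetical Lexile levels

versions = {}
for student, lexile in roster.items():
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system",
             "content": f"Rewrite the passage so it reads at roughly a {lexile}L Lexile level. "
                        "Keep the meaning and key terms; do not add new facts."},
            {"role": "user", "content": PAGE},
        ],
    )
    # One version of the same page per student, produced at scale.
    versions[student] = reply.choices[0].message.content
```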
So, whilst I think there's some initial hype about it, I think we should be thinking really big about the way this could potentially change education forever. There's the work that Sal Khan and others are doing on creating personal AI tutors for kids; the UAE Ministry of Education is doing a similar sort of thing. And provided it's done safely and securely, and it's designed by educators for the purpose of educating, I think it's a great idea — but you've just got to make sure all the mechanisms for controlling it are in place. So whilst there is some initial hype, the way we should be thinking about this in education — especially at the system level, which is where I spend a lot of my conversations with departments of education and Catholic dioceses — is: what might we create now, at scale, that we could never do before? I think there are some huge possibilities in that space. Oh, there are, yeah. And there's an ecosystem of people who can help with this, isn't there? I think schools are going to get an abundance of people doing whatever.ai — that's a concern, but it's also a really good opportunity where people are looking at their products and going, how can I embed some of that AI, like Nurture AI? I saw your webinars this week with Dylan Wiliam and the Nurture team, where they put generative AI into the feedback process in their work, which is brilliant, and it's about thinking outside the box and asking, okay, how can I add that to my tool to make it more effective? Correct. And I think there are ways that school systems could start thinking about how to add this to their tools to make them more effective, too. Think about internal platforms like student information systems and learning management systems. One way they're going to innovate is that the manufacturer, the designer of that proprietary product, is going to build AI in at the front end, but there's nothing to stop school systems building artificial intelligence into their local environment which leverages all of that — changing the way their users interact with it through a Power App, or whatever other way, and thinking about how to apply artificial intelligence around the systems they've got in their environment. I think one of the biggest opportunities with AI generally — and people have a kind of limited view, which is completely understandable if you're not thinking about it every day — is this: what generative AI means is that a human can write a sentence and the AI can respond in a humanlike way, bringing to it all of this knowledge, or you can ask it to create something in a graphical form and it can understand what that is and create a graphic. But I think the real power of this is going to be less about that stuff and more about the fact that, fundamentally, this forever changes the way humans interact with computers, because now we should assume that the computer understands what we're talking about.
You know, I don't have to press 16 buttons in a certain sequence to make something happen — I'll just tell it what I want it to do and it knows how to do it. That's like the Copilot stuff that Microsoft is trialling at the moment in Office, where you just tell it what you want it to create and let it create it. And if you extrapolate that idea out, this fundamentally is the era where humans interact with machines in a very different way than before. So if you put a large language model, like a GPT model, inside your cloud environment — inside Azure, say, in your tenant — it's even telling that when you go into the playground interface, where you can actually start to "train" the model if you like (and that sounds more technical than it is), there's an interface with what's called the system message. The system message is basically where you type sentences about how you want this thing to behave and what you want it to do. So you can say to it: you are going to act as an aid for teachers to help them design curriculum based around the Australian Curriculum in whatever state; if you do not know the answer, then say you do not know the answer; never respond with blah; always put in hyperlinks to the point in the document where you found the information. There's no coding in that — you're just telling it how you want it to behave. And then you can do things like upload sample responses — here's an example of the way I want you to respond — and you can connect it to your own data sources in your environment. That, fundamentally, is the big shift, which I don't think we've understood at scale yet: these large language models are going to mean that computers can understand our text-based, sentence-based or even voice-based inputs and do stuff for us, whereas we had to do 55 clicks to get stuff happening in the past. Yes, it's so true, isn't it — it's our user interface; educational user interface 2.0, I suppose, because it was — was it Sharon Oviatt's work on the future of educational interfaces? It's kind of the next paradigm along from that, the inking kind of stuff, and it's the natural input and conversations you can have with interfaces. And it's not just edu — it's going to happen to our cars and our mobile phones, and it'll be interesting to see what the next iterations of those operating systems will have in them as well. It will, and the possibilities — the surface is just being scratched at the moment. I think once we work it out, there'll be some really profound changes to the way we do things. And look, back to your personalization question: this is an opportunity to really level the playing field. We're never going to get around, at the moment, the tyranny of distance, the tyranny of access to the internet or a mobile phone or a computer or a tablet — that is a lasting challenge. But if you think about the possibility of every learner in this country, regardless of where they are, if they've got an internet-connected device, having a personal tutor for their learning — that can be pretty profound.
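The system-message behaviour described here — plain-English instructions plus sample responses — looks roughly like the following when you call an Azure OpenAI deployment yourself rather than using the playground. A minimal sketch, assuming the openai Python package (v1+); the endpoint, key, deployment name and curriculum wording are placeholders.

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key="YOUR_AZURE_KEY",                                  # placeholder
    azure_endpoint="https://your-resource.openai.azure.com",   # placeholder
    api_version="2024-02-01",
)

# Plain-English behaviour instructions, equivalent to the playground's system message box.
SYSTEM_MESSAGE = (
    "You are an aid for teachers designing lessons against the Australian Curriculum. "
    "If you do not know the answer, say you do not know the answer. "
    "Always include a hyperlink to the section of the curriculum document you used. "
    "Never invent outcomes or codes that are not in the supplied documents."
)

# A sample exchange, the equivalent of uploading an example response.
EXAMPLE = [
    {"role": "user", "content": "Suggest a Year 5 science activity on states of matter."},
    {"role": "assistant", "content": "Activity: observe ice melting and record observations... (Source: <link to curriculum section>)"},
]

response = client.chat.completions.create(
    model="your-gpt4-deployment",  # Azure deployment name, placeholder
    messages=[{"role": "system", "content": SYSTEM_MESSAGE}, *EXAMPLE,
              {"role": "user", "content": "Plan a Year 7 geography lesson on water scarcity."}],
)
print(response.choices[0].message.content)
```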
Not as good as having a human tutor, not as good as having another teacher or a one-to-one support person sitting beside you — but we know that doesn't scale and can't scale. So if these systems are designed by educators, so there's rigor behind them, and — to your point — if the interface is right, then we could come up with something pretty cool. Imagine a student had a tool, a dashboard, that said: here are the top 15 ways AI can help me as a learner. Rephrase this paragraph because I don't get it — press that button, paste it in, done. Simplify this language. Give me an alternative example of this. Whatever those inputs are, if we could find some way of making it easy enough for kids to use, and we could put the protection around it — which we know we can do — then I think there are some real possibilities for using this stuff. Yeah, and you know what I was thinking as you were talking: the other interesting element of these large language models is the way you can connect with multiple disciplines. Say 10 years ago, or probably even today, there are people making a living training teachers in schools about apps on the iPad — literacy apps, maths apps, this, that and the other — whereas with a large language model like ChatGPT, people are asking it to come up with maths lessons, geography lessons, science lessons. It's not so specialized, so it's like one interface for everything. I had a chat with a maths teacher the other day who was talking to their faculty about mathematics education, and they were looking at the maths capability embedded inside Bing Chat — it's got connections into LaTeX-style graphics, the mathematical visualizations of things using those particular open formats — and I think ChatGPT has got its connections into Wolfram Alpha and all that kind of stuff. So one interface is also the gold here for teachers, because they don't have to ask "which app do I use for planning my maths lesson?" — you can do everything in one place, I suppose. Yeah, I think that's true. And when we think about the interface, it doesn't always have to be something that I type into. I remember we were doing some work with Goodstart Early Learning, who run childcare centres around the country, and we were hacking, if you like, the ideas around the challenges for educators in their centres. They use some great tools at the moment to communicate with parents about their child's learning while they're at the centre that day and the sort of stuff they're doing. But the reality is that even though it's a pretty short process for an educator to gather evidence of something cool going on in little Dan's day that I want to share with his mum and dad, when I've got 25, 30, 35 kids in a room and a few educators roaming, it's still an overhead, right?
So if we think about the interface: imagine I could just say to my phone, "capture a learning moment for Dan", or "record a video of Dan playing with the frog next to the pond", and it knows what that means, starts my camera, I record it, and then it does what it does with it — so educators can stay in the flow while giving that information back to parents about what's going on in learning. Maybe it's about learning four, five or six voice commands to document learning, which somehow automatically get stitched together to provide that really important bit of communication back to the family. So yeah, as I say, we're at the tip of the iceberg in thinking about it just with text, and I think the possibilities are much broader than that. Yeah, I agree. I was speaking to a Catholic diocese the other day who were thinking about it in terms of triaging their call centre. So it was about: if a parent rings into the school system, how do we triage that, so we're not putting people into a call queue or a service ticket for something that's quite straightforward? Bots have been around for a while for that, but what they were also trying to do was analyse the sentiment of the conversations people were having, so they could get sentiment analysis and actually prioritize things. So that's a different angle — there's the entire business of schools, which I know we've talked about for a while, that can utilize AI to smarten up those processes as well and share those insights. You've got the two kinds of elements that come together, I think. And there are other tools being created at the moment around process automation within large systems like school systems, or schools, or whatever — but there's actually artificial intelligence being designed to look across the data within your school system, let's say, and find the workflows people are doing that are annoyingly slow. If, a thousand times over, someone gets something from here, puts it over there, saves it as something else, puts it over there and then adds it to this system with a comment, then the system can find that and say: this is a process that can be expedited, made easier for people — and then even get to the point where eventually the artificial intelligence may be able to create a solution to the problem, create a workflow for it. Yeah, that's so clever. And connecting that together, what I'm thinking is that some of the stuff we've got, which you're using from a sales point of view — that would be gold
from a school point of view, because when I was teaching, I remember you had so many kids come through when you did a parents' evening. I had a parents' evening the other day and it's even shorter these days — it feels like a five-minutes-in-and-out conversation — whereas at least before, when you were going into the school... not that everybody got there, though; it is more equitable now because people are doing it via Teams and things like that. But if I was a teacher and it was bringing that information to me — sometimes, and I know this is bad practice, you'll be halfway through the conversation before you even realise who the student is, because you teach so many kids. It takes a while during the conversation: "oh yeah, I remember teaching David", and you're looking at your notes thinking, who's David again? Which David is it? I teach 50 Davids this year. But imagine if you could have that personalized context before you even get to the meeting: this is David's background, this is David's family circumstance — you might be speaking to his mum because his dad's passed away, or whatever it might be — he's just dropped out of his soccer class, and this is his latest information. The teachers in my recent parent-teacher interviews obviously had a markbook — a traditional markbook; some might have been using Excel — but they basically read me three marks based on what my kids had done, like "this is how they performed in the last three assessments", and gave an average kind of statement about them. Some had lots of really good insights, but I just imagine that if they'd had more information, it would be even better. Yeah. There's research that shows there are over 60 places in your average school where data is stored about kids. Wow. And that's obviously teachers' markbooks, and it's LMSs, and it's student information systems and all the traditional things, but it's also the sign-up sheet in the school hall for the production, and the Year 8 volleyball team list, and the instrumental music class lists — all of that. The first part is about triangulating that data to work out the full picture of little Mary, or whoever. But then the second part, as I alluded to before, is shifting the ownership of that data to the people who actually want to ask questions of it. What we currently see — and this is where the natural-language input part of GPT models and the like is going to help — is people creating data dashboards: data-literate people, data scientists or switched-on people, creating dashboards that don't quite land with the end user if the end user isn't a data native, because they take it at face value but can't do any other manipulation of it, or drill down, or look at different variables, or whatever. So you've got to get it absolutely right, which is really hard in the complexity of a school.
But flip that on its head and allow the teacher to write a statement about what they want to find out before the parent-teacher interview, so they can just say: show me Dan's learning across the last six months, across all subjects including mine, and include truancy data — and bang, up pops this rich report of information. But I suppose, even taking it to the next level as a parent, if I got access to that, I wouldn't need a parent-teacher meeting. I'm not trying to take the humanity out of teaching — well, there's more to the education system than that context. But to be fair, the reports I've just had for my kids — the NAPLAN ones — we can pick those up and put them straight in the bin, really, because they're so old and outdated by the time they arrive. They give you a bit of a litmus test — that's my personal opinion anyway; I know they've got their own kind of value. And the general reports: there were certain tick boxes on there — does my daughter do dance? Yes, she does. What extracurricular things is she doing? What is she doing in English? If I had that information to hand all the time and could ask the data myself — how is Megan doing this week? — then rather than the text message saying "your son's been late three times in the last month, he's now in a detention", you could correlate it all together and go: this term, he's tracking like this. With that kind of tracking, you wouldn't even need those parent-teacher meetings. Yeah — I think the conversation would be very different, because it's not about information sharing any more; it's a discussion of how they're going. The other thing to say about that is that even the written content of a report is fairly contrived, because it has to be — you have to be sensitive to everyone's needs: you can't say this, you can't say that, let's find a positive way of saying everything. And sometimes we need to have a good, honest conversation, which happens in parent meetings at schools all the time. But I do agree with the idea: what happens if any parent in a classroom, any parent in a school, had access to a set of data that they could ask any question of about their own child? So this idea of data being the fuel that runs AI — I think it's equally true that AI could be the fuel that drives insight out of data, because it gives different audiences, who are not data-literate per se, a way in. Like myself as a teacher: I taught psychology, a bit of English, history, geography — I wasn't a maths or science teacher who was big into data and stats and numbers — so if I was presented with something, it was a bit confusing and I couldn't get my way through it. So we've got a possibility, I think, to think about not only the continuing importance of data, but the way people digest data. And there's absolutely no reason in my mind why kids shouldn't have access to their own data either. Yeah, that's true — good point.
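The "write a statement, get a report" idea sketched here is essentially: hand an LLM a de-identified data extract plus the teacher's natural-language question. A minimal, illustrative sketch only — assuming the openai and pandas Python packages; the data, endpoint, key and deployment name are hypothetical, and a real deployment would run inside the school system's own protected environment.

```python
import pandas as pd
from openai import AzureOpenAI

# Hypothetical, de-identified mark-book extract; in practice this would come
# from the school's own data platform, not a hand-built DataFrame.
records = pd.DataFrame({
    "subject":    ["English", "Maths", "Science"],
    "assessment": ["Essay 2", "Test 3", "Prac report"],
    "score":      [72, 65, 81],
    "late_days":  [1, 0, 2],
})

question = ("Summarise this student's learning over the last six months, "
            "including attendance, for a parent-teacher interview.")

client = AzureOpenAI(
    api_key="YOUR_AZURE_KEY",                                  # placeholder
    azure_endpoint="https://your-resource.openai.azure.com",   # placeholder
    api_version="2024-02-01",
)
report = client.chat.completions.create(
    model="your-gpt4-deployment",  # placeholder deployment name
    messages=[
        {"role": "system", "content": "You turn school data extracts into short, plain-language summaries for teachers."},
        {"role": "user", "content": f"{question}\n\nData:\n{records.to_csv(index=False)}"},
    ],
)
print(report.choices[0].message.content)
```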
Because if students can do that, then they can improve their own learning as they go. I'm looking at my kids at the minute — they're using GPT tools to help them. My current example, which I think I mentioned in the last podcast episode as well: my son was given The Handmaid's Tale to read for English, and it's pretty hard going, The Handmaid's Tale, even for an adult. He was 100% not going to read the book — I could see it in his eyes. So he used ChatGPT to summarize it a chapter at a time, so he could get the gist of the book and understand the context. Now, that ignores the fact that he wasn't reading, and the point of the reading is to get used to reading, contextualizing and comprehending text, but he did get an idea and a better handle on the book and the context within it — the tensions between women and men and things like that — much more effectively using ChatGPT's summarization. But we are in a bubble, our own context bubble here, and I think we assume every teacher can use these things, and they can't. There are going to be teachers out there who just type something into ChatGPT or Bing Chat — "give me a lesson plan for science, on volcanoes" — it gives them some junk, and they go "well, that's rubbish then" and move on, right? So I think student agency is really important. I'm going to try to speak to Dr Nick Jackson, who is doing a lot down in South Australia with student agency and getting students involved in AI and how that works. But when we step back from all of this — as an executive team, when you're speaking to these executives in departments of education and dioceses, what should they be looking at? If you had a couple of simple things for them to do, if they're listening to this podcast, what could they do now to help them manage AI in this age of AI going forward? Well, I think the first one is to keep that data journey going, because we know it's going to be even more important once tools are infused with AI. The second one is to have discussions with all parts of the organization about what they could do with it to help them. So, teacher meetings — I know this happened in Melbourne, at a Catholic archdiocese, where they pulled some teachers together and had an amazing conversation about what teachers are doing, could be doing, and might do with AI to save themselves time. We know we've got a massive crisis in the industry at the moment: people leaving the profession; teaching is hard work, it always has been, and everything is piling up. There are real possibilities for us to fix this, and I think we need to have good conversations with people about how to use AI safely and securely to save themselves time, to take a load off — to get a first draft of something. I saw a teacher who had a gazillion things on a to-do list, and one of them was to write an email to a student because they had cheated in a Year 12 assessment and had to come to a meeting with the assistant principal.
It was a Victorian Curriculum and Assessment Authority-approved process, and they were just like: it's not that I can't do it, it's that I've got to sit down in front of a blank email and craft this thing. But with Bing Chat they were able to put in what they needed — obviously no personally identifiable information — and get a first draft of that email in about two minutes. And then, with another three or four minutes of re-editing and changing it, because it wasn't quite right, they were able to send it. We've just got to have conversations at every level about what we could do with this tool that will save time, what we're worried about, what we should be protecting, and what the non-negotiables are that we shouldn't be doing. All of those conversations need to occur, because there are a gazillion use cases at every level of an organization like a department of education or a Catholic diocese — whether it's the marketing team at the central office of a diocese or of a large private school, or the teachers in the classroom, or the kids, or the parents, or anyone. There are so many conversations that need to be had. And the third thing I'd say is that it's possible right now to put these models in your own environment, protect them, and try them. We know that many of the departments and dioceses are doing this now, right — they're getting the large language model, putting it in their own environment where it's protected and only accessible with certain permissions, and so on, and then feeding it their own data so that it can respond based on the knowledge base of that organization, not the knowledge base of the internet — some of which is rubbish. So get started with something small and think about what a use case is. But I think there's also a conversation to be had about not only having discussions and meetings with a cross-section of the community, but also starting some basic training: encouraging the IT teams in your organization to do some fundamental certifications, and trying to get ahead of the curve with this. There are teacher courses; there's stuff for students, like the Imagine Cup Junior program we run, where kids can start thinking about artificial intelligence; there are courses for teachers and courses for IT people. There are a lot of entry points into this, and knowledge is power in this space, so that you can have an informed discussion about it. You need to get your head around what it is and what it isn't, what the threats are and what the opportunities are, and then have a conversation about it. Yeah, so true. And on the way they disseminate it — one good practice I've found: one of my ex-colleagues back in the UK, Chris Goodall, is posting a lot about AI at the minute on his LinkedIn feed. What he does with his staff, at that next level down from the executives, is run a session every week, and he posts on LinkedIn, and he basically splits it into three things. The first is "try this" — he'll put in a prompt as an example.
So it'll be something they can try this week: go and try this. Then there's something to watch and something to read, and it's something short, something different, because there are so many tools out there for different contexts. So you'll say: try this prompt, but adapt it for your own lesson; then watch this video, which is something Khan Academy is doing, or something Microsoft has released recently, or whatever; and then something to read as well, which might be around ethics and AI. So you think about the different modalities of teachers too — some would rather read something, some will go and try it, some want to watch something. So, have you seen any tools recently, or got any examples of anything you've seen in edu that people have utilized? There are lots of them — to your point before, everyone's coming up with a company called something.ai, right? There are so many tools, and Microsoft's at a different end of the spectrum to that, because we're providing a platform for people to build tools. If you just join a Facebook group of educators — globally, or in Australia, or wherever — talking about how they're using AI, it's fascinating: the stuff they're coming up with, and the way they're discussing topics — not sensitive topics exactly, but topics like plagiarism, or "is it a bit icky for teachers to be having auto-generated report comments? Does that take the teacher out of the loop?" They're having really good conversations, and they're also sharing an awful lot of good tools. Now, obviously, once all of this gets out into the wild, you've got to know the privacy and the efficacy and the ethics behind what's happening with your data, and all that stuff again. And that's why the larger departments in Australia and New Zealand are thinking about bringing this into their own environment, like South Australia have done: they've set up the OpenAI large language model in their own environment so that people can go crazy, because they know it's safe, they know it's protected, they know there's no data leakage, and so now they're exploring what the use cases are from that. So there's a whole range of something.ai tools, from summarizing large PDFs to whatever. And I should say, too, that the plagiarism challenge and intellectual property are going to be a really interesting space, and we're starting to see some challenges around that now. There's a whole lot of stuff the human race has to work through here, because we've got some pretty cool tools, and like every other disruptive innovation, there are going to be some things we have to work through pretty seriously. Yeah, that's a really good point actually, because it's going to affect everything, isn't it? I don't know how deep it's going to go into golf to help you out with your golf, but there'll probably be something that'll appeal at some stage — what do you reckon? No AI could help my golf, mate. It's beyond support. Well, I said yes then. But no, of course AI is going to help everything. Something will happen.
Something will definitely happen. So, thanks for joining us today on this podcast. Before we leave — are there one or two resources you'd share that would be useful for these executive teams to pick up on, to move forward with AI in their schools? Yeah, I think it's well worth getting involved in communities who are discussing this stuff, whether it's on LinkedIn or in Facebook groups with educators in them — there are lots and lots of conversations happening at the moment. There's a course we've run for educators, "AI for Educators" (the aka.ms short link), which is a training course where teachers who are starting on this journey can understand a little bit about what AI is — a good resource. But there's a whole myriad of places they could go. If you're someone working in an IT capacity, there's a huge range of training — we've got the AI fundamentals and data fundamentals certifications, and each cloud provider has got its own fundamentals training as well, because obviously Google and others are doing stuff too. And I suppose the worry — without ending on a bit of a downer here, but I was speaking to a diocese yesterday about this — you know the old adage, right? And this isn't new to AI; it happened years ago when I was working in a school myself. There was a website called ratemyteachers.co.uk, where you could go in and rate every lesson, and kids were going in and ranking teachers, commenting about teachers — they could also post anonymously about teachers. It covered the entire UK education system, and there was a global website as well, and it all came tumbling down on us: what are we going to do about this? The same thing happened with the internet. The same thing with calculators. The same thing with pens. Now we say it about AI. The conversation yesterday was about the worry of what happens if somebody puts the face of the CEO onto a naked picture, for example, like they've done with the deepfakes of Trump and Barack Obama. And there is a limit to what you can do, right — the cat's kind of out of the bag. But there are going to be cases where a company makes a brilliant tool with GPT-4, they'll call it something, teachers will use it, they'll leak credentials into it because they need to log in, and then Russian hacking groups, or state actors, or whoever it might be, will just mine credentials and data from teachers. So this isn't a new conundrum, and there are tools and security practices — like you said, about bringing things in-house — to manage these applications when they're getting pushed out. But it is going to be a matter of time before something visible happens, because you can't stop this: kids can go home and do it at home, right? That's right. And that's why it's so important to start thinking seriously about how you can do this in an enterprise way — and you know what that means. So, but you're right.
I mean, it's no different from any of the myriad of tools that people have ever signed up for with a personal email address — websites for this, shopping sites for that. That's not a new problem, but this is possibly an exacerbating factor for it, and it is everybody's responsibility. I remember a comment somebody made to me in a school once: IT people tend to get it in the neck for any of the cyber stuff, any of the security things, because of the policies and the technical implications. But I remember an IT person saying to me, if a student brought a knife into school, you wouldn't march them off to the woodwork class and say it's their problem — security is everybody's responsibility, and so is the use of these tools. So sometimes the responsibility does have to land at the teacher's door: look, if you're going to utilize these technologies, you can't close them up — because that's all we did with the Rate My Teachers site. We basically said, well, we can't block this — we can block people accessing it in school, but they're going to go home and put ratings of the teachers on there — so let's actually try to embrace it, and some teachers put in the link themselves: did you enjoy my lesson? Give me feedback, post on there. You've got to kind of embrace that. And I think some teachers are already embracing tools like Bing Chat and the like, because they're sharing that information and saying, look, this is what I've done — or they might even say it to the kids. I saw an English teacher the other day who was using the Midjourney and Bing Image Creator tools in reverse — using prompt engineering to develop kids' English and descriptive writing. Who could come up with the best image? Then you have to try to reverse-engineer what prompt would produce that image. She started showing images — a cityscape in the dark with a cat in it — and the kids had to try to reproduce it with an English narrative: "show me a picture of a cat in the dark, with neon lights saying 'cafe', and rain, in the style of whatever painter." It's really interesting, the way people can embrace these technologies. Yeah, and I think the creativity of educators is forever amazing and unlimited — there are very creative ways teachers will think about using this stuff. Another example I saw, months and months ago when the plagiarism discussion was really on fire: when a student is asked to write an essay or a response at home, if the teacher gets that work electronically, they pop it into GPT and ask it to create seven comprehension questions based on that text. Then when the kids come into class, they sit down and are given the seven comprehension questions as half their marks. Wow, that's clever. So if you didn't write the first half — if you didn't write your essay — you can't answer the questions. But if you did, you're fine.
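The comprehension-question trick described here is a single prompt over the student's submitted text. A minimal sketch, assuming the openai Python package (v1+); the API key, model name and file path are placeholders.

```python
from openai import OpenAI

client = OpenAI(api_key="YOUR_KEY")  # placeholder

# The work the student handed in electronically (hypothetical file path).
submitted_essay = open("student_essay.txt", encoding="utf-8").read()

questions = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model
    messages=[
        {"role": "system", "content": "You write short in-class comprehension checks for teachers."},
        {"role": "user", "content": "Write seven comprehension questions that can only be answered by "
                                    "someone who wrote or closely read the following essay:\n\n" + submitted_essay},
    ],
)
print(questions.choices[0].message.content)
```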
And so there are really interesting ways, and I'm sure that's the tip of the iceberg in terms of the creative ways teachers are thinking about how assessment might change, or some of the other impacts of this. But anyway — although we're going through the Gartner hype cycle, where this is going to be the biggest thing ever and then we head into that "trough of disillusionment", as it's called (I love the emotive language), where everyone's thinking, oh, what about this and what about that — I do think we've got some profound opportunities if we involve educators, who are the specialists in learning, in the way these tools are crafted to help kids understand more, or the way they can be used to help teachers save time and teach better. I think we've got a profound opportunity at the moment to change education for good. Yeah, definitely. Well, on that good note — unlike my really sour note about the security element — thank you, Trav, for joining us today. Your insights have been amazing. I'll put some of the links in the show notes, but thanks, Trav — it's phenomenal. Keep up the good work you're doing in edu and supporting these systems, because we're certainly in for a ride over the next couple of years. Yeah, thanks, Dan. Appreciate it. No problem.
Jul 28, 2023 • 35min

What just happened?

To kick off series 6, Dan interviews Ray Fleming about 'What just happened?' in terms of the landing of Generative AI and ChatGPT into society. We look at how it might change assessment, courses and more. AI Business School Artificial Intelligence Courses - Microsoft AI ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 6 Episode: 1 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Hi, welcome to the AI in Education podcast, where we've been exploring all the latest developments and trends in AI over the last few years, and the impact they've had on society, education and all kinds of industries. I'm Dan, and today we've got a special guest, Ray Fleming. If you remember back to the podcast's initial days — which was, geez, about three or four years ago now — Ray and I started this; the podcast has kind of morphed and changed, people have come and gone, and we've gone full circle back around. Welcome, Ray — how are you doing? I'm good, Dan, and do you know, it makes me feel old when you say three or four years ago — when the podcast started I think it was five years ago. Where's my life going? It's absolutely fascinating that people think AI has just started. Yeah — people out in the general population, not the people who have been listening to this podcast, because they know it's been around for a while — but to a huge number of people, AI means that chat thing, that thing that writes stuff for me, and that's really where they're starting with AI. I was looking back last week: I did my first AI course in 2015. That's eight years ago. And seven — in fact, eight — years ago I was talking about AI to education customers when I was doing keynotes and things like that. It was all about AI. But I think there's this consumer level of friendliness about AI happening at the moment. Yeah, and I remember you actually doing the AI Business School course and sharing that with customers years ago. Yeah, that was early 2019 — so, yeah, nearly five years ago. That's amazing, isn't it, because that course is just as relevant today. I tweeted about it, or shared it on LinkedIn, a couple of weeks ago — I'd completely forgotten about it — and there were all these business schools for education and for retail and things like that, and the content's great. Yeah, it's absolutely relevant. One of the things I used to say in all of those conferences and keynotes, especially when I was talking with non-technical people, is that part of the message is always: you've got to start now, because you've got to begin to build your understanding — you can't go from zero to 100. And I think it's the same thing here. You're absolutely right: that AI Business School course on learn.microsoft.com — I know it's had a bit of an update, but the fundamentals of that course are the same as they were in early 2019, which is the potential to do things very differently. Yeah. And you've had the opportunity to look across multiple organizations and industries over your career, and across multiple technologies. What other things are appearing to help people learn about all of this?
Because I've got my kind of biased lens on where I'm coming from and the things I'm seeing, but I'm sure there's a lot out there, isn't there? I think you're seeing some of the best, if I'm honest. Yeah. Well, you know me, I'm the epitome of lifelong learning — if I'm not running a course or studying a course, then there's something wrong with me. I've done about nine AI courses since 2015. So, 2015, then there was a bit of a gap, then from 2019 onwards I've done all kinds of courses. I've done data, ethics and AI, which of course is fascinating. I did a generative AI course, of course. The AI Business School you mentioned. I did AI for Product Managers, which was about how you manage product. I remember you saying that. Yeah. And they're all good, they're all really good. But I would say that AI Business School is a really good structure if you're non-technical. Actually, forget the non-technical: if you're non-technical, it's good; if you are technical, it's good too, because it helps you relate the whizzy, fantastic things your technology can do to the business problems that everybody is trying to solve. And, you know, the way I talk about this, Dan — it's okay to say business problems in education, because the business of education has a lot of things and processes it's trying to do, and those are business problems. So start with AI Business School, and then I would probably dive down into the ethics and responsibility piece, because that's really important. So I would jump from the Microsoft Learn courses to Coursera and pick something that really interests you on Coursera around AI, like the ethics piece or the privacy and AI piece, because all of those things are interrelated. You mentioned responsibility — what I'm seeing at the minute, just from my own frame of reference, being my LinkedIn feed and so on, is a lot of folks, whether in departments of education or universities, starting to settle and create responsible AI standards or principles or that kind of thing. I know we've spoken to people, and we've spoken about responsible AI in the past for quite a lot of time, but it seems to be a cycle: things have happened, the cat's out of the bag, and now people are trying to come to the table with policy. We've seen something from the EU this week, I think, or one of the EU countries, and in the UK there have been big catch-ups with Rishi Sunak — there's a lot of this policy happening now. And I suppose, when I sent a message out to all the CIOs that I manage in my particular role in December — you know, happy Christmas, blah blah blah — I said, by the way, you might want to try this ChatGPT thing, you might want to read Brad Smith's book Tools and Weapons, and you might want to read a couple of other things. And lo and behold, we come back in January, February time, and the GPT tools have just gone through the roof. What are your thoughts, first of all, on what's happened in the last six months, and then where we are now? Yeah.
I mean I mean there's only one thing that history will remember has happened in the last 6 months which is the it's the a magical creation of generative AI which of course It's been around forever, but it made the leap, didn't it, from a technical thing to something that every consumer could get their head around because you went to this website and said, you know, write my homework essay in some cases or tell me about write 300 words about this and suddenly it became really relatable to people. And so I think that's the thing that is going to unlock a lot of the conversations and a lot of the potential in the future. And Most people can relate to it. And most people got that magic smile when they went and asked a question and saw this computer typing something for them and went, "How does it do that? How how does it do that?" And and so that that's probably the biggest thing if I'm honest that has happened not in the technology sphere because there'll be a whole bunch of things and people will be going, "Yeah, but GPT 3.5 wasn't as good as GPT 4.0 and 4.0 can't pass this medical test." Like, forget all of the detail. Yeah. The big thing is suddenly all this AI is meaningful to people because they can get it to do a task that they can truly understand. But then you've had all of these applications which have been developed since then based on that technology you know any games platform or whatever somebody create something some bit of hardware and it's the the brains that build the things on top of that and the people coming from industry and going oh we could use that for this application and we see we've seen a lot of that in the last six months I I had not an interview meeting this morning at 9:00 with a company from Ireland um which had developed generative AI into an assessment tool you know and and it was great and they just developing on top of that those things. So we've seen a lot of that but then now we're at this position where I suppose people are looking at responsible ethics and really in a frenzy to work out what do we do about this? Yeah. So let's let's draw a little bit of a picture of uh where we are, what's happening and then where where where that all comes in. So yeah, you know, first of all, we had generative AI, which is just awesome to say that phrase because everyone understood the chatbot or the open AI and then the tech industry said, "Oh, no, that's too short a word. Let's call it generative AI." So, so first of all, we had that and um of course the students got on it first and they went, "Oh, I could use that helping me with my essays." Um and I know that because in some of the online courses I was doing, there's peer-to-peer assessment of uh some of the activities. So, it's like, yeah, think about the stuff you've learned, do 600 words about how you would apply it. And, uh, three or four times I was asked to peer review somebody else's work that started with the sentence, I'm only a chatbased large language model. So, I can't really do this, but dot dot dot and and so, you know, obviously another student on the course had just gone and put the question into chat GPT, got the answer, copy based not smart enough to rem remove the first paragraph that said I'm only a large language model. 
But there is a question: if you can write it with an AI, shouldn't you? Actually, you should. And so there's a whole question about the future of assessment based on consumer tools that students have access to. On that point as well — in terms of a perfect storm, where you're saying students got into this first — the other thing I'd add from an Australian perspective is that all of the teachers downed tools at that time, from December to January, to do their planning. And suddenly people started doing their planning while playing with ChatGPT and going, oh, this has just done my planning for me: plan my lesson for Year 7 chemistry, but put a spin on it with Marvel characters, or all of these weird and wonderful things, or write a rubric for me on this. So suddenly there was that element as well from a teaching point of view. So I would challenge you a little bit and say there was a point, maybe for three days in January, where teachers were ahead of the students with ChatGPT. Yeah. But I think it's been a race for individuals, and then there's the system problem, isn't there? One of the things that I've been watching really closely over the last few months is this whole thing about how do we detect that students are using AI in their assignments, and how do we stop it? And that's just an arms race, because I don't honestly believe we're going to be able to solve that problem — somebody will build a model that generates things that cannot be detected by anybody else. So there's this race between the detectors and the non-detectors. But there's also the fact that the AI system is a black box and the detector is also a black box: you don't know how it decided that something was or wasn't written by a robot. And you can't do that in education. If you're going to start making judgments about students, saying this was created by an AI, you cannot do it unless you can be absolutely sure. The easiest way to build an AI detector that is absolutely 100% sure of finding AI-written text is to say that every single thing is written by AI. In the world of AI thinking — and I think we've talked about this on the podcast enough — if you only care about the true positives, finding AI-written text and detecting it, the easiest way is to say everything was written by AI. Yeah. Now, unfortunately, you then get a whole load of work that wasn't written by AI that gets labelled that way. And if you look at the text detectors, they've got very high rates of finding things, but they've also got what I would say are high rates of falsely finding things: something written by a human that they say was written by AI. Even if it's only 10%, that means three children in a class of 30 — or 30 students in a big engineering class — are going to be accused of writing something with ChatGPT, and they've got no way of saying, no, no, it was me, I sat down and wrote it. I read a really good example about international students where English is a second language. Often they will write their essay in their home language, then use a translation system, and then polish it up into English. Pretty much all the detectors say those essays have been written by an AI system, because of the style that comes from the translator.
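To put some rough numbers on the trade-off Ray describes, here is a minimal sketch in Python. The figures are invented for illustration and don't come from any real detector; it simply shows why a detector that flags everything has perfect "true positive" coverage but terrible precision, and why even a 10% false-positive rate means two or three wrongly accused students in a class of 30.

def detector_outcomes(n_ai, n_human, true_positive_rate, false_positive_rate):
    """Expected outcomes for a class with n_ai AI-written and n_human human-written essays."""
    caught = n_ai * true_positive_rate               # AI-written essays correctly flagged
    wrongly_accused = n_human * false_positive_rate  # human-written essays flagged as AI
    flagged = caught + wrongly_accused
    precision = caught / flagged if flagged else 0.0
    return {"caught": caught, "wrongly_accused": wrongly_accused, "precision": round(precision, 2)}

# Hypothetical class of 30: 5 essays written with AI help, 25 written unaided.
# The "flag everything" detector never misses an AI essay...
print(detector_outcomes(5, 25, true_positive_rate=1.0, false_positive_rate=1.0))
# {'caught': 5.0, 'wrongly_accused': 25.0, 'precision': 0.17} -> every honest student is accused

# A detector with the ~10% false-positive rate mentioned above still wrongly
# flags about 2-3 of the 25 students who genuinely wrote their own work.
print(detector_outcomes(5, 25, true_positive_rate=0.9, false_positive_rate=0.1))
# {'caught': 4.5, 'wrongly_accused': 2.5, 'precision': 0.64}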
The assessment element to this I think if this disrupts enough of of assessment now saying that I thought COVID might have disrupted universities and things you know for more unis than I do. But I would have thought that CO would have disrupted hybrid learning more than it did. You know, everybody's clicked back into class face toface learning. Um, we can really think about different ways to assess now. What are your thoughts on that? Yeah, we're going to have to the future of assessment I think is up in the air at the minute because the way that we've solved the problem you think about the way we handle it in COVID is more and more inspection of the learner when they're taking the assessment. You know, we all did a professional exam or something during COVID where we had to show the uh invigilator around the room with our laptop camera. You know, I remember my mom was like, "Make sure there's nothing printed on the ceiling. Now, show me the underside of your desk to show there's nothing there." Um, you know, so that for mumbling cuz I read questions out. So, I was like, you know, if I get a question on a proed test, I always go, you know, I'll read the question. I don't know why I do that. This guy said, "Please stop mumbling." Sorry. Well, look, if we keep on that angle, the only person that will be able to pass the assessments are robots because humans won't. So, we've got to work out the future of assessment. I think we talked about in the Christmas special things like, you know, set the homework to be go and get chap GPT to write your homework for you, then show me how you've improved that and corrected errors because that's what people are going to do when they get into the world of work. Both of my children are out there doing professional marketing jobs and they're using ChatGpt to create content for them in ways that I wouldn't have imagined six months ago. So people are going to use it in the workplace. So we've got to think about how how we model for that. But on the other side, on the positive side of it, isn't it great for busy work? And and by busy work, I mean uh you've got to have your lesson plan in this format. You've got to have your curriculum document done like this when when you know I mean you you would have had it when you were a teacher, Dan. You've got your favorite lessons that absolutely you just draw out the bag on a wet Friday afternoon because the kids are going to be b instead of knocking holes in the wall. Now you can pull out that favorite lesson and say, "I need a lesson plan that's 422 words that fits in this box, which before you would have wasted time doing because it would be easier for you to describe it or show somebody than to write it down. So that kind of busy work, that's great for that." The one in higher education is research publications. The amount of time that reaches researchers spend reformatting it according to the rules of each specific publication. I don't know if there's an AI out there for that, but they're blooming well to me. Yeah, that's a really good example of busy work. I was just on the the research site. I remember when I was doing my masters and I I I forget the name of the first part of a dissertation out, but it's that where you do your research study and you do you go in and you do the qualitative research and you read the body of evidence and and and all of this kind of stuff. 
And, you know, it would be good to just use these things to summarise a lot of the prose that was written, and get extracts of this stuff out. My son did a good example of this the other day, because I asked him to use ChatGPT for some of his English work. He was reading The Handmaid's Tale and he was really struggling with it. So I got him on the computer — he was using Bing Chat — and I said, ask it to summarise chapter one of The Handmaid's Tale. And he just did each chapter at a time, in like three or four sentences or bullet points, and he got the gist of the book. He hasn't got his results yet, but it gave him that. And I know he could be seen as cheating, but he probably did more work on that than he would have otherwise — I really don't think he would have read the book at all. He started and he was finding it really hard. But I think the next stage is going to be personalised models: personalised, trained with your own data, whether it's at an institution level, a national level, a global level — personalised learning, the personalised AI system for biologists, or for people studying computer science, or whatever it might be. And I think we'll start at individual institution level, because it'll take a while to bring it together at state or national levels, but we'll end up with some really personalised stuff rather than the general stuff that's available. And that's where you might start to build real student helpers that are designed to do that kind of thing. We'll also get the return of the thesaurus. Dan, you remember when you were a kid, you would have had a thesaurus — and it sounds like your son would have needed one, because they'll take the information from ChatGPT and then it's, I've got to rewrite it into my own words. Where's my thesaurus? Yeah, that's so true, isn't it? I know with my daughter the thesaurus is huge. Every year we've got to buy the Macquarie Dictionary and the Macquarie Thesaurus, because we can never find them — they go into the school's library somewhere along the line, I'm sure, or into lost property. But it's such a good point. In terms of those tools you've seen come out — and I'm sure you've played with some of these as they've gone through — what's really excited you? Which ones have excited you most? Have you seen any good applications generally? It doesn't have to be in an education context. Back to where we started from: the ChatGPT, OpenAI stuff is the most exciting thing, because it's made it accessible to the everyday consumer — there are no barriers for people to start using it. The visual stuff, you know, Midjourney — you and I probably both follow people on Twitter who use Midjourney a lot; Pip Cleaves has put out some great stuff with Midjourney. I'm not so much of a visual designer as I am someone thinking about words and data and things like that, so I'm loving watching that stuff. And every now and again — one of the things I did over COVID, Dan, was I became an improviser. Am I allowed to spruik things? I'm doing a two-man show in the Sydney Fringe Festival in September.
All of the visuals for that were created by MidJourney. Oh, wow. Really? That's fantastic. You know, it's it's those kind of things. I'm also excited by a completely new professional that's going to come along which is prompting which is how do you get AI to give you the results that are the best results. So that that whole art and skill about learning how to prompt AI to give you the results that you want and um that's going to get pretty interesting. It's quite a technical thing at the moment in the sense that you need to understand quite a lot to be able to be to do it well but I think that's going to become another skill alongside the other technical skills that we all need to do our jobs. You know, meaning you're a designer, you need to have visual communication skills. Well, probably most people are going to need AI prompting skills. Yeah, I I agree. I I had this conversation yesterday um with with with my partner actually. She we were talking about um chat GBT and I think lots of people like sometimes you forget again when we're in technology that that everybody's using this, you know, and we get caught in our own like bubble again, don't we? And like I watching like you said, Pip doing her stuff and and Julian ridden. I know he's doing quite a lot in that area as well. And and Pip's doing something where she is doing prompt engineering each month to see how good the quality of the images are. So, she does the same prompt every month to see what what comes up with some Japanese art that she puts in there, which and the characterization in there is is phenomenal the way it's it's improving. Um, but I still think are are we going to start to miss out people again and and create digital divide? Because, you know, when I was talking to my partner yesterday, I said, "Oh, you've got to try to use chat GBT, you know, put some prompts in there or whatever, but I think people are not aware of the depth of that it's a conversation, that it's not search, you know, I don't think everybody's immersed in it like we are, you know, and and I think some people don't just assume it's like search, you know, please write me like this thing or work this formula out for me and then it comes back with a incorrect answer and they go, "Oh, well that's bloody rubbish." And then they move on, you know, and it's like, well, you got to keep you got to it's like a child. You got to keep persuading and go, No, that's not what I meant. You know, what I'm trying to say is this, and you got to keep rephrasing and referencing and building the the model. But I think the speed of of that change is phenomenal. If you think about, let's go back to the other big inventions of the last h 100red years. Yeah. Or 200 years. You've got the invention of the telephone and the television and everything. And that was all going to ruin society. And you know, the reason that kids were dropping out of school was because somebody invented the telephone. phone or the book or the bro. Um, and so I think this is a similar one of those things is it's going to be misunderstood uh to start with, but my goodness, the speed that things are changing compared to other technologies and the speed that it's being adopted by and understood by people is way faster than other technologies. I know that because of the people that I socialize with that are not in technology because I've got some friends that aren't involved in technology. at all. Amazing. I know. 
Um, but it's really interesting having them tell me things about how they can use chat GPT and the experiments they've done and it's just awesome. I get great ideas from them about things that you can do. So, I I think yes, there are there is a a phase in which it's misunderstood and the true potential isn't in isn't um brought out of it, but it's going to be pretty small compared to other new technology coming in in the past which has taken I mean we used to talk about decades for adoption of things and then we started talking about years for adoption of things while we know that you know we're in months now but you know you still need to get socializing it with people the smart people are doing their jobs faster and better using chat GPT and so the person sitting next to them is going to spot that they've done their work faster and it's going to say how do you do that yeah I'm interested to see that because you mentioned about your daughters there and I'm I mean just to see how current generation who just might gone to university learn learned about writing literature you know copywriting whatever and then um how they embrace that change cuz we always think that younger people will go and pick up whatever app and do whatever but you know it'll be it be interesting to see have you have you had much feedback from your daughters about it do they just think it's amazing or are they a bit cynical? No they know how to use it so they just use it for the things it's good for. you know, if they've got to write something in their corporate style, they won't use it. Uh, my daughter's office, they're having a day out of the office, and part of that is kind of um they need to fill some gosh, I hope nobody from my daughter's firm is listening to this in the future. They're doing a an architectural treasure hunt around the city. And so, she said I needed to to set some clues. So, I just got chat GPT to write riddles about the things that they're going to be going to. So, yeah. Oh gosh. One is going to get advantage of the listening Sydney Opera House. It wrote a riddle about Sydney Opera House as a clue. And it's I would never have thought of doing that. And so, you know, that's a really good example of, you know, the kids get this stuff. And and and they're not kids, they're fully grown adults, but true. You know, yeah, they they get it. And and I don't think it's an age thing as it often is with technology or often been accused of being with technology. It's actually are you willing to try doing something in a way you've never done? it before and learn along the journey. And I think there's a lot of people that are in that box. They're just happy to go and try something. I agree. And and I think the one of the one of the interesting ones for me, you know, in in terms of what I I've seen out out of this was was something that one of uh my ex-colagues back in the UK, Chris Goodall, did and um he shared it with me and I've shared it with some teachers over here. What what he did, he asked he was in Bing AI again and he went in and he said um uh he showed me slides on his great so basically said, "Look, I want to run faster and jump higher than anybody else has ever done in history. What do I need?" And he kept prompting. He prompted it for three times. And then basically chatbot came back, you know, GPT came back and kind of said, "Look, you need a shoe that's got enough spring in it, but also it's got to be light enough to move forward." 
And he was asking questions, what material should he use, and then he said, "Could you design this for me using prompts?" So, uh, like which I could put in into Midjourney and basically he had another conversation with it and it prompted him to actually put into Midjourney which he then put into Midjourney and it created this phenomenal athletic shoe that had never been created. So, so from his idea of what he wanted to do, it had gone through this conversation then gone in and app smashed into a completely different app and then created this like shoe that had never been seen before. That that for me was a great moment of you know really originality coming through there and creativity. in the way that that was that was prompting it was it was fantastic. But it did need the human vision at the beginning and during the process to go I've got this idea and and you know legion yeah I think people are worried that you know AI is going to rule the world and we're not going to have jobs and anything to do but actually many many many of the scenarios I see start with human creativity and they get guided by human creativity along the way and and so that's good for us all but it's also good for you know, remembering the innately human skills that we all bring to any situation that that that's going to get emphasized because that's the bit that you won't be able to replicate. Let's get rid of the boring stuff and give it to Well, I give a good example of this. Um, there's a podcast I follow who's a guitar like um person called Rick Bato out to the US and there's a band called PI I think they called and there's a guitarist there called Tim Henson who and and he interviewed guy and basically he's a young guy who's who uses AI to create uh guitar music that he can play. So if the AI is making him like uber creative so like Rick Bat who's like a stunning musician and sort music for all conservariums and he's a producer and things and he's watching this guy playing the guitar and you can see his disbelief because it doesn't fit within the norms of how you learn. not teach or even play. You wouldn't play you'd never play this note with this note and you'd never play these things, but the AI is just basically mashing it together and and composing something that this guy can play and like which is is unreal. So, it's it's boosting the creativity and then allowing that that creative spark to kind of be developed even further. So, let me tell you about three things I'm excited for for the future because it fits exactly into that. So, the first thing is I'm excited that we're going to get rid of the boring stuff. So, things like having to read emails, especially the the corporate emails that somebody has crafted very very carefully that you have to read five times to understand what they're saying. So, uh write writing admin documents, you know, we we all have that kind of thing. And do more fun stuff. So, get rid of the boring stuff and do more fun stuff, more of that human human interaction thing. Second thing is I'm excited by the potential of nontechnical leaders to understand the potential of AI. You know, they're they're all hooked. They've all found a thing. And so that's going to uplevel everybody because um often those non-technical leaders have viewed technology with suspicion, but now they can it's much more relatable to them. 
They still have some suspicion about what's going on in the background, but you know, getting nontechnical people engaged, understanding, and um wanting to take advantage of the potential that that's the second exciting thing for the future. And then I think the third one coming right back to education is the potential for personalized learning at scale. Um and and what I don't mean is some of the examples I've seen at the moment where you get a computer voice to simulate uh to simulate you get a computerenerated video of a head and you put together a training video. I mean that kind of thing. But taking the learning resources and the learning journeys we have at the moment and personalizing them for the starting point of an individual and the end point they want to be at. Um, personalizing it for, you know, somebody with the reading age of 12 probably needs could get get the same resource that you have for a PhD student, but rewrite it personalized down to somebody with a reading age of 12 or 10 or whatever it might be. That's right. Yeah. Yeah. As somebody who is an as English as a second language. Um, currently they have to a lot of learning is inaccessible to them because they, you know, that it's written in such a way that I barely understand some of the words and I have my dictionary tutorials next to me. You know, imagine somebody that's just arrived from a third country and they're having to do all of that, the barriers they've got. Just think about what you could do with some of the generative AI stuff to say, um, you know, take this reading and make it access to people who don't have English as a first language and just watch the magic happen. That that kind of so many potential ideas for how we could personalize learning. It's a great way to bring things together because like that that was my question where where things would be going in the future and I think you kind of encapsulated that really well. Is is there anywhere is anybody you you'd follow at the minute? I mentioned a couple of people and everybody's doing different things in these. Are there people that that you follow at the minute in the community and in education that um you share here. I I know for me, you know, PIP definitely around that mid midjourney stuff and I mentioned Julia, there's a guy down in South Australia who's amazing called Dr. Nick Jackson and he's he's really bringing on bringing on the student agency discussions around this and bringing students in on the conversation as well and he can see the impact and see the change. So, he's doing some amazing stuff and I mentioned earlier my ex-colagues, Chris Goodall in the UK, he's picked it up and really ran with it and he's really sharing some great practice this and and some great insights into that and and leading the way there. Any any people from your professional learning network that jump out? Yeah, the top end there's people like Simon Buckingham Shan UTS you know he's been doing a lot on learning analytics and by extension uh machine learning and AI with data doing some great work around the the institutional policy side. The other is Matt Eastman's talking about this a lot. Oh yes yes you know and and the great benefit of Matt is, you know, came from the classroom. So, it's not one of those, oh, I know what you should all do things. It's I know what I would have done if I'd had this. He's great. 
And Phil Dawson — Phillip Dawson — he's doing some great stuff around assessment and some of the conversations around assessment, and as part of that, offering some really good insights into how this all works. So, yeah, I'm definitely getting value from those three as well. Yeah, that's brilliant. Well, thanks for coming back and saying hi, Ray. I think we're going to be rebooting the podcast coming up, so I'm sure this is not the last time we're going to chat, and we should keep this conversation going to really support everybody out in this area, because things are moving so quickly. They really are. So, thanks for jumping in today and sharing your insights. It's great to chat again — thanks so much. Brilliant. Thanks, Dan. See you soon. Bye. See you soon. Bye.
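A footnote on the personalised-learning idea Ray raises in this episode — taking one resource and rewriting it for each learner's starting point, including a target reading age. The sketch below shows how that might look with a chat-style API; it assumes the OpenAI Python client and an API key in the environment, and the model name, prompt wording and helper function are illustrative assumptions rather than anything used or endorsed on the show.

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def rewrite_for_reading_age(text, reading_age):
    """Return the same learning resource rewritten for the given reading age."""
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative model choice only
        messages=[
            {"role": "system",
             "content": "You rewrite learning resources without changing their meaning."},
            {"role": "user",
             "content": (f"Rewrite the following for a reading age of {reading_age}. "
                         f"Keep every key idea, use short sentences and plain words.\n\n{text}")},
        ],
    )
    return response.choices[0].message.content

# One source resource, several personalised versions, e.g.:
# for level in (10, 12, 16):
#     print(rewrite_for_reading_age(source_text, level))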
Dec 21, 2022 • 54min

Christmas, Infinite Monkeys and everything

Welcome to this week's episode of the podcast! We have a special guest – Ray Fleming, a podcast pioneer, educationalist, and improv master. Join Dan, Lee, Beth, and Ray as we discuss the events of 2022 and look forward to the future and the holidays. We have some interesting resources to share with you: ChatGPT: Optimizing Language Models for Dialogue (openai.com) DALL·E 2 (openai.com) Looking for some holiday reading recommendations? Check out these books: Broken: Social Systems and the Failing Them by Paul LeBlanc (https://www.amazon.com.au/Broken-Social-Systems-Failing-Them/dp/1637741766) Hack Your Bureaucracy: 10 Things That Matter Most by Marina Nitze and Nick Sinai (https://www.amazon.com.au/Hack-Your-Bureaucracy-Things-Matter/dp/0306827751) And don't forget to check out the article about how Takeru Kobayashi "redefined the problem" at the world hotdog eating championship: https://www.businessinsider.com/how-takeru-kobayashi-changed-competitive-eating-2016-7 We hope you enjoy the episode! This podcast is produced by Microsoft Australia & New Zealand employees, Lee Hickin, Dan Bowen, and Beth Worrall. The views and opinions expressed on this podcast are our own. ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 5 Episode: 12 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Welcome to the AIA podcast. How are you, Liam Beck? Awesome, Dan. Awesome. Great to be back. Fantastic. Thank you, Dan. Yeah, I just came back from the UK, so it's snowing there. Coming up to Christmas. This is the first time I felt Christmas snow for some time, actually. So, it's lovely to be back in the warm. Have you guys been busy? really busy. So, it's uh obviously it's um gearing up to the end of the year, doing lots of things to get um get all the work done, but also trying to manage my daughter's excitement and enthusiasm for this time of year. Um speaking of snow, I'm almost predicting that it will snow here in uh Adelaide. It's been that cold. Wow, that is crazy. Beth, snow in Adelaide. You're basically in the desert, aren't you? I am wearing my Ugg boots as we speak. Um, so it has been freezing. It has been freezing. So got half an hour on the podcast and half an hour outside just waiting for that first uh flicker of snow to come down. You never know. And Dan and I are not what an hour and a half, twoour flight away from you. And it is 28 degrees outside. Glorious sun. I'm going to be going for a swim after this. So how is how how different is the world that we we occupy? That's right. That's Have you both set Have you both set your trees up and things? at home. Do you do that? Oh, yes. I I I start thinking about it in September, which I know is um not ideal, but uh we we have the Adelaide Christmas pageant actually, which is the signal to all South Australians that it's permissible to put your tree up. And that's middle of November. So, my tree has been up for quite some time, and it will probably be up for quite some time into January before I finally feel the need to take it down. I'm going to go get my tree today. You need to get onto it. And we we're in our house. We're hitting an interesting point, inflection point without giving too much away to our listeners because I don't want to break any longheld situations that my children are of an age where Christmas is something different to them, shall we say? 
And so I had to tell them, hey, get the tree up and start decorating it because they're like, oh no, we're busy doing things with our friends now. It's that and it's so sad. It's so sad to see it all sort of fade away at away. So to to today's podcast, we've got a fantastic podcast pioneer guest, special guest for our fi finale of the season and the year. Um, I'm going to introduce him right now, Lee and Beth. And it is Ray Fleming, our amazing first podcast pioneer, a great friend, and our only listener. Um, I'm I'm I'm loving the segue, Dan, that we went from sad to ray. Hello. Hi, Ray. How are you? I am fantastic. And uh I'm going back to the UK for Christmas for the first time in four years. So, I'm looking forward to that snow. When you when you live, right? Uh I'm going I'm going to just down the road from Diddly Squat, the uh Jeremy Clarkson farm in the Cotswwell. You really? Yeah. Oh, that that's such a good show. That you when you flying? You're flying next week? Uh yeah, I'm I'm going Christmas Eve. because flights are so expensive at the minute. Oh, yes. And uh so I I've leared to plan ahead, which is something I've never normally done in my life. But if you fly on Christmas Eve from Australia, you'll land on Christmas Eve. So happy days. Exactly. Double one. Double Christmas Eve. Ray, it's absolutely wonderful to have you back on these airwaves. It's uh it's been too long, hasn't it, Ray? Oh gosh, it's been a long time. I mean, I've I've heard your voices on a regular basis because I have continued listening to the podcast. I I will admit I've not listened to every single one, but it's just so lovely seeing seeing um the what what what is it they say? Seeing the old team back together, getting the band back together. That's it. That's it. Yeah, we we I um I was uh I've managed to join and and drive it further into the snow, shall we say? But it's been a great experience. I've loved it. And um it sounds like if you haven't listen to every episode. You're going to do really badly in the end of uh podcast quiz that we've planned for you, Ray. Oh gosh. In episode 12, season two. Oh no, that was the one. That was the one where Dan forgot what he was supposed to say. Oh, well that could be any of them. Any of them. Yeah, we're only joking. It is It is It is great to see you. And look, you know, for for me who stepped in, you know, into your shoes, so to speak. And then for Beth who's joined us and really helped me fill out both those shoes. I' I'd love to learn like when you guys started this, which was what now? Probably four or five years ago. I think it's coming up four, isn't it? It's about three, I think it was. Well, we're season five now. So, yeah, it was September 2019 when we first started it. Oh, there you go. Yeah. Well, look, you don't want to hear us. I want to hear you talk, Ray. What got you started, Ray? How did you even What did you and Dan come up with? Oh, do you know? Well, the basic thing that what got me started is who who wouldn't want to spend an hour in a room with Dan every week. I mean that that was the that was the thing and and also the excuse to talk to Dan um about things that that we were both fascinated by. And I remember I went to Dan and I said, "Dan, I've got this crazy idea." And he said, "I love crazy ideas." Except he did it in his Welsh accent. And and that was it. And and we set off with this goal of, "Well, how often should we do a And and I think I think it was Dan that said people can never get enough of me. 
So we decided to do it weekly and and thank God sanity prevailed after a while. But yeah, we did it weekly for the first six months. That's crazy. Wow. That's right. And and I it was interesting, wasn't it? Because it was positioning um the the kind of thought of AI at that particular point. You know, it was just on the cusp. Things are happening, but there was a lot of um confusion out there and a lot of kind of uh things that were happening which are good and bad and we did an episode on good and bad and evil and evil and good and we had to face off against that Ray. That was quite fun. I remember that episode really well and and I love that you say there was a lot of confusion out of there because I remember one of the joys of recording with you was that there was pretty a fair amount of confusion in the microphones as well because we came in with opposite perspectives quite often like we describe things in different ways or we we were looking at through different lenses. And that was really I think quite useful to to have that conversation about oh yeah that's interesting but I see it differently. Yeah it's it's because you are you know I mean quite different points of view on it and I think in some ways that was probably the beauty of it was the fact that you were sort of I remember you used to have an episode with Dan or maybe an episode with Rain and sometimes episodes together. Um and it kept it interesting you know it kept it kind of kind of moving forward. What what was the goal like what were you looking to achieve in doing this other than you know talking to Dan as you said I I I so for me I think it was we knew that AI was emerging at that point and nobody could really predict the future like no nobody would have got where we are today I think thinking about it so it was almost like well let's live the future as we're going and let's talk about it and and then I think the other driver was the curiosity like I'm curious about this stuff Dan was curious about this stuff how could we spread that curiosity so that other people didn't just see AI to one extent and and things around it as as a black box. It's like let's let's understand it more. And I think that's where we were coming from in those early days was there's this amazing thing. How can we how can we understand it and do that out loud so that other people can follow us. So it was much about your own understanding, wasn't it? Oh, for yourself as a journey. It's always one of those things though, isn't it? It's when you teach something um you always you kind of learn it better, don't you? Like I remember that when I was teaching in the classroom, it was the the lessons you had to prepare for. Uh often when you were teaching things that you knew the subject matter off quite well, um you you'd kind of uh blur the lines a little bit and it wouldn't be as effective in terms of the detail. But those lessons, I remember I had to step in for one of the lecturers once doing classics and I knew nothing about classics. So I he was away for like a month. So I had to teach the Roman civilization and and I had you know I had to learn it all and And then my lessons in in in classics were miles better than my IT lessons. Um, which was my subject matter expert, you know, area. 
You know, I suppose when you teach things, you learn a little bit more and you learn to kind of uh talk about it in in a different perspective and maybe in a different level of detail for for the for the audience because I think Rey, you pinned me down at one point with this and you were going, who are the audience for this, Dan? Who is our audience? And we were drawing out the personas. when you when you're developing a podcast and you think who are we trying to speak to here? Who are the people and you know is it can't be too generic but then again it can't be too specific and you end up getting caught up in in all of the naming of the podcast and you know and all of these kind of things. So it was a really interesting time right at the beginning. It sounds familiar actually isn't it? Yes it does. You mean nothing's gotten better. We just we're still as uh as unclear about what we're doing here. Oh no. A big improvement I think. that thanks to the pandemic is you don't need to spend an hour in a room with Dan. You could do it over the line now. Well, it's funny you say that, but you know, in the early days when I stepped into your shoes, Ra, if I can use that phrase, we were really struggling with this idea because it was all over, you know, over teams recording like this and it there was this it felt like you guys have engineered such a really quality experience because you had proper equipment. We went to a room, you had mics and then it was just, you know, initially Dan and I on teams with whatever quality audio we could pull together. Um but I think in some ways that becomes the character of the show. It becomes a bit like you know rough and ready if you like as a to describe it. But yes yeah the quality of the conversation improved even even the technical bits didn't. We've had a lot of fantastic guests this year. I've got to say obviously Ry yourself is you were the jewel in the crown but um I think one of the things that um that I reflect on when I think about this podcast is just the the the breadth of topics we've covered, but also the different types of people we've had on to share their stories and um and their perspectives. And it's uh as as we think about where we're going to take the podcast next year, it it's an exciting opportunity to reflect on what we what we've covered, but you know, what what are people asking for? What what kind of content is really going to engage people going forward? I think one of my favorites actually as well, Beth, was you you connected with knowing and and like listening back on this season, Jan was just phenomenal. She was such a wealth of information and she had such a human I don't even know describe it. She had very humanistic approach to her thoughts on education and the use of technology in that particular area cuz we we we'd gone around and we interviewed people in technology around sustainability and things like that and we deviated from education but then when when we had her on you know it was very much about skills, but she was really um quite thoughtful in some of her responses. I I really loved that episode this season. I agree. Yeah, she was absolutely fantastic. I I think when I look back on it, the show really peaked around season 2, episode 5. That was kind of that I think you just had this really quality speaker on. He was the national technology officer for Microsoft Australia. Really quality. I was quickly looking through the notes. 
I was thinking it took me a second, didn't know, but I but I was looking back at the notes because I was one questions M Ray had said but he was absolutely right and I want to ask you about it the pace at which you were doing these back then you were started in in it was I think it's September 2019 but you were banging them out almost one a week it was almost one every week or week and a half did you do that how did you get that like because I see how much work it is now it must have been hard work back then like consuming a lot of your time do you know the hardest work was the editing afterwards like like there were some great moments and there were some terrible moments and probably the editing was the terrible moment and and the reason was certainly for me I didn't realize how many times I said um and I was determined to not let that stupidity come through on the podcast and so I took out all of my ums and because I was good to Dan I took out the three of his as well but listening to your own voice every week yes gosh that can be that can be wearing I I I remember you actually recorded I think before one of the Christmas episodes you you put all my arms together it was like five minutes worth of mys and h and I I don't know if that's still online, but if it is, that is an excellent. If it isn't, we need to attach it to this episode as as a bonus special. That would be great. I think it was I think it was December 19. Is it as a song? I' I'd love to see the charts. It was great actually. It's really good. Good out takes. And I think when people are listening to this podcast now over Christmas, they're trying to make it topical, but also thinking about out there in different ways to also bring that human element into it as well. I think we've been talking more about and at the end of this episode we're going to share our um top tips for Christmas and like gadgets you might be thinking about or things we might have got in line for our Christmas presents without sharing too much detail if the kids are listening. Uh you know lots of things like that which we've added as we've gone through and things work and some things don't. Yeah, it's been a we've experimented with lots of different things I think over the time which has been part of the problem Ray I guess. 
as you've been a listener is as a journey we've you know we started as you started the AI and education podcast and we became a bit of the AI podcast and then we became the technology podcast and now we're the kind of the storytelling podcast we it's yeah it's it's been it's evolved a lot I think it's fair to say you can see so much change so quickly across the breadth of the podcast but even just in the last year and the one thing that jumped into my mind was NFTTS and crypto I think earlier on in the year we did an episode on blockchain and metaverse and you know the tailwinds for the metaverse and NFTs and crypto and then in the last six months you know things are going going down the pan with crypto yeah how can you predict these things you know you mentioned there Ray about predicting what was coming up and having some idea it's almost impossible isn't it from year to year from month to month at the minute it's it's amazing as well I I feel that there's a little bit of the journey that we've gone on with AI where it's like the Wizard of Oz somebody's pulled back the curtain and there's just a guy rolling rolling the rollers because I I've been listening this this last week to the Robo Royal Commission and and I remember a few months ago thinking it was a terrible weapon of math destruction that AI had gone wrong and now I know it was a formula in a spreadsheet. It was like one cell in a spreadsheet that went wrong and it's got nothing to do with AI. So, you know, it's fascinating to kind of see the the workings behind some of the things. I I read somewhere that a lot of um startups are pitching themselves to venture capitalists as being AI driven, but they're really human driven. Um, they'll do the AI bit next once they've got the the funding. It it's it's funny you bring that one up. Oh, sorry, Beth. You got No, no, I was just going to say it sounds a little bit like Theronos, which was um, you know, the medical research equivalent of of that exact thing. You know, you sell the bells and whistles and the vision, but actually you're still doing the manual work behind the scenes, and it's Um, there's still a gap in terms of what the vision is and actually what you're doing in real life except for the fact maybe that Elizabeth Holmes made it all up and at least AI does kind of exist for that I suppose. But but yeah, it's but you're absolutely right. I hope it's not that bad. But I I was going to comment on the because you brought up robo debt and I think that's a really timely one and and not in any way to unell the massive impact that had on many people's lives. But it's not really AI, it's technology mis abused, it's data abused, but it's become the It's become the poster child for why AI is a problem in our world and I think that's really dangerous path we're on. So yeah and and just listening to the the commission what's really clear is it was a group of it seems to be a group of people were determined to make this thing happen and the technology was used as the tool for that and you know I think often we talk about biases in AI and I know you talk about it heaps over the the last three years um but It's a really good reminder that you can set out with the with the wrong intention but use the right technology to deliver the wrong intention. And you know that situation hasn't changed. No, agreed. So look, I think um we could talk about uh not so much the looking back on the show. 
I think it's been great to look back at some of those episodes — you guys really covered so much in that short time. Before we start looking at where we go next and the world of AI that we're living in, I'd love to ask, if your memory still serves you: do you have highlights and lowlights of that time? Things you remember from when you did the podcast where you just think, oh, that just nailed it? Two moments for me. One was after the first podcast, when we just had this little spidery mind map of the things we were going to talk about — I think it had six things on it, and it was supposed to be 25 minutes, and it came out at exactly 24 and a half minutes. The mics went off and I turned to Dan and said, you know, I think that worked. It was just: we've got this idea and we made it work. The other best moment — this started as a skunkworks project. We couldn't work out whether we were officially allowed to do a Microsoft podcast about AI in education, but we couldn't work out that we weren't allowed to either. So what we did was say, well, why don't we start it and do it, but we won't put the Microsoft name to it — we'll just do it. And we got three weeks in, and the global VP for education, Anthony Salcito, just blasted out to his millions of Twitter followers: there's this great AI in education podcast my team are doing, you should all listen to it. That was a high, because at that moment we suddenly knew that we could get away with it. And there's two things, right? It was that very British exuberance you showed there, when you turned off the mics and turned to Dan and said, well, that was jolly good, wasn't it? Very good show. There was no, yeah, you bloody little ripper — it was just, yeah, it was great. Very good. Well done. But I can imagine it would have been quite a moment. Yeah. So, looking back at 2022, then — I mentioned my element of the NFT and crypto bubble, bursting, growing, bursting, growing. What are your thoughts, Lee, Beth and Ray? Maybe start with you, Lee. What were your memories of the last year? What things jump out at you? Well, because I'm old and I have a short-term memory, it's probably the last 15 seconds that are most apparent in my mind. But I've been really quite deeply interested in this process of generative AI, and this acceleration we've gone through into the idea that we can use it to create things. Dan, I know you and I did it, and Ray, I think you might have done it back in the day — the conversation around whether AI could be creative, or whether AI is able to take on that humanistic content. I remember arguing the point, saying, no, AI is not creative, it is just a tool based on human cognition. But here we are seeing stuff created, and I'm not just talking about the really recent ChatGPT stuff, but that DALL-E moment and then DALL-E 2 — I mean, DALL-E was kind of interesting, and CLIP before that, which was the precursor to DALL-E. But DALL-E 2, this idea that suddenly this stuff was really quite interesting and good, for me has been a bit of a wake-up moment.
It's a bit of that moment, you know, when you something you've held to believe for some time suddenly has been shaken with this idea that actually it it's something different and not what you thought and you have to reset your thinking about what AI is really capable of. That for me has been a really interesting look back and I'm going to take us back in time because episode number four for was about chat bots and and we talked about the example then that absolutely fled me was the ability for a chatbot to understand that when Australians say aquatic center what they mean is swimming pool and so when I was putting in my query what time is the swimming pool open and the website knew it as an aquatic center it did that translation bit it's like oh my goodness it understands different variations of English and it's smart enough to do that and then you know link for to now and chat GPT we'll come back to it but chat GPT mind-blowing in the last uh two weeks for me I think you know you talk about AI and how it how it helps and assists our lives and technology when it works is such a tool for making our lives better but then it's also a massive problem when things go wrong and I'm thinking about some of the data breaches that Australians suffered towards the end of this year and so I'm a um um Optus customer and I had to have all of my personal documents reissued. Uh heaven only knows uh what um what has happened to any of the information that I had. And then of course off hot off the the heels of Optus was Medybank and I think it's you know shone a light on a whole part of the technology world that a lot of people aren't overly familiar with. Um and and really put the spotlight also on government to to understand you know what what how how do how do they protect Australians from these types of things um going forward and then also you know as customers what personal responsibility do we have to protect ourselves I think it's as we look into next year people are going to be really a lot more focused on security and and I think that's that's one that's quite sticks out in my mind is the unseen AI and know we've talked about that on the podcast previously as well you Ray just mentioned the black box of AI there, but the the AI that we see pervasively in tools that we might use and things, you know, when you're talking about security there, what jumped into my mind is all of the tools and technologies because of all the signals that are happening in whatever technologies are using all that AI in the background, which is the only way we can catch a lot of these hackers and and uh I suppose find out what signals are happening and where we can kind of um control those. The AI in the background to stop a lot of that. uh has been phenomenal over the last couple of months as well. So that's been a quite a good good year for unseen AI as well. You know, I want to come back to something that Ry mentioned earlier. Well, actually you mentioned it, Dan, and we will get to chat GPT because I think we've got a lot to talk about there because that's such a big thing. But you mentioned NFTTS and you know and and and crypto broadly and we think about that being fundamentally about blockchain which is about creating chains of trust which is about dealing with these issues of public disclosure of data in private in in private manageable ways but without central agencies that can kind of lose it on our behalf. 
Ray, I'd now love to get your view on that world of Web3, crypto and NFTs, because for me personally, I was a big NFT denier and I've sort of come around to the idea that there's the basis of a really good idea in there. It's just being executed poorly right now because we're, you know, doing stupid things with it. But I'd love to get your view on it, because I think it's going to be a big part of this privacy puzzle that we're trying to solve. Yeah, I think I'm with you in that I have been a cynic, and I'm probably not out of the cynical box, to be honest. I remember reading once somebody saying there's nothing you can do with a blockchain that you can't do with a database, and many, many of the scenarios that I've heard talked about, especially in the public sector and especially in education, actually have an authoritative source for the ownership of the data. So many times we're trying to solve a problem that's better solved in a different way. So I'm still not there yet; I'm on the fence. Can I ask a dumb question, which I'm becoming quite famed for? How do we even describe NFTs? I should preface that by saying perhaps the most comprehensive explanation I've seen is by an amazing woman on Twitter, Avalon Penrose, and she described NFTs in the funniest way. She also goes on to describe other things like blockchain and the stock market, so if you've never seen her explanations, I suggest you do. But for the normal person, what is an NFT? I'll give it a try, and then I think I'll throw it to Dan and Ray to correct me, because I heard this one and ended up using it myself in a recent presentation I did, which was this. If you think about the Mona Lisa hanging in the Louvre, that is a singular piece of art. It's a picture painted by da Vinci, and there's only one copy of it, the one that he hand-painted, and it's a wonderful, amazing thing, and it is priceless because it's the one he painted. There are a billion copies of the Mona Lisa around the world. I could put one on my desktop tomorrow. I could put digital copies of it everywhere. They are not worth the same money as the Mona Lisa. They don't have the same value. They are the same image, totally, but they're not unique. They're not the original one. And you think about an NFT as a way of saying, okay, if I create something digital, how do I make it as unique as the original Mona Lisa while still allowing copies of it? Because, you know, there's no protection against copying an NFT, but I still attribute a sense of rarity and value to a digital object, which inherently, by its very definition, is not rare or unique. That's what an NFT is trying to do. It's trying to attribute rarity to something that is inherently not rare. That's my understanding. I don't know if that makes it any clearer. Ray, Dan, any thoughts? I'm still in the cynical box, I think, Lee, so I'll stay there. I was going to bring the Mona Lisa into my description of it as well. I think it's like whacking a photocopy of the Mona Lisa up in a different building and saying that copy is yours. Which is great until somebody loses the keys to the building, which is I think what's happening with a bunch of NFTs at the moment: the key holder has disappeared and suddenly your photocopy is inaccessible. But I think that's the guy that went missing, isn't it?
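As an illustrative aside on the idea Lee describes above: attaching a claim of uniqueness to freely copyable data can be sketched in a few lines of Python. This is a toy model only, with made-up names, and not how any real NFT standard or marketplace is implemented; the point is simply that the token is a ledger entry about the object, not the object itself.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Token:
    """Toy stand-in for an NFT: a ledger entry, not the artwork itself."""
    token_id: int
    owner: str
    content_hash: str  # fingerprint of the digital object being claimed

def fingerprint(data: bytes) -> str:
    """Hash the digital object; every perfect copy yields the same hash."""
    return hashlib.sha256(data).hexdigest()

mona_lisa_jpeg = b"...imagine the image bytes here..."

# Anyone can hold an identical copy of the bytes...
my_copy = mona_lisa_jpeg[:]
assert fingerprint(my_copy) == fingerprint(mona_lisa_jpeg)

# ...but the ledger records exactly one token pointing at that fingerprint,
# and "ownership" changes only by rewriting this entry (on a blockchain, in practice).
ledger = {1: Token(token_id=1, owner="alice", content_hash=fingerprint(mona_lisa_jpeg))}
print(ledger[1])
```

The copies of the bytes stay as copyable as ever; the scheme only works if people choose to value the ledger entry, and, as the conversation continues below, if whoever holds the keys to that ledger doesn't disappear.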
He did, didn't he? Yes. But I think the problem, and why people are cynical about this particular technology, and the crypto field as well, is that, exactly like you said, Ray, some of the applications have been developed in, not a nonsensical way, but in ways that could have been done with similar technologies, like a database. And I think, you know, when people are looking at things from outside technology, or even if you are involved in technology, and you're asking why the first tweet was sold as an NFT for however many million dollars, it isn't tangible to put value on digital assets in the way that we do for things like the Mona Lisa. So I think some of the examples that we've been using and seeing are sometimes quite easy to poke fun at, and that's where it all comes tumbling down. Right. So I think we're on the basis of a good solid argument here. But the point being, you're absolutely right, and I think this is back almost to where AI is today, because the problem is not the idea of the NFT. The idea that somebody wants to attribute value to a digital object, for the world that we all don't belong to, that our kids are going to belong to, that's actually a very realistic and probable outcome that they will live in. The problem we have today, much like AI, is the way it's being implemented: the NFT structure, the JSON model, the way it's been built, central clearing houses like OpenSea that actually don't work because people are corrupt and stupid. Those are the things that are making it fall apart today. But if you take your head away from what we're doing now and ask what the essence of the idea is, that's my thinking about NFTs, does that make sense? The idea that a Minecraft asset, a thing you might build in Minecraft, has value because you made it and it's unique, even though it could be copied. That's something I think is worth exploring, even though I'm not fully there myself in my head. Yeah. And I kind of think back to: we had NFTs before, but they weren't called NFTs. They were called stamp collecting, and stamps became incredibly valuable because people wanted them, and now they're not, because people don't want them, because we moved on. And now we're on to NFTs, and whether it's NFTs of artwork or digital clothes for your Xbox player, people want them, and I'm guessing that cyclically we'll go on to the next thing. Your point is that there's underlying technology there that can do amazing things. Let's disregard maybe what we're currently using it for and think about what we could use it for in a positive way. Well, look, I've heard your arguments. It still doesn't quite make sense to me, and I'm going to be honest and say I do prefer Avalon's explanation, so I encourage you to listen to that. Now, you were talking about ChatGPT. Tell me, what is that? Go on Ray, you're the guest of honor. Well, look, this is me. I'm really embarrassed being in the company of Lee trying to explain something, because Lee would start with a really brilliant deep dive into the technology. I see it as a user thing: this amazing way to generate answers to questions in a textual kind of way.
It's like The first thing I did was I went to it and said, um, I want you to tell me the story of the origin of McDonald's restaurants in the style of the first paragraph of the Bible, King James Bible, and it did it. And it's like, how does it generate those two completely bizarre ideas and put them together? Um, and what I watched as as probably in the first 10 days as people started to get their head around it, as they realized they could go and ask it to generate anything, whether it's a short thing or a long thing or a an article or whatever. What what I noticed was educators diving into it and first of all fearing what it would do about for their world with students and then changing completely to suddenly realizing it was going to change their world. So it started with people setting their assignment questions getting it to write an essay and then marking that essay and go well that's a B+. Every one of my students can get a B+ now in 10 seconds. And then the Ultimate I think by the weekend the first weekend after it had been released was I watched an academic tell his story about he went from I'll get it to set the question so it'll write the assessment question then I get it to write the rubric and then he went to the other one and said write the answer and then he went back to the first one and said mark the question and then finally he said write the feedback for the student and there was that scary moment that it managed all of those bits well and suddenly you don't need the teacher or the student like you can do automate the whole process. Well, right, it's it is scary when you put it in that context of just what it can do. But I would actually start the I would take the conversation a different way. I wouldn't have gone with a deeply technical one. I'd actually go with the Douglas Adams theory, which I know you will all accept and and love, which is the infinite number of monkeys theory, which is not Douglas Adams, of course. It's from many years before that, but I remember the Douglas Adams instantation of this, but that's what it is. It's a large language model. It's been given essentially we now have infinite monkeys in this large language model that are able to generate this script this content simply by having all of that data. But it doesn't mean that it is neither logical right or good in any way. It's just grammatically and logically correct, but could be total and utter gibberish in terms of its actual point. And we've seen some examples of that. I think been some interesting examples of it. But for me that's the it's it's the instantiation of something that for me since I was a little kid reading Douglas Adams has been this sort of this idea of something that is magical in my head that you know you create an infinite number of monkeys they can create the words of Shakespeare just by simply banging random if you didn't know that's the theory that by infinite number of monkeys generate the words of Shakespeare by random randomly banging on keyboards that's what we've got in a modern highly technical highly sort of scaled way but it still doesn't really know anything. It doesn't know the things it's telling you. It just knows how to emulate the styles, the concepts you're doing. But, you know, we should get back on point around education. It is quite scary in that it is able to generate that level of content that is good enough to fool exam boards. Uh I believe the Azure and AWS exams have both been passed by chat chat GPT. 
Now, you know, that kind of stuff does make you stop for a minute and go, "Hold on." Dan, you're a teacher. If you were still in the classroom at this point, would you do something like tell your kids to go to ChatGPT to write the essay, and then to edit the essay and show you how they improved it? See, that's a great pedagogical way to do it, isn't it? That's a great way to do it. But then it becomes that element of trust, because you could just feed that exam in and ask ChatGPT to do exactly that. We get into the essence of learning here, don't we? And I think that's where all the teachers I've seen, all the examples I've seen on Twitter and LinkedIn and YouTube over the last couple of weeks, it's all been, you know, "this is the end of assessment." So it is going to make people think differently. Maybe it's a silver bullet for the end of high-stakes testing, in one way. Maybe it's a better way to start to think about different ways to assess kids' abilities, rather than just on a test. Yeah. And I think one of the ways that we can see beyond the hype, because it's really easy doing the hypey stuff and getting some really funny results or really deep results: I used it for a real scenario. I had a difficult email to write and I wasn't quite sure how to write it, so I asked ChatGPT to write it for me. And then I got it and it's like, it's not quite right, it's a bit formal for me. So I said, "Can you make this a little bit more informal?" And it did. And then I said, "Can you include this example?" And I got to an email which was like, this is pretty good. It'd be really funny if I sent it. It's like, well, maybe I should. So I did. I copied and pasted and sent it. I haven't had a reply in five days, so I'm thinking the person I sent it to hasn't immediately gone over to ChatGPT to say, "How do I respond to this email?" But doing a real thing, as opposed to a made-up scenario, is I think the most revealing about its strengths and weaknesses. I've just translated some Australian traffic laws into Shakespearean English, and I must say I'm really impressed, and I can see how this is instantly going to change my life. Do you think we could use it to write our annual performance reviews? Guys, I'm thinking that's the idea. So not only can I get it to write my annual performance review, but I can do so and submit it in Shakespearean English, which I think adds a certain something. I think you're on to something there. But just to go into the technology with us, because I think it's well worth spending a couple more minutes on it: every so often we come across things in technology which sort of spread everywhere. I was in my guitar-making class about a month ago and one of the guys was talking about DALL-E 2, showing somebody else, "look at this image, a kitten riding a bike on Mars, eating a pizza," people adding as much as you can add to it, and he was generating those images. In terms of the actual technology behind it, you kind of alluded to the fact that the ChatGPT engine is like reinforcement learning, and quite limited, and doesn't really understand what's going on.
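One small, concrete way to run the "write it with AI, then improve it" exercise Dan floats above is to diff the generated draft against the student's edited version, so the teacher marks the changes rather than the finished text. A minimal sketch using Python's standard difflib; the two drafts here are invented placeholders, not real model output.

```python
import difflib

ai_draft = """The Industrial Revolution changed society.
It began in Britain.
Factories appeared in cities."""

student_edit = """The Industrial Revolution transformed society in uneven ways.
It began in late eighteenth-century Britain.
Factories appeared in cities, drawing workers away from farms."""

# Print exactly what the student added, removed or reworded.
for line in difflib.unified_diff(
        ai_draft.splitlines(), student_edit.splitlines(),
        fromfile="ai_draft", tofile="student_edit", lineterm=""):
    print(line)
```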
What is the general premise behind these things that have appeared in the last couple of weeks? Well, look, I was probably being a bit lighthearted with that kind of view, but it's a large language model, which means it's essentially trained on the syntactical structure of language, the connections between words, and the way in which words are used to structure styles and types, as we've seen, right, the style of Shakespeare and so on. And to be fair, DALL-E and, behind it, image generation, Stable Diffusion and everything else, are essentially language models that have been infused with image content, trained on the same thing, with the two mapped together so you can ask it for a picture. This is the interesting dichotomy which I don't have an answer for, because in one frame you would say all they are doing is regurgitating what we have told them. I mean, in a sense, they're only repeating knowledge that they've learned. So, in Beth's example, it knows that there are kangaroos in Australia because somewhere it's read, enough times, that kangaroos and Australia go together; the correlation is high. But it doesn't know that, and this is the thing about whether or not this is, you know, the Turing test, the knowledge point: it doesn't know enough to be able to go beyond that set of knowledge, or at least we think so. And we've seen various people stand up and say they've seen sentience come out of these things, and that's something I'm not willing to comment on, because I don't know yet whether or not we've reached a point where these things are sentient. Put it into the hands of experts and it becomes more powerful, because, and you don't need to be much of an expert, just between the four of us, Beth just shared that Shakespearean rendering of the traffic laws. I don't know if you spotted it says that we drive on the right. That's very dangerous. And so that's why you need experts. Thank goodness it's not going to make any of us redundant. Yeah, that's a really good point, isn't it? I mean, you would never have noticed it: you read it and it's a hugely amusing and funny and well-written piece, but it's totally wrong. And if that was instructions on, you know, wiring a machine or building a tractor or whatever, lives would be lost. Lives will be lost in this example. Somebody's going to invent a time machine, a Shakespearean traveller is going to turn up in Australia, and we're going to be wondering why there are all these strange accidents. But because it's about a learning process, Dan, as you pointed out, we only have to tell it that it's wrong, that it's actually left, and it will from that point on be correct. And that's just the learning process. So then you start to go, okay, well, now, if it can learn from that, and we can guide it in the right directions, and, to Ray's point, you get smart people who know that domain to guide it, you do end up with something very powerful. And I think the big bit about this one is the fact that it's not narrow AI in the AI sense, where it only knows about answering road traffic questions. You can literally ask it anything. Yeah. This is why it's become so profound, isn't it? Because we've been talking to chatbots since series 1, episode 4, whatever.
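Lee's description of a model that only learns which words tend to follow which can be made concrete with a deliberately tiny bigram generator. This toy is orders of magnitude simpler than the transformer models behind ChatGPT, so treat it purely as an illustration of the failure mode he mentions: it produces plausible-sounding sequences from its training text without knowing whether they are true (the training text below is made up).

```python
import random
from collections import defaultdict

training_text = (
    "kangaroos live in australia . koalas live in australia . "
    "cars drive on the left in australia . cars drive on the right in america ."
)

# Count which word follows which: this is the only "knowledge" the model has.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def generate(start: str, length: int = 10, seed: int = 0) -> str:
    """Walk the chain of observed word-to-word transitions."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

# Fluent-looking output, but nothing here understands Australia, kangaroos or
# traffic law; depending on the dice it will happily claim that cars drive on
# the right in australia, much like the Shakespearean traffic-law example.
print(generate("cars"))
print(generate("kangaroos"))
```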
You said But I've been creating chat bots myself and it seems to have whatever you know you know it'll probably be worth an episode on itself unpacking this at some stage because it seems to have you know like you race it in the last two weeks those the the the exponential move forward with this type of technology even though it's sort of not some of the technology we've been talking about over the last year um in terms of AI uh is is has really moved forward in leaps and bounds. Is there anything else uh outside chat GPT that people are looking forward to in 2023? What else is coming up in your worlds? I know metaverse is a big one last last year, wasn't it? Yeah, I think we actually met Metaverse and education is an interesting one and you know Ry, you still obviously live very much in the education sector. What what do we think about that? I mean, what's the viewpoint from your side of the of the world in terms of is metaverse going to be a tool get in the way of education? I think the thing is going to pull this back in education is the tools to create metaverses don't really exist. You I remember the first time I strapped on a hollow lens. It was an amazing experience and I couldn't help but imagine what you could do within education but the real barrier is creating the content is outside of the bounds of of most uh education organizations. And I think we're in the similar boat with metaverse. And so something's got to change. Either we've got to have the same tools that we have that allow us to create slides and documents to create things in the metaverse that are high quality or we've got to have a different education system that isn't a cottage industry institution by institution and have somebody that can afford to make something for the globe like the movie industry does. It's such a good point, Ry. I think um is the quote the future is already here. It's just not distributed evenly. And I think whenever we're talking about some of these big trends especially when they are reliant on internet access and devices and computers all of that sort of stuff even if it is going to be a big thing who's it going to be a big thing for and you know how many people are going to be left behind and and to that point I think when I think about 2023 I'm more excited about how you we are starting to see greater collaborations to use technologies to actually solve the world's biggest problems I'm feeling more optimistic now about our ability to collaborate at scale across um multiple platforms, multiple organizations and multiple countries to to get a handle on some of the biggest challenges that we've got. So that that's I'm cautiously optimistic that almost everything will be solved by by next year. I want to I want to we should anchor on that point of optimism, Beth. I think it's a great way for us to start thinking about rounding things up because I'm keen to get your sense, Ray. 
I mean, what are you optimistic about for the next year not so much the technology but you know in the field of AI and education where we started this long journey what are you most optimistic about I I think two things one is I think we're at an inflection point with technology and education that could lead us to a completely different place and and that's partly driven as we come out of a pandemic world into a world that isn't going to flip back to the old model in the same way that businesses are saying how do we get people back to the office universities are saying how do we get people back to campus and then there's that realization that's not going to happen. So you have to have a different model. That inflection point I think is good. I think the second thing is we're kind of get going to get better at redefining the problem. So up until now education again I'll talk about higher education because that's the world I live in. Um in higher education it's always been about the piece of paper at the end. The uh what I would call the celebration of of leave losing a customer um and what would be called the graduation. And and I think we're going to redefine the problem. because my word lifelong learning is going to be absolutely critical. And so celebrating the end of education is exactly the wrong thing to do. So how do we redefine the problem in order to solve the real problem? And um that's going to take some really out of the box thinking and I think we're going to be given the opportunity as a global population to think about how we do that. Yeah, great point. Okay, are we nearly at that point? I think we've kept Ry for almost long enough. But Dan, you did allude early on and you've said it a couple times that maybe we had some special tips, gifts. Oh, no. We got a huge quiz for Ray, haven't we? No, we haven't. He's failed. Yeah, he's failed. So, we were going to think about, you know, our our tips, our Christmas gifts as a parting end to the season and the year uh for for people. And I'll I'll start with mine um while everybody else is thinking about theirs. Uh mine's mine's a sort of semi-serious This one is it's about the family safety settings. You know, if you are it's one of the passions of mine. I think love buying kids technology and and getting all of that sorted. But, you know, I think my top tip slashgift would be if you're a parent listening to this podcast, make sure that whether you're buying your kids a Apple device or a Xbox or or a Windows PC, make sure you tap into the family security center settings, you know, especially with the Microsoft element, you know. There's a lot of stuff you can do in there like monitoring kids usage of tools and technologies uh when they're logging in and you know I think we too accustomed as as a society to buy something in the box and don't set it up you know I suppose the tech second golden tip would be make sure you put your Xbox in plug it in a few nights before and do all the updates so you don't do it on Christmas day um you know get all the updates done put the game your son or daughter wants to play in get all the downloadable content installed for them before they lo in. But setting it up properly um with parental access, you know, sets you in the right frame of mind for for the appropriate use of technology if you've got younger kids, you know, some of some of which has worked to me and some of which I didn't do and you know, I know that can come unstuck really quickly. So, that would be my tip. Family safety settings. 
Beth, how about yours? Oh, good tips. So, um actually gadget wise, I've bought my daughter one of those little smart watches that allows her to make a few phone calls. So, it I feel pretty conflicted about getting her something along those lines, but like any good kids Christmas toy, it'll be broken in about 2 weeks. So, so it'll it will go in the pile of of disused um disused toys that we have in our house. I I personally think and if you reflecting on on our lives here in Australia and thinking about people and place like Ukraine, I don't need anything else and my family don't really need a lot of different things. And actually, we we're making the choice not to buy Christmas gifts this year and just um spend time with one another. I I am spending a bit of time getting the kids to hand make a few bits and pieces. So, it's a little bit retro um to to do this. It's um two reasons. Number one, I was a bit disorganized. has to arrange anything else. And um my daughter's got a a new hot glue gun, so we're going to make use of use of that. And what could possibly go wrong with those combination of things? Um having spent $260 at um like a craft shop recently. Um I'm I'm looking forward to making you the family gifts that I could have bought for like $10 that came up. But never mind. That's handmade. Handmade with love. Love it. Now I'm super worried about you having a hot glue gun and a small child in the same room. That's that just feels like that's a recipe for disaster. Maybe I should I should check chat GBT in terms of what could go wrong. Well, funny you should say that. If I think about my tips, one of my tips was going to be go play with it. Have fun with it. It is fun. Don't use it don't use it to write your thesis. Uh don't use it to write your uh your emails necessarily, Ray, despite your uh your tips earlier. But have fun with it because it is a really interesting toy to play with. Uh contr to that, get off the internet is my other tip. Uh not that it's a bad place, but we all need a break from it. I certainly do. And so my my my time will be spent more with my dog than my internet as much as I can over the Christmas period. And the only other one I want to bring up back to your point, Dan, because I've got teenage kids. I got a 12-year-old and a 16 year old, which is I love them to death, but my goodness, it is the toughest time to have children. And we could all argue about the challenges of of the different ages, but my focus with them has shifted away from controlling their screen time to understanding their screen needs and talking more about what are they doing online because it's no longer a thing that I can take from them. It's intrinsically their life. So now I have to figure out how I work around it. So stop trying to tell your teenagers not to use the internet, not to use technology, but learn what they're using it for is my tip. Ry, you can take us home. Wow. Uh I'm gonna go non tech as well. So I think uh really good book. Um, if you want to read a book related to the world we're in and the kind of things that that are talked about on the podcast, I'll recommend a couple. There's a book called Broken by uh Paul LeBlanc, who's the president of uh Southern New Hampshire University in America. It's great and and it's a very human book. It's talking about how do you solve big awkward problems uh not just in education. Uh there's another one if you're a frustrated middle middle manager or you aspire to be but you don't want to be frustrated when you get there. 
There's a book called Hack Your Bureaucracy. It's American as well, but it's from people who were in the middle of the system and frustrated with how the system worked, and so it's just awesome about what you can do, rather than what you can't do, because I don't know about other people, but gosh, I don't want to spend too long dwelling on what I can't do. I want to work out what I can do. And if you're into neither of those and you want a one-minute read, go read about what happened at the Nathan's hot dog eating competition in 2001 and the way they suddenly discovered how to eat twice as many hot dogs in 12 minutes. They'd had 20 years of people eating 20 hot dogs in 12 minutes, and then suddenly it went to 50. And the story of how it changed in 2001 is amazing, and it gives you inspiration for how to reframe a problem. Is it something to do with soaking the bread in water? I've seen some of those videos. That's the one. We're going to have so much in the show notes for this one. The pinnacle of human achievement, multiple times. Well, Ray, thanks again for coming on today to talk to us and to our listeners before Christmas, and for having this discussion about all things technology, and thanks Lee and Beth. I hope everybody has a wonderful Christmas and happy holidays, and whatever you celebrate and however long you're having off, just make sure you have time to disconnect and power down, I suppose. That's the key for us all. We'll see you in the new year. Achievement unlocked: finishing on hot dogs. I love it. Thank you so much, Ray, for joining us again, and thanks Beth and Lee. Have a wonderful Christmas and we'll see you in the new year. Thanks everyone. Thanks all. Thanks.
Dec 12, 2022 • 39min

Sustainability and the Future

Welcome to the AI podcast! In this episode, Beth, Dan, and Lee are joined by the Microsoft ANZ Sustainability lead, Brett Shoemaker. This episode discusses all things sustainability. This podcast is produced by Microsoft Australia & New Zealand employees, Lee Hickin, Dan Bowen, and Beth Worrall. The views and opinions expressed on this podcast are our own. Show links: https://www.linkedin.com/in/brettshoemaker/ ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 5 Episode: 11 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Welcome to the AI podcast. Hi Beth. Hi Lee. How are you doing? I'm going to come out with it: I'm new to this game, I've been struck down by COVID. So I'm dedicated to the cause, I'm here to join in and listen in, but unfortunately I'm dealing with my first ever experience of COVID, which, if you've both had it or if any of our listeners have had it, is not much fun, is it? No, it's not in my list of highlights. It sounded like you've been through all of the symptoms, though, and perhaps you're out the other side. I think so. My wife was upset because I couldn't taste any dinner last night; it just tasted the same to me. Mental note: never say that to anyone, even if you feel it internally. But no, look, it's mostly over now; it's dealing with this sort of sore throat and coughing. This isn't the COVID episode by any stretch, but you may not hear too much from me today, guys, because I'll try not to cough over your good conversation. Well, luckily we've got a special guest today as well. We've got Brett Shoemaker, our Microsoft ANZ sustainability lead. So it's really exciting to have Brett along to the show today. Hi Brett, how are you? I'm good, thanks for having me. I can't promise that there won't be coughing, because while I don't have COVID, that I know of, I do have a cough as a result of having a small child in daycare, which they brought home and handed over to me. Yeah, we've all been there, I think. How have you been, Beth? Yeah, I've been really well. I mean, my family have had COVID, so we're facing another wave here, again, but I am not getting COVID, touch wood. So I've been really well, and actually I've been very much focused on sustainability, given a heap of global events that are happening around sustainability at the moment. So I'm very excited to have the opportunity to talk to Brett today. Yeah, and there's a lot to unpack, isn't there? There's been a lot going on, and I'm here to learn more as well, because I've seen a lot on the news and I've seen a lot of the sound bites, but I think the devil's in the detail, and actually understanding some of these things that have gone on in the last couple of weeks, and over the last several years really, and the acceleration, has been fantastic. So, before we get into the detail of it, Brett, can you tell us a bit about yourself and how you got interested in sustainability generally? Oh, sure. So, Dan, you gave a bit of it in the intro.
So, I I as the head of sustainability for Microsoft and ANZAD, I guess my usual line is um unlike many sustainability officers, I actually don't spend my days in the world of uh compliance or reporting is actually in working with our clients and partners to uh advance their sustainability journey. So, really in support of others. Um look, I you know, I I can't say you know I don't come at it from uh a science background in terms of how I got in the space. You know, I've as in my time at Microsoft, I've always worked on uh incubation businesses for the last 15 years and this is really it shares a lot of the similar attributes just in terms of a uh a new muscle, a new motion, a new way of operating that we're doing. Um but then the the driver and I I'll share this is on the personal side um actually came a couple years ago right at the start of the pandemic. Um I we had we were in the unfortunate situation where um my my daughter who was three at the time was diagnosed with high-risisk metastatic neuroblastoma which was a stage 4 cancer. I'm happy to report she's a healthy kidney uh kid today. Um and uh but we did go through a very difficult and trying 18 months of treatment and and and I can't pinpoint the moment but there was definitely a moment along the journey where I said it felt a lot more about what I want to be doing and how I want to be spending my time. And um and I always felt, you know, I I didn't have the ability to help uh cure her cancer, but I sure as heck could, you know, work to make the world a better place around her for for when she does win her fight. And uh and that was really the the genesis of starting to to work in this space. Um was, you know, how can I use the platform and the opportunities that I have today to to go do uh what I want to do and and and bring a bit more purpose uh to to my work. Yeah. Wow. Fantastic. Yeah. What an incredible source of inspiration, Brett. I um I I think we're all parents on on the the pod and one of the things that often strikes me is is the world that we're leaving um to our children and and what they are likely to inherit. I have noticed in coming back to Microsoft about four years ago that our focus on sustainability has been very much um uh very much a focus for us at as a global company. It seems like it's been um more of a central part of our strategy than ever before um and certainly more than it was when I started early in the 2000s. Have you observed that sort of renewed focus on sustainability and what do you think is driving that for Microsoft. Oh, it's been a drastic change. Um, well, I shouldn't say dra, it's been a noticeable change. I careful in the word using the word drastic cuz we like we made our first commitments back in 2009, put a carbon fee in the business in 2012. U, but it was in 2020 when we increased our commitments and and those just the quick version of those today are to be uh carbon negative, water positive, and zero waste by 2030 uh while protecting more land. And then we use in building a planetary computer. Nice long run on sentence there. Uh in terms of getting it out the um but the look the I mean the the truth is that when we increased our commitments in 2020 like we did so-c from a corporate social responsibility standpoint because we thought they were the right thing to do. We we did so with huge observation bias. We didn't realize how high it was on the agenda uh of others. 
Um and and I think when we increased those commitments, we got a lot of questions around the how are we doing it, what are we doing, what have we learned along the way that in in truth we didn't expect. Um and so I think that change that you referenced was really born out of a need to start to share some of the learnings and lessons both from like our steps forward and our steps back um with others and and like the way that I think about it is you think of Microsoft's overall emissions in a given year are 0.03 of a percent of the annual emissions. But then when you think about the impact that we can have in terms of working you know within the the nations where we reside and and with with and through others that we're already engaging with um from a digital technology standpoint today That's where our that's where we feel like our reach and impact scales. So very much if we're if we've learned if we've learned a lot from our journey, it would it would be a disservice not to not to share it uh with others. One of the one of the things that I have noticed is um you know technology isn't a manufacturing company. You know we don't have this massive footprint but what we can do is help customers measure their impact. And I I know that um that is part of the the battle is getting that visibility over um over those uh details. And in my prior job, I was working for a big global healthcare company and I literally spent a year trying to collect and bring together all of the data in one central place to uh identify the um our carbon emissions at a global um from a global point of view. And that involved maybe four or five other people and We worked literally for a year. So yeah, how how how things have changed. Yeah. I mean, if if I you know, I think back to a year and a half ago or maybe a little bit longer than that when I was really starting to get more into the space in terms of what does it mean for people that are that are that are pulling together their compliance and reporting and disclosures on a basis and and you know, I had heard the line that it's a very manual process. You know, a lot Excel files or even physical pe pieces of paper that are showing up in the mail and you know not that I didn't believe it but I think you know as you got into it just simply if you looked at it and said hey there's a massive opportunity for us to help with the automation of this like regardless of the context um I think that is certainly true and you know the you know the thing is that you know my supplier is your supplier my customer is your customer so there's very much a a a shared incentive going about work in this space from a measurement standpoint or proprietary way like would be the wrong approach because you know I I get the same request in terms of here's the 200 questions we want you to answer as it relates to what we're doing from a environmental sustainability standpoint and and hey you take those 200 questions and multiply it by another 200 entities asking a different set of 200 questions it gets pretty big pretty quick and So, you know, there I think there's there's a lot that can be done like the um I I was um I spoke at Impact X last week, which was one of the larger climate conferences in Australia and New Zealand. 
It was, I was on a panel relating to regenerative agriculture, and I think my opening statement was that we should all have optimism in this space, because all of the measurement and controls that we're talking about from an environmental standpoint are all built today in a financial context. What we're really talking about here is applying those same financial controls to the environment, in an environmental form. And so the same technologies and pieces that underpin it from a financial standpoint can just be applied in a different way. That's so interesting, the way that parallels. I never thought about that before. That's great. So when you're speaking to customers, and I know you've spoken to a couple of my customers as well, Brett, over the last couple of months you're speaking to more and more customers. What are you hearing from them at the minute about their sustainability ambitions? How's it kind of working down the chain there? Yeah, it's probably the hardest question to answer, because it ranges across such a wide spectrum, from those that are just embarking on a sustainability journey to those that are quite progressed and matured. So maybe, as a way to answer it, I'll give you what the data tells you, because we did some work, oh gosh, it was probably about nine months ago, in partnership with Goldsmiths, University of London. We worked with them because they had done similar work in the UK and we wanted a good comparison point, and we looked at large Australian and New Zealand organizations, because it was a report for Australia and New Zealand, basically organizations that are 200-plus employees in size. What that came back and said, or at least the executive summary of it, if you will, was: as Australian and New Zealand businesses, we're on the front lines of the climate crisis, whether it's the bushfires or the flooding events that we've seen, so it's very real for us. The ambition is there, with over three-quarters of ANZ businesses having net zero commitments, typically in the 2050 time frame, but they're struggling to make progress against them, with over a third of those businesses self-reporting that they're not on track to hit those 2050 targets or commitments. And there are really three reasons that consistently come up. One is around availability of skills: do I have the people to deliver against my commitments and help me make progress on them? The second was access to technology, and I mean technology in the very broad sense; if I am in the built environment and steel or cement is an input, then green steel is a requirement along the way for me to be able to hit my net zero commitment. So I mean technology in the broadest sense. What's interesting in that space is that while I think it was 80% of ANZ businesses saying they have a heavy reliance on tech innovation, less than half were actually investing to be a customer of, donor to, or investor in those same solutions that they'll ultimately require.
And then the third area is the one that we already touched on, around measurement. What you saw was that less than half of ANZ businesses were investing in tools to help them with the automation of that measurement today, and only 11% were actually mapping emissions back to their sources. And what I mean by that is getting back to where that core emission source sits, for example the building management system for the office space that you may occupy. The others were just doing an estimation, using operational data and general ledger data to estimate what their emissions were. Yeah, that's fantastic, and it's really interesting the way you brought that together in an easily digestible form. And that Goldsmiths research is fantastic; we'll put the link in the show notes, because there are some great learnings there and it really gives you a lens on what customers are doing across the landscape. So thanks for supporting that project, I think it's great. Brett, one of the things, I think it was one of my proudest days working for Microsoft, when we announced our global goals. They were, I think, described as moonshots, because we set these targets but we actually publicly announced that we weren't quite sure how we were going to achieve them. And when I read the latest 2021 report, I can see that we're reasonably on track with our scope 1 and 2 emissions, but off track with scope 3. I wondered if you could talk a little bit about where you see us on our journey, but also, one thing that I've observed Microsoft doing, and doing well, is publishing white papers and blogs and being really transparent about our journey. Do you think that if we are struggling, other companies must be struggling as well, and how important is it for us all to work together as a global community to solve these problems? It seems to me like it's not necessarily a commercial piece where we're competing and keeping all this intellectual property; we're choosing to disclose and move things forward. I wonder if you had thoughts on that. Yeah. Well, I'll start by saying the good news is those 2030 goals are not moonshots anymore. They are clear commitments with a path towards them. The one moonshot that's left is our 2050 goal of, by 2050, having removed more carbon from the atmosphere than we've emitted since our founding in 1975. On the piece about our progress: yes, one of the core principles of the work is transparency. That's stated very clearly; I think you can even find it on our public web page. And the reason for that, and it's another thing I talked about a little bit last week, is that the pledges were an important and critical first step. It creates clarity within the organization and helps people understand what the commitments are. And to be clear, there are actually 40 to 50 different commitments, because there are milestones along the way that mark the progress. So that piece is critical, but it's actually the work that we've done as we've made progress that has taught us the most. And that comes back to both the steps forward and the steps back, right?
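Brett's distinction between estimating emissions from operational and general-ledger data and mapping them back to their sources comes down to what feeds the standard activity-times-emission-factor calculation. A heavily simplified sketch follows; every figure and factor below is an invented placeholder, not a real reporting value.

```python
# Spend- or activity-based estimate: activity data x emission factor, per source.
# All numbers are illustrative placeholders, not real emission factors.
EMISSION_FACTORS_TCO2E = {
    "diesel_litres":        0.00268,   # scope 1, per litre (placeholder)
    "grid_electricity_kwh": 0.0007,    # scope 2, per kWh (placeholder)
    "purchased_goods_aud":  0.0004,    # scope 3, per dollar spent (placeholder)
}

activity = {
    "diesel_litres":        12_000,     # from fleet fuel records
    "grid_electricity_kwh": 350_000,    # from utility bills or a building management system
    "purchased_goods_aud":  2_400_000,  # from the general ledger
}

emissions = {source: qty * EMISSION_FACTORS_TCO2E[source] for source, qty in activity.items()}

for source, tonnes in emissions.items():
    print(f"{source:>22}: {tonnes:8.1f} tCO2e")
print(f"{'total':>22}: {sum(emissions.values()):8.1f} tCO2e")
```

Mapping emissions back to their sources, in the sense Brett describes, means replacing lines like the general-ledger spend estimate with metered data from the actual asset or building.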
So yes, in the last year, our scope 3 emissions increased um largely as a result of the pandic mic as we all uh transition to remote work, the rise in device use that occurred around it, the energy grids that were now we're now not in uh now not in an office that may have renewable energy powering it s you know maybe it's solar sitting up on the rooftop now we're in each of our individual homes um and and certainly as as someone who has three children under nine um I will say there was more iPad use and device use Xbox use during school during homeschool hours than it was, you know, when they're in school and and and and you know, Xbox, you know, people sitting in your home paying Xbox use is all downstream for us. It's it's part of our scope 3 emissions and so um but yeah, you know, it's sharing some sharing some of those learnings on the journey, right? For scope 3 emissions today, we have uh you know, we put a requirement in for a supplier code of conduct uh several years ago, I believe it was in 2020 at the time of those commitments. Um we have 80% of suppliers that are reporting today across all three scopes, but we knew that wasn't going to be enough. We added some tools and resources that were there to help them with those disclosures. And in the last year, we started working with the um specifically in Asia with the with the IMF and the World Bank identifying for our most material suppliers what are potential mitigations um helping them find uh where they can procure some of those um alternatives from and offering them affordable financing through the World Bank as a way to um to to help with the adoption of them. And so um you know many of those pieces has been a journey, right? Like I always think about the pieces that are within your control and the pieces that are within your influence. And so control the things that we can control and continue to invest to influence and and if I then I build to you know Well, hey, yes, you know, if I were to say any of our terms and contracts around offtake agreements for carbon removal, public documents today. Any of our learnings from it, public documents today. Um, any of the uh RFIs or uh EOIs that we've put out to market for carbon removal are freely shared today. And it's the the whole premise, like if I really were to strip it back, is those that have the ability to do more should. And we are in a fortunate position that not others are in. Partly given size partly given the geographic footprint uh partly driven by balance sheet and so the the sharing others is the you know there's there's not much time left between 2030 and no one's going to wake up on Jan 1 2030 and realize that they've hit a goal uh and so you know why have others go through the same lessons and learnings and challenges that we did if we can just shorten that accelerate it yeah thank you Brent I have a poorly formed question in my head around the technology side of this because you know I I so as a parallel to me I think about the work we did in responsible AI as a company and and initially that was a lot about how do we build better technology to be more responsible more ethical more more principled in its approach and it sort of over time really became actually you know what the tech is just kind of not even 30% it's just this little part of it it's people process and communication and mechanisms and other things that drive it and so I think about this with sustainability ambitions and goals. 
There's, obviously, as you've just talked about, a lot of it that is just talking about it and sharing information and disclosing, and building a community of people who can, and therefore should, contribute to this problem domain. But then parts of it are very technical. I know, for example, that things like measuring carbon as an asset is a very technically challenging thing to do for our farming and agriculture industry, to be able to buy, trade and ship it, and broadly just thinking about how we remove carbon is a technical challenge. So my question, which is poorly formed, is: from your point of view, is it a technical challenge? Is it a societal challenge? Given we're a technology company, how much do we think technology is actually going to be the impetus for change, or is it just going to get pulled along in the journey? So look, you will never hear me say that technology is the cure to the climate crisis that we face. It isn't. It plays a supporting role. I think about all the work that happens from a research and science standpoint, which effectively is using cores that sit in data centers to process and crunch data, and so it does have a role to play. Look, maybe I'll give you this: I recently heard Paul Hawken speaking. If you don't know Paul Hawken, he wrote Project Drawdown; he's one of the co-founders of Project Drawdown, which is an organization today. And gosh, I'm blanking on what his second book is, but I think it's "regenerative" something, and I'm blanking on what the second word is. I'm a huge fan of Paul's, full disclosure. So I heard him talking last week, and it was kind of in that same vein: a lot of the pieces from a sustainability standpoint, whether it be the words that we use, TNFD, TCFD, GHG, the acronyms, or, you know, "a nature-based solution", can often be off-putting when you're trying to help people understand what it is that they need to do, or to prompt someone to act. You can get that paralysis. And look, if there's anything the last 30 or 40 years has taught us, it's that just putting out data points and facts isn't going to actually help someone. So when I think about it, it's about prompting people to act by telling stories. So, to tie that back to your question, Lee, when I think about the technology, I think the technology is in the backdrop of the story, right? You're familiar with the partnership that we have with CSIRO around Healthy Country AI. In those stories about how we've used AI, I tell the story of the turtles on the coastline of Cape York and how AI helps support Indigenous rangers to hone their conservation efforts, by using that data to quantify the quantity and activity of predators in the area, enabling them to cover large swaths of land, to see 20,000 of those turtles make it to the ocean each year and preserve a species. And I use that because knowing the importance of something and feeling urgency are two different things.
I can know the importance of saving endangered species, but it's seeing the progress as those 20,000 turtles make it to the coastline each year that gives me the urgency to either act now or continue to act more. Uh that momentum piece and and so I think technology is in the it plays a role but it's it's it's it's in the in the background. It's the enabler of of that. Yeah, that makes a lot of sense and I I I wouldn't disagree with you. I'm glad you share that view. I just there's always a tendency I think that you know we we sort of see technology as the panacea of all good. It can you know if we build enough tech we can solve problems but I think you're right these these are big ESG problems that are not tech tech's an enabler but not the uh kind of the creator of the the solution if you like. Yeah, it's enabler and like and you're right it's the you know I never want to come like I think about any conversation I have I never want to come across as as pitching technology um at the same time uh you know think about the role that like the role that I feel that Microsoft can play in the tech sector at large is to help help the world innovate out of the climate crisis we face and so it is the like we we do know that innovation going to be required that we know that technology is going to have to help to scale some of these engineered and I'm going to use one of those words that uh Paul Hawin would encourage me not to use those engineered solutions but like the the solutions exist today there's a lot of work that needs to happen to see them deliver at scale and we we know that technology is a an enabler some of that yeah I think you know Brett the important thing is um because it's easy to see us as a tech company and see you as a representative of that tech company walking into a room and you know like you go to these events and you represent tech and so you just need to be kind of really tampering that back to the point that it's not about the fact I represent tech but actually what you represent and as we say in our responsible AI we represent the responsibility that we have as a tech company not the technology itself but the the position we want to take up which is hey we want to be part of the solution which is not just about our tech it's about our people and our process so I think it's great that you bring it up but sorry Beth you want to jump in? Yeah. Yeah. You know, I was just thinking about the broader interpretation of the word technology and if we think about technology in the context of the industrial revolution, arguably it was technology that caused this problem in the first place. So, the the way that we use technology to transport ourselves or grow our food, design our cities, you arguably technology was was um one of the ways that um this problem has come about in the first place. And it need needs to. Yeah, I love that that way of describing how technology can empower us to come up with solutions to solve this problem, but it's not in and of itself the solution. Um Brett, we're talking to you during the middle of COP 27 and you one of the things that I often struggle with is how complex this is as a a truly global issue and we we talk about all the kind of challenges we're facing Arguably, the politics and the personal relationships are going to be just as complex, but we need them to be happening in order to collaborate to solve the problem. Are you optimistic about these processes as a mechanism to to kind of get everyone on the same page achieving the same goals? 
Oh, am I optimistic? Um I guess I I characterize myself as a bit of a realist like I I I think that these global forums play a important role in the absence of everyone coming together and having that forcing function or some level of accountability around what it is that you're doing. I don't think that would I I don't think eliminating it would help the cause. And I say realist because at the same time I'm very very measured in terms of what to expect coming out of it. There's not a there's not a a magical answer that's suddenly going to emerge through COP and all all things be fixed. You know, this is a space that is going that requires both public sector and private sector coming together. It requires collaboration. Collaboration is not easy. Um if you want to go fast, you go alone. If you want to go far, you go together. Um and so and we're in a need to to go far and go fast at the same time. Um and and so uh but I I do say all right so let's take COP 27 have there been some things coming out of it that I view positively yes um there increasing level of rules and scrutiny around greenwashing and specifically for the financial services sector and how can I you know can I continue like will I be able to continue to make net zero statements uh while can you explain what greenwashing is? Uh greenwashing is basically um oh gosh I should have a really clear uh definition for you Dan. So um uh overstating the work that one may be doing uh to in pursuit of net zero or uh climate targets okay let's say that way a one and a half degree future okay so uh so increased rules around greenwashing policing of that and I use that word very loosely um to you know effectively raise the bar for what is credible in terms of you know do do actions match commitments and for the financial services sector it's often from a financed emission standpoint am I continue like will I be able to continue to uh uh fund and finance um uh fossil fuel expenditures and still make a net zero commitment likely not um the second there's an increased amount of pressure on the World Bank and IMF in terms of overhauling some of his processes which I think is uh good and and maybe I should say for those that don't know COP 27's focus this year is on the global south the southern hemisphere and how does the developing world sorry the developed world um support the developing world in terms of the climate transition right and but there are other areas that are coming out too like um uh I think with the G20 summit which is later this month you'll continue to see more uh announcements that are about getting private financing flowing. Um and then there have been um new rules to strengthen some of the carbon markets. Um which is the basically what does a high integrity carbon offset look like? Um and you know in in your ideal utopian world, you wouldn't require carbon offsets for us to reach net zero. The truth is that we do require them to to to get there in the time frame that we need to. And there's some goodness in them. if done correctly um and that many of these solutions because net zero is just a milestone we ide goal is to remove or or uh some of the carbon from the atmosphere such that we unwind you know what we've done over the last 20 or 30 years and so you know that you know the same solutions that support some of those offsets or is it's basically capital going to car carbon removal that we will require to to get beyond uh net zero and unwind some of the climate impact that we've had. 
Brett, the other element that's interesting about the process is just how integral it is to include corporate partners and that commercial voice. Have you got any thoughts on the role of a corporate? I know Microsoft is a strategic sponsor and we're facilitating a number of side events as part of COP. What is our role - what is the role of corporates at something like this? How are we important? Look, I'll go back to my earlier comment: this ultimately requires both the private sector and the public sector to act. And depending on where you sit in the world, the private sector or the public sector may actually be leading the transition. COP 26 saw the first really noticeable, sharp increase in the number of private sector attendees, and COP 27 has been no different. COP 26 was really where a lot of the financial services sector, from a private sector standpoint, showed up in force for the first time. And when you think about the focus this year - how do we continue to finance and support the global south - how do you do that in the absence of having the private sector at the table, offering, and I'm going to have to use the word negotiation, but being in some of those discussions about what that looks like? So I think it requires everyone to act. And the private sector is often where most of the large emitters sit - I can think of scenarios with nationalized infrastructure where it sits within the public sector, but the large emitters generally are within the private sector. So I think it's an important role for them to be there. So, bringing this all together: I love the comment you made earlier about how we should all have optimism in this space. What's exciting you most about sustainability over the next year? Give us something to live for. What excites me - what gives me optimism? I think younger generations are what I feed off for optimism: the level of climate awareness that's there. There aren't too many conversations where it doesn't come up. In every one of our new employee orientation classes I'm getting asked that question; every person I interview is asking about our climate credentials. It's a supply side and demand side piece, and when everyone - whether it be a shareholder, your employee base, or your client base - is demanding these things of you, it tends to have a really good outcome in terms of driving change. So I'd say it's that level of climate awareness in younger generations. And I also interact a lot with the startup and scale-up communities, and seeing what people are building on top of a cloud platform, the technology that they're building - I continue to be surprised. You continue to see use cases that you would never have dreamed of: the application of technology, or what someone's trying to accomplish. That piece gives me optimism. Yeah, it's fantastic.
I mean, we just saw a climate election with our last federal election, right? That was a great example of the power that each of us holds, whether we vote with our dollars or we vote with our actual vote from a political standpoint. Yeah, and so look, it's a really important point that this is driven by people. You talk about the election there - that's a people-power decision that drives a particular change. And a lot of what you talked about, Brett, I think, is really about how people come together and share, create, and build the mechanisms that are going to change it. And to your point about it being the next generation - you struggled there for a bit when we asked about your optimism for the future, but you got there and you had an answer, which was great. Because it's not to say that we've all given up, but we have to do more for that next generation, because they are going to beat us to the chase in terms of really taking this problem and solving it, and kind of shaming us as a generation - and I'd hate for that to be our legacy. Yeah. Look, I'm no different from anyone else who works in this space; we've all experienced a little climate anxiety at some point. So when you ask me about optimism, that flashes through my head. Yeah, totally. Look, we certainly don't joke about these things, but right now in Australia, people out in Forbes and Lismore and those areas are still dealing with massive issues. So look, thank you, Brett. We're at time. I really appreciate you sharing so much of your insight and your views, and making it clear that this is everyone's problem. We've all got a part to play - whether you work for a tech company or in some other sector, wherever it is, you've got a voice, you've got a role, and there's an opportunity for you to speak up and be part of something, to contribute in some way. Don't say problem - everyone's opportunity. I love it. Let's address the opportunity in front of us. Hey, Brett. All right. Well, thank you all for having me.
Nov 7, 2022 • 39min

Hacking for good: ideas and tips

In this episode Beth, Lee and Dan look at the mechanics of creating hackathons, based on our experiences on various projects around ethics and hacking for good. From CSIRO projects to the Imagine Academy, we look at what makes them a success and share tips on what works well. ________________________________________ TRANSCRIPT For this episode of The AI in Education Podcast Series: 5 Episode: 10 This transcript was auto-generated. If you spot any important errors, do feel free to email the podcast hosts for corrections. Welcome to the AI podcast. How are you, Lee? How are you, Beth? Very good, Dan. Very good, Dan. The AI podcast - we're still on that. We've got to change that, you know. It's coming. It's coming. He's on his way. How are you, Beth? I'm well, thank you, Dan. What have you been up to? Well, I'm enjoying a bit of sun finally here in Adelaide, which is very nice. But I have actually been doing some interesting work, way out of my comfort zone, which is participating in our global hackathon. Oh, cool. Tell us more. I would love to tell you more. So I sort of stumbled into this program, actually. I have a real passion for sustainability - I've done sustainability work in some previous jobs of mine, and I've also been volunteering to manage the sustainability community here at Microsoft in Australia. And I saw this hack advertised - Microsoft does a global hackathon, as you know, every year in October - looking for people to support a project with the CSIRO. So I put myself forward to participate as a team member and, long story short, I was appointed into the team not as a team member but as the team manager. Fantastic. It was an incredible experience that I'm still buzzing from, in terms of what I've learned and the people that I've managed to connect with, and hopefully we're providing support to the CSIRO that will have a longer-term impact. So are you allowed to talk about the type of project you were on? Yeah, I'd be very keen to talk about it, actually. I guess back in the day I had the opportunity, when I was working for a global healthcare company, to develop and manage a carbon reporting framework for the business at a global level. That is also a wonderful example of stumbling into a project, putting my hand up and accidentally ending up leading it at a global level. Wow. So you learn by doing, I think, is the adage there. As we perhaps all know now, carbon calculations are really important for us to better reduce our impact on the environment, and if you can't measure your carbon emissions, you can't reduce them. But that is just one part of the story, and this is what I've learned doing this biodiversity project with the CSIRO: carbon emissions are just one part of an organization's impact on the environment. That also includes things like the impact of their operations on land use and water use, air pollution, and the impact they have on animal populations. So looking at an organization's impact in a more holistic way is really important - looking at the impact of an organization on local biodiversity. What makes biodiversity really hard to report on is that it's hard to calculate, which is where the CSIRO comes in.
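For listeners who want a feel for what "hard to calculate" looks like in practice, here is a minimal, illustrative sketch of one common textbook biodiversity metric, the Shannon diversity index, over made-up species counts. The function name and the survey numbers below are invented for illustration; this is not the CSIRO's EBV methodology, just the simplest kind of calculation that sits underneath this sort of reporting.

```python
import math

def shannon_diversity(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)) over observed species.

    `counts` maps species name -> number of individuals observed at a site.
    Higher H' means the site is richer and more evenly balanced in species.
    """
    total = sum(counts.values())
    if total == 0:
        return 0.0
    h = 0.0
    for n in counts.values():
        if n > 0:
            p = n / total          # proportion of individuals in this species
            h -= p * math.log(p)
    return h

# Hypothetical survey data for one site, before and after a change in land use
site_before = {"eucalyptus": 120, "wattle": 80, "banksia": 40, "grass tree": 10}
site_after  = {"eucalyptus": 220, "wattle": 15, "banksia": 5,  "grass tree": 0}

print(f"H' before: {shannon_diversity(site_before):.2f}")  # even mix -> higher index
print(f"H' after:  {shannon_diversity(site_after):.2f}")   # dominated by one species -> lower index
```

Real-world biodiversity reporting layers many such variables (land use, water, species trends) across many sites and over time, which is exactly why it is so much harder than a single carbon number.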
So the CSIRO is an organization that we partner with across a range of different topics, and we have been providing some specific support to their scientists who are looking at this challenge of capturing biodiversity data, using it to demonstrate essential biodiversity variables (EBVs), and using that data to provide insight to organizations as to whether they're having a positive or a negative impact on biodiversity. And, you know, I think this is so important. As I started to learn about it, it was revealed to me that really all of humankind relies on biodiversity - almost every single industry, from agriculture to retail to mining and manufacturing, relies on biodiversity to exist - and if we don't take better care of our biodiversity we will all end up dead. So that's the hack you were doing - and that's a big one. Maybe we can explore that during the podcast today, but the thing about hacking is the origins of hacking and understanding where that comes from, and then also how we go about trying to answer those problems - that's a huge project, right, knowing that the CSIRO are involved. When we look at hacking generally, Lee, where does hacking come from? Do you know the origins of hacking? Do I have a story for you! Yes, I do know. And you're absolutely right, Dan - I don't want to diminish it, the sustainability impact of the work they did is amazing, but the thing that first hit me was: Beth's a hacker. Beth's now become a hacker. She's got that hat on and she's joining the tech nerd crowd, which is awesome. So look, it's funny, because we use this word and a lot of us would - in fact, go and do a Google search on hacking and 99% of your results will be the negative, this idea that hacking is bad. The one thing that everybody tends to know is Captain Crunch, the phone phreaking tale from the early '70s, around 1971. A little whistle appeared in a box of Captain Crunch cereal in the US, and that whistle happened to be at a particular pitch, 2600 hertz, which was exactly the same frequency used by the AT&T - or Bell, as they were back then - telephone system. So you could use this whistle to get a phone to give you a free phone call, and everyone generally thinks back to that as the first example of hacking. However - however - a little bit of internet research tells me that that's not the first time the word was used. Actually, in the late '50s, the word hacking was recorded, used by a bunch of people in - sorry, we're not getting any cooler here - the Tech Model Railroad Club.
A group of people who were railroad enthusiasts, where the hacking principle was literally used as a mechanism to cut through cables - you know, what we think of as hacking, to cut something - which is the first time the word hacking was used. However, we can go back even further. Wow. In the late 1800s - 1878, to be precise - a group of teenage boys (which, by the way, tells you straight away that teenagers haven't changed in the last hundred years; they're still up to as much no good as they can be) were hired by the Bell Telephone Company, AT&T as we know it, to basically do ethical hacking - to penetration test their network. They were hired to mess around with telephone calls, testing the service and the system in the very early days of telephones, and that's considered to be the very first instance of intentional manipulation of a technical system in the way that we might consider to be hacking today. Wow. There you go - potted history. That is the potted history of hacking in five minutes. That's super. It just reminded me of when I started to introduce concepts of computer science when I was in the UK and we were working on the computer science curriculum, and teachers were looking for things to do. The interesting one was that when you start creating projects from the get-go, say using a micro:bit or a Raspberry Pi, it's quite a high barrier to entry. So some of the people in the UK - professors like Miles Berry and people like that from the University of Roehampton - were doing toy hacking. The concept was that you take a toy into the school, or whatever event you're at, and you take it apart, but not to a level where it's completely in bits. You utilize the motors and things that are already there and then repurpose them to do different things. So you take apart a Furby and try to make the eyes work yourself, or add a battery to it and see if you can do that. So that reminded me - toy hacking. Well, and the interesting thing - and I think this gets us back to Beth, because I'd love to hear what it was like to be in a hackathon, which is the event you were in, Beth - is that at some point in time, and it seems to be around the late '90s or early 2000s, the word hackathon started to appear, and we went from hacking as a negative connotation - I mean, there's always been black hat and white hat hacking - to, I think it was late 1999, somebody devising this idea of a hackathon. The idea, in the early days, was that you'd be hacking over a marathon period. Hacking up until that point had been a kind of get-in-quick, do-something-and-get-out mechanism, and the idea was that we might get more out of hacking if we spent more time doing it. So we applied almost a project-based approach to hacking to create the hackathon. I was going to say, like agile. Yeah. So it turned into this. So look, Beth, back to you. The sustainability stuff's cool, but what was the hackathon actually like? What did you think it would be like? Was it what you thought, going in? Tell us all about it. Yeah, I'd love to. So I think I put my hand forward.
It was mostly out of interest in the subject area, as opposed to knowing a lot about hackathons and wanting to participate in the process. Now I would definitely put my hand up to participate in any hackathon, such was my love of the experience. I think I was a little bit nervous that we wouldn't get a lot achieved in a short period of time. Microsoft allocates a certain window - I think it's three days, but it's a 24-hour cycle because you have a global team, so you're allocated a period of time. But I was so impressed with the suite of materials in the kit bag that we were given, and the enthusiasm and willingness to help from colleagues I had never met before, people from all areas of the business. It was not only inspiring, it was actually really humbling to be part of a team of people who really cared about these topics and wanted to do something, in their way, to make a difference. So I loved that element. What I also loved was the diversity of the team that we were able to put together. We had people from the technical parts of our business, people who work in marketing communications; we had a professional storyteller from our Irish office who wanted to participate, and a design experience lead in the US who was able to add some value. We had so many different people representing different parts of our business, and none of us are environmental experts by any stretch of the imagination, but we were all able to add value in our own way, and that diverse experience and thought process was really, really valuable as well. One of the things that was a bit of a challenge was how we managed that 24-hour hack cycle. Some of our colleagues were trying to stay up late or get up early to keep a continuity of participation across the different hemispheres, and that worked quite well, but going forward I'd try to make that effort myself to do those late-night calls or early-morning calls. I think the structure of the program development was really effective, and it was part of a bigger experience: yes, we were a smaller team, but we were also participating with hackers right across the globe. I think we got nearly 50,000 people within Microsoft participating in the global hack this year. I was also struck by just how many topic areas the hacks seemed to cover - I would say 90% of those would have some kind of social impact to them as well. And the final point is that this is really led by example, led from the highest parts of our business. The project that we worked on was an executive challenge, and we had executive challenges that tackled areas such as accessibility, diversity and inclusion, sustainability, and international development - all kinds of different topics. To have our senior leaders really champion these topics and ask people from across the Microsoft world to volunteer their time to participate was just a brilliant experience. Aside from anything else, I've walked away having met new people within Microsoft, learned more about this topic area, and, I would hope, developed my own professional soft skills around managing projects in a finite time.
So I had a wonderful experience, and we really hope that through this contribution we're able to move this issue forward with the CSIRO. Time will tell. So, Beth, you kind of alluded to it in what you were saying - the size of it and the fact that it's global. For our listeners, this is part of the Microsoft global hackathon, which is a huge thing - as you said, around 50,000 people involved. And it's not just Microsoft, because you had external people in there. Maybe it would be helpful for our listeners to talk a bit about the Microsoft global hackathon: where does it come from, how does it work, and how might other people think about doing something like this? Beth, what do you think are some of the things you'd need to put in place to have this kind of program? Yeah, it's a really good question, Lee. So I'm a bit of a long-time person at Microsoft - a boomerang. I started working for Microsoft nearly 20 years ago, left after about three or four years, and then came back about five years ago. So I've seen different programs from time to time, and I was a little bit ignorant as to what the Garage is, but the Garage is the initiative behind the scenes. I know you know a bit more about the Garage than I certainly do, but I think what they have perfected, as an internal offering, is a set of materials and a tried and tested process through which people can work together to solve problems. That set of materials, I think, is invaluable. What I also observed was people who have volunteered to be hack mentors - people who've been through the process several times, so they can support new people into it. We also have a number of local hack judges - again, people who've been through the process and can identify the better projects or the most effective programs. I've always wondered a little bit about the effectiveness of hackathons, but I think it really is effective to concentrate people's attention and time over a set period. We all have other commitments, and getting people to set aside some time, dedicate it, and do it as a concentrated activity is actually a really valuable thing. So I'd suggest those are some of the elements that would make a program successful. And again, because I'm a Microsoft boomerang, this concept of growth mindset - I must admit, when I first heard it I thought it sounded like an episode of Oprah Winfrey, the kind of self-development thing I'd hear about - but it is true that people come to this process with a really open mind, ask lots of questions, try to learn as much as they can, and are willing to take risks, try things, and test things out with each other: that rapid iteration. And as a tool to encourage other people to adopt this innovative mindset, I wonder how effective it could be if we were able to extend some of these tools to our customers and partners. But Lee, I know you've been doing a bit of research on the Garage. What have you found out from that work? Yeah, look, it's a fascinating area, and I'm a bit like you, Beth.
I've been here a long time and I've boomeranged back in, so I've got a bit of history in this place. I think it's probably fair to say that a lot of people might categorize Microsoft as the old guard of technology - we've been here a long time, 40-plus years - and they see the stories from some of our fellow players in this cloud world as being the innovators, the people who started all this. The Garage is probably one of those interesting stories that differentiates that and calls it into question. For a long time Microsoft has actually had a whole series of these kinds of incubator, startup-style rooms and spaces, but up until the mid-2000s it was largely product-led - it was kind of, we're going to have an innovator do product X or product Y. Then in 2009 the Garage got started, and there's some really good commentary from those early days about what they meant when they started it, what it was about. I love this one: the idea that essentially ideas are cheap and not valuable - everyone's got millions of ideas - whereas prototyping and proving is far more valuable. The very first office, which was actually opened on the Redmond campus in 2009, simply had this principle on the door: all are welcome - doers, not talkers. It was this basic idea that you come in and you do; there's no place here for business conversation, this is about building and creating things. So I think that's where it got its starting point, and then there was a whole bunch of empowerment across the business for people to just go and get involved. In fact, it did actually start out largely as an Office thing - research around the Office platform - but of course through the late 2000s that all started to expand out, to the point where today the Garage has become 24 locations around the world: physical places where Microsofties, customers and externals get together to prototype and solve problems. And it works on a very common principle that's well known in design thinking - the ideate process. You bring that passion for an idea to the room, you build a vision around it, you create a scenario for it. Have you ever seen the - what do you call it - the hourglass approach? You start off really big and wide with lots of ideas and things on the table, you narrow it down to a scenario and an idea and a concept in the middle, and then you start widening out again to do the build - that's the second half of the thing. It's quite simple; it works on that principle, but it just seems to work. I'm interested in your feedback, Beth, because you did it virtually: the Garage is about a place, about going to a space and having the tools, the mechanisms, the whiteboards, the prototyping, the soldering irons and bits and pieces to do things. How did it work in a virtual way? How do you collaborate and ideate virtually? Oh, it's a little bit hard to compare it to a physical experience, having not really participated in a physical hackathon - other than the fact that I would imagine the pizza was lacking.
I kind of associate hackathons with beer and pizza. But I still had a great experience; I didn't feel like it lacked anything that a physical interaction would bring. If anything, I felt like it didn't matter - we could all be effective in whichever environment we chose to be in at the time. And I guess that's the nature of hybrid work now, isn't it? Experiences are different, but they're not necessarily better or worse for being virtual or face to face. So yeah, I would definitely think that it may be even more inclusive for being a virtual event - especially if you're having to do it late at night or early in the morning, you don't want to be... That's true, isn't it? The other thing I'd like to ask both of you is: sometimes these are too techy, right? When we do Microsoft hackathons, or when our customers or the people listening to this think about hackathons - I did one piece of work with the University of Technology Sydney; they've got a faculty of transdisciplinary learning, or something like that, which is amazing, and they bring people in from elsewhere: a lawyer, a marketing person, somebody from all these different faculties. Because - and this is my problem, in my mind - one, you need people to come in and think about some of these problems and really ideate properly, rather than getting the lens of technology only and jumping to conclusions really quickly. But then you've also got the issue, which we have when we do Protégé, our kind of university entrance thing, that if people aren't in the domain of technology they struggle to even think about what you can do. So you almost need the best of both worlds. What do you think about that, Lee? There are the people who know and the people who don't know, and it's about trying to mingle everybody together, I suppose. Well, look, you're right, because I think that's almost the history of the terminology. When Beth said beer and pizza - there's a sort of, I'll use the word, and I don't mean it in a totally negative way, but there's a bit of a misogynistic, tech-bro kind of thinking around that, that it's about men doing beer and pizza. But Beth, as you said, it was such a multidisciplinary group, and you're not a technician yourself - I know you've said that yourself - but you come into this and you can contribute, because what you figure out is that actually taking ideas and forming them into visions or strategy that can actually be executed takes more than the technology. You had a storyteller in the room, you've got project managers, and people who simply advise and direct you in a particular way. It actually requires all sorts. Whereas historically - and I've got a brain teaser question for people later - historically, that idea of the startup has been about the technology, in our industry at least. Yeah.
And to take that a little bit further: when we're thinking about things we've done in schools, for example, with younger kids, they might not have an idea of the domain or the social problems. I remember when I used to teach - you'd say, right, we're going to solve a problem here with Excel or whatever, and they'd struggle to come up with social problems, because they haven't had exposure to aspects of business, they might not understand personal finance or whatever. So when we've done the AI for Good hackathons, which have now come out into Imagine Cup Junior - and we've done multiple sessions on that previously in the podcast - the key to it has been to make sure we've got a curriculum that stands up to support people going through it. A hackathon isn't just going into a room and saying, okay, well, let's ideate on it. It's about coming up with that problem, like you said, but also giving people techniques and tools. With the AI for Good work we do with Education Changemakers and the like, it's very much: let's think about the problem, but let's also think about the ethical implications of it, let's think about the technologies that are involved, let's talk about those technologies, let's understand what GPS, Bluetooth, facial recognition and cognitive services are - because without understanding those, you struggle to come up with ideas or think outside the box a little bit. So I think that structure matters when we do things with younger kids in K-12, and even in universities to a certain extent - we've got to scaffold a hackathon. It's not just sitting around with a pizza; it's about saying, well, you have to invest time in learning during that process, and having that constant feedback and discussion: that might work, but what about this, or have we thought about this? That curriculum part is quite important. Dan, you mentioned the - well, you mentioned AI for Good, and I nearly gave it away there - but AI for Good led its way to something really important. And to your point about it not always being about the technology: that big body of work, the Imagine Cup, which turned into Imagine Cup Junior and which you know very well - talk about that, because that wasn't even about building things, it was just about ideating. That's correct. Yeah, that's right. I went in and ran a session with a school - last year, actually, or was it this year? Where's the time going? But yeah, exactly, it was about them ideating. They didn't even have to touch computers. It was all about ideating and coming up with those ideas, but also thinking about the implications of those ideas and what impact they would have on society. But you'd still need to know about the technology.
So there was still an element of that in Cup Junior. And then we've got the full Imagine Cup, which is for university students and is very much around ideating and solutioneering some of these projects, going through the design thinking process like you mentioned earlier on, Lee. But you still need that curriculum element to understand what technologies are available - whether that's VR, AR, cloud, AI, all of these tools and technologies - because otherwise you can't come up with a solution. My final little anecdote - I've mentioned this on the podcast multiple times - is from when I used to teach kids mobile phone design. If you just say to kids, let's develop a mobile phone app, the chances are, if you don't give them any scaffolding, they come up with a game. When you give them scaffolding and ask what services are available in a mobile phone - text messaging, multimedia messaging, internet, GPS, location sensors, accelerometers - suddenly they realize, and they come up with better ideas and solutions for the project. So yeah, I love Imagine Cup Junior. That will be opening up again next academic year in Australia, and around January or February for those people in the Northern Hemisphere as well. So that's an exciting project to run with. So, if I think about rounding this up - not to wrap things up, but to round it up into some "how do you move forward" - because, as Beth talked about, it's an amazing opportunity to learn from, it doesn't require you to be a technician, and anybody can do it. But to the point you made there, Dan, you can't just go, okay, let's have a hackathon and we'll all just go build something, because you'll end up with the Homer Simpson car, for example, or, as you say, the mobile phone game instead of the best car. It's a great car. So I think one of the learnings we're probably all agreeing on is that you need to give enough guidance to make sure there's a direction, but not so much guidance that you tell people what to do. That's probably one of the key things. Anything else you think would be good learnings, between the two of you, for anyone thinking about setting up or running a hackathon or an event of their own? I'll do the K-12 one first while Beth's thinking. It's easy for K-12, because we've got an Australian kit, a New Zealand kit and a global kit that you can download from our Imagine Cup Junior website, which has lesson plans and structure to it. So that gives you everything you need if you're teaching in K-12, or even at university level. But that, again, is somewhat limited - you're controlling it a bit more. I'm keen to learn from you, Beth, and what you've experienced. Well, you know, like any good problem, you've got to start with the problem first. So this isn't a hack to have technology find a solution or an application to be used, but to start with the problem. One thing that I was amazed to find out a couple of weeks ago is that we also have a global hackathon for our partner community around the Sustainable Development Goals - the SDGs - which run up to 2030.
So this is something - again, Microsoft is the organization that keeps giving in terms of all the different programs and activities that we have underway - but you cannot get more complex or ambitious goals than the Sustainable Development Goals. So they are starting with that set of goals and saying: demonstrate how technology can help the world move towards achieving them. It's not technology for its own sake, but technology that can be used to solve a particular problem. So for me, I think that's where you start: you take the challenge, and you need to be solving a particular problem, because otherwise it's not effective. But definitely, I think the fact that you can use this as an inclusive experience that welcomes people with lots of different experience levels and lots of different perspectives - the lived experience of people who might have had this challenge before, I think, is really important. And it's a stealth mission to actually teach people about technology as well. You walk away having learned a bunch of new technology skills, as well as problem-solving skills, but there's not a barrier to entry imposed by a lack of tech skills, which is the thing that has made it as inclusive as it has been for me. That's a really good point. I'm trying to capture all this in my head, and there's one last thing I wanted to add to it. We've said it takes all sorts and all different types of backgrounds - it's not just about tech people - and that it requires some direction, but not so much direction that you force an outcome or create a predetermined outcome. And the last thing I was thinking about, Beth, as you were talking, is that it's not a tech industry thing. It seems to be quite attached to it - we always think it's a tech thing - but you can do financial services hacks, you could do aged care hacks, you could do education hacks. Everything has this idea that with enough people in a room, or in a space, thinking through the problems and being free to bounce ideas around, you create that kind of opportunity. So it's also about trying to do something, right? In my role I'm working with CIOs all the time, and I think there's definitely a sea change now in how people approach large projects: implementing solutions to fit the problem, with a big push to get something out of it quickly. The agile development methodologies have obviously taken off quite a bit over the last couple of years, but people now really want to - a bit like we do at Microsoft with lots of our products - put things out and then see how they go, rather than necessarily testing everything first; getting the solution to market quickly is really, really important. Yeah, and what's interesting in what you said there, Dan, is the speed of it.
I mean, one of the things that a hack mentality, or a growth mindset mentality, or an agile mentality - however you want to frame and word it - creates for you is speed of innovation. It actually lets better ideas bubble up, and you have a sort of self-norming process where, as a group, the best idea tends to bubble up to the top. So I think there's a lot to be said for it, and it's not something to be frightened or scared of. I think people get scared of this idea that bad ideas will persist, or - what's the one, I think it was in the UK, the naming of a boat, where they ended up wanting to call it Boaty McBoatface? You know, when you give consensus to everybody, you get silly answers. That's not - it's a great name for a boat, we should have had it. But anyway, I wonder if it's too late to rename the bureau through that kind of collective process - very topical. What was your brain teaser from earlier on, Lee, before we end? Ah, well, yes, I forgot to ask you something, but I have a question for you. What do the following five companies have in common: Google, Microsoft, Apple, Amazon, and Dell? I'll give you a minute to think about it, but the clue is in the entire episode. Wow. Anyone want to take a stab? Do we develop technology around...? Oh, no. They all do hackathons, right? We're all largely tech companies. That is true, unfortunately, but that's about the extent of my knowledge. Well, it's the reason why the Garage is called the Garage: all five of those companies were started in garages. Ah, right, that's amazing. I actually do know that Microsoft Australia operated out of somebody's garage for the first few years as well. So even outside the US, we still kind of maintained that garage experience. Isn't that cool? Yeah. I'm sure there are actually non-tech companies too; I just happen to know the tech ones because, you know, tech nerd. And if you ever get the chance - I've done it - I was over in the US and I wanted to go to the house in Cupertino where Apple was started. You can drive past it, but it's actually someone's private house now, and of course there are police on that road, and if you stop, the police will stop you. So we drove past as slowly as we could, my wife was driving, and I leaned out of the car with my camera - my phone, actually - and got a couple of pictures. So I do have a picture of it. Can I ask one question? I know we're coming up to time now, Lee, but you mentioned in one of the last episodes that you'd got a Surface Duo 2, and you brought it up because you were going on a trip. How did you get on with that device in the end? Did you love it? Oh, I do have a Surface Duo 2, for those of you who don't know. Again, this is a really interesting one - not a hack outcome, but a Microsoft Android device. One of those things that probably came out of that kind of idea-thinking where we went, yeah, hey, why not? Look, it's a great device. Amazing for traveling - small, portable, a great experience, particularly for Teams calls and that kind of thing. It's big enough for you to be able to actually use the keyboard. For me personally, I'm still getting my head around the jarring experience of moving from an Apple ecosystem to an Android ecosystem, but as a device, 100% - perfect for traveling. Cool. That's good. A good note to end on. Yeah.
Well, thanks, everybody. It was great to learn about all the hackathons today, and we've put some notes in the show notes for everybody to read. Thanks again, Beth, for your insights - I love that CSIRO hack, it sounds phenomenal, and hopefully we'll see more of the fruits of that labor coming out soon. And thanks again, Lee, as always, for sharing your thoughts on the Garage and the history of hacking. Yep, pleasure. Brilliant. Thanks, guys. Thanks, guys.
