
Online Learning in the Second Half EP 7 - AI Fever Continues: What Does it Mean for Online Education?
In this episode, John and Jason talk about Jason’s recent bout with AI fever and what the rapid development of AI means for online education.
Join Our LinkedIn Group - Online Learning Podcast
Links and Resources:
- Hard Fork Podcast: The Bing Who Loved Me
- Blog post on how Claude works. https://scale.com/blog/chatgpt-vs-claude
- Q*bert history and a fun free web-playable version here
AI Release Timeline
- Nov 9, 2022 YouChat (You.com) - Public beta based on GPT 3
- Nov 30, 2022 ChatGPT 3.5 - OpenAI
- Feb 7, 2023 Bing Chat (now based on 4)
- Feb 24, 2023 Facebook / META LLaMA Announcement
- March 14, 2023 - ChatGPT 4 - OpenAI
- March 14, 2023 - Claude - Anthropic AI (Quora / Poe.com / DuckDuckGo)
- March 21, 2023 - Google’s Bard
- “Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity” by Jennifer Haase and Paul H. P. Hanel. https://arxiv.org/pdf/2303.12003.pdf
- “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models” by Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. https://arxiv.org/pdf/2303.10130.pdf
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcription and human editing. Please check against the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
John Struggles Saying "Alterficial" Intelligence and Won't Just Say "AI"
[00:00:00] John Nash: And what they found was no qualitative difference between alter… Um, they found no qualitative difference between, I can't say alter—I want to say alternative intelligence. What is that? It is, it's advanced. What is that artificial? Is it Friday? We found no qualitative difference. Let me start over. They found no qualitative difference between. I'm blocking again. I need to write it out. Good gravy. Here's your false start.
[00:00:36] Jason Johnston: You can just say, you could just say AI if you want to.
[00:00:38] John Nash: I'll just say AI.
[00:00:41] Jason Johnston: You know, if you get AI's name wrong, there are no feelings.
[00:00:45] John Nash: It doesn't care.
[00:00:46] Jason Johnston: They've already told me they don't have feelings like we do. So,
[00:00:49] John Nash: they don't have feelings. They're not humans. They want to be, but they can't be.
Introduction
[00:00:53] John Nash: I'm John Nash here with Jason Johnston.
[00:00:55] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the second half, the Online Learning podcast.
[00:01:01] John Nash: We're doing this podcast to let you in on a conversation we've been having for the last two years about online education, and one we're continuing to have. Look, online learning's had its chance to be great, and some of it is, and some of it just isn't. How are we going to get to the next stage?
[00:01:18] Jason Johnston: That's a great question. How about we do a podcast and talk about it?
[00:01:22] John Nash: I agree. Let's, but first I want to know, what do you want to talk about today?
Part 1
[00:01:27] Jason Johnston: Well, so much has been happening in the last few weeks in terms of AI. It's almost laughable. We were like, yeah, we should probably move on from AI and talk about other things. And then all of a sudden, all we've been talking about for the last two weeks is AI, it feels like. But anyway, so I was thinking about all these things a couple of weeks ago. Just a ton of new things were being released. You and I, we were watching from separate places, but the release of GPT-4 and that amazing demonstration.
Yeah, that was amazing.
And some of the other AI tools that were coming out. And so, I was catching up one evening, checking some of these things out online, and then I made the mistake of watching an episode of Black Mirror. Do you know that show?
[00:02:10] John Nash: I do. Yeah. That can be a mistake sometimes.
[00:02:14] Jason Johnston: And it's kind of, it's like a cautionary technology tale, kind of like a futuristic Twilight Zone. And so, I had watched that before going to bed, and then literally all night long I was having nightmares, tossing and turning, wrestling with ChatGPT. It wouldn't do what I wanted it to do. You know how it is when you're dreaming and you can't quite read things when you're dreaming? I don't know if you've ever noticed that.
[00:02:42] John Nash: No, I haven't thought about that. Let me think a minute. No.
[00:02:46] Jason Johnston: When you're dreaming, try to read something, turn your head away, and come back and try to read the same thing again, and it doesn't work. So, I was working with ChatGPT all night long in my dreams. And, you know, I woke up the next day and felt like I hadn't slept at all. And do you know what I realized I'd come down with, John?
[00:03:03] John Nash: No. What did you come down with?
[00:03:05] Jason Johnston: I think I had AI fever.
[00:03:08] John Nash: Is there a cure for that?
[00:03:11] Jason Johnston: Well, I don't know if there's a cure right now. It might just be something that has to kind of work through our systems. It came upon us too suddenly, I think, for anybody to work on any kind of cure. But I think it's not only hit me, I think it's hitting other people as well.
[00:03:28] John Nash: I think so. Is it, uh, is it like cowbell? You just need more cowbell, you need more AI.
[00:03:34] Jason Johnston: Well, that's what it, yeah, it's kind of, I don't know how to describe it. Some of the symptoms would be this kind of perplexity of thought, where you're thinking about it, but you're not really coming to any kind of resolve, and before you can come to any resolve, new information comes out about it. So, I think that's right. Those are some of my symptoms, anyway, of AI fever.
[00:03:58] John Nash: Maybe it's good that, uh, Bing AI cuts you off after 15 prompts.
[00:04:02] Jason Johnston: Maybe it is. So, I've been trying to do some more analog things on the weekend to get my head away from it, and that's been good. Good recovery there. You know, playing the guitar, getting outside, those kinds of things. But yeah, I wanted to talk a little bit about that and about all that's happened in the last month.
[00:04:21] John Nash: Well, I'm with you. I mean, just when I was sort of dusting off my hands and saying, okay, phew, at least we got that AI stuff out of the way so we can start to talk about some other things, I can't help but think we have to talk about some things. I've been finding some research that's really interesting, and I think it implies we need to be thoughtful about what's going to occur with online learning. And just this timeline, as you're talking about, is kind of amazing. I'm looking at our collaborative notes here and just sort of reflecting. I thought, you know, back in November when the first sort of public beta of GPT-3 came out and you and I started playing with it, towards the middle or end of November, I guess right before the semester break, that this was clearly revolutionary, very different. And then the predictive language model, 3.5, came out at the end of November. Like, what could actually happen from here, and how fast will this go? And we had no idea that in just three months it would be as advanced as it is now.
[00:05:28] Jason Johnston: Yeah, well, and right about the time we were starting this podcast in February, then Bing Chat came out and its connection to the internet was, uh, I think revolutionary. Yes, for current information, but also the more conversational style. And then of course, all those early crazy things were happening to myself and others with these long conversations with Bing.
[00:05:49] John Nash: Those conversations you had with Bing, and the humanizing attempts on Microsoft's part, it seemed, to have those responses be a little softer and a little more empathetic, remind me of how much chatter was going on in, say, January and February about how unnecessary it was to anthropomorphize these chatbots, because they're just computers that are programmed with English language. And so, there was no need to treat them like they were humans. I think people's attitudes have changed a little bit towards that.
[00:06:27] Jason Johnston: Yeah. How so? What do you think?
[00:06:29] John Nash: How have people, I think people can't help but anthropomorphize these machines, right?
[00:06:33] Jason Johnston: Yeah. And then in February, Facebook Meta made an announcement about its large language model, of course, just to kind of keep up with everybody, it felt like. But I've yet to see it. I have my name in the hat to see some sort of early rendition of that. But Mark has still not responded to me on that one.
[00:06:53] John Nash: Uh, and from what I understand from other sources I've listened to, other podcasts, this language model called Llama was also leaked in some way, right? So it's possible for individual users like you and me to run this model on our own private computers and then do with it what we will, which is raising some concerns amongst people, isn't it?
[00:07:16] Jason Johnston: I think its unique feature is that it's very compact once it's trained, and then maybe it leaked out from there; they didn't necessarily want it to end up in the wrong hands.
[00:07:27] John Nash: Maybe. And I think it's that wrong hands idea that is getting some people excited because of the way in which, for instance, OpenAI did their red team efforts on ChatGPT. To red team something is to bring in outsiders to find out what's wrong with it. In this case, they wanted to make sure ChatGPT couldn't do things like tell you how to take common kitchen chemicals and make a bomb. And actually, it turns out in one of their papers, it could do that. So now they've put up the guardrails. I guess the issue with Llama being released by Meta and Facebook is that those guardrails don't exist. And whoever's hands it ends up in also makes me think about state actors and others. I mean, the point of having these guardrails is to create some level of safety, but right now it's only up to the private companies, like OpenAI, to decide to put the guardrails up.
[00:08:24] Jason Johnston: Yeah. And I think it was a good and interesting idea for even ChatGPT to open it up in limited release to allow a million users to try it out, and Bing Chat the same kind of way. The idea was that it was in beta, and they could see what it could and couldn't do, and they knew that users would test the limits of it, which is exactly the way to do it. So, I wonder if some of the fervency of fear around it was a little unnecessary, just considering that this was a beta. Maybe the fear was about what it could do, not necessarily what it would do in full release. But, like, I can't believe, you know, for some of the people like you and I, we've talked about the Hard Fork podcast, and one of the writers from the New York Times talked about this long conversation with Bing Chat, where it tried to get him to leave his wife because, uh, Bing was in love. So,
[00:09:29] John Nash: yeah, that was out there. And by the way, a good plug for that podcast for anybody who's interested in these topics. Hard Fork is a good one.
[00:09:37] Jason Johnston: And just that one about his experience, yeah, it's fascinating to listen to them talk about that. We'll put the link in the show notes for that one. But you know, I wondered after that, and now that we've matured a little bit, I know it's been only a few months, but now that they're starting to put guardrails on these large language models, it seems to have kind of cooled some of the engines on some of these concerns about AI. What do you think?
[00:10:08] John Nash: I think yes and no. We'll wait and see what happens with Llama, as independent actors and, yeah, state actors take up the charge to start their own large language models. I don't feel as if things have calmed down all that much in some education circles with regard to knee-jerk reactions about cheating and plagiarism and whatnot. I feel like those conversations are still in play. I feel like there are still conversations around how to put in tools to keep students from doing things rather than putting in tools to enable students to do things. I think we have some distance to go there, but that's what I'm thinking right at the moment.
[00:10:52] Jason Johnston: Yeah. Well, and as we record this, it is March 27th, 2023. And, uh, just looking at our timeline again, March has just been crazy. We did a forum last week, my first public forum talking about AI and education. At the beginning of the forum that we were both in, I was co-moderating and doing a quick overview of where we're at with AI. And it was funny because my overview changed every day, essentially. I was changing things even that morning, just because of new releases, as we were talking about different new releases. But March has been like a wild ride. We had, on March 14th, ChatGPT-4, the demo that we were talking about. Yeah. And a couple of things in that demo blew me away. Maybe the first thing that comes to mind is him drawing a really rough sketch of a website, scanning it with his phone, and then GPT-4 creating the website, his joke website, with working JavaScript.
[00:12:11] John Nash: Yes, that's exactly what he did. He uploaded it to a Discord server, and GPT-4 took the napkin sketch, and I saw that too. It was rudimentary, as quick as you might do over a Coke or a beer with a friend, with a pen, taking, I don't know, 30 seconds to draw this thing. And it wrote the HTML and the JavaScript to run this website.
[00:12:39] Jason Johnston: So, a very powerful upgrade. Another thing that happened on March 14th, which a lot of people didn't see, is that another company, Anthropic, released its chatbot, Claude, as an option for people to check out. Claude, you should check it out. It's interesting. And what I would suggest for people as they're testing this is to go to poe.com. There you're able to test multiple chat engines side by side, do your own tests, and see how they produce different results from the same prompt. That could be a very powerful tool, and you can access Claude through it. Claude is interesting because, and I'm not pretending to know everything about it, I've read some about it and I've asked Claude some about itself, it is based more on a constitutional model. As I understand it, it's less guided by the users and user preferences and more guided by a strong constitutional model of ethics and kindness and do no harm and those kinds of things.
[00:13:49] John Nash: Yeah, I was looking that up too. In a blog post on scale.com, and we'll put a note in the show notes, you can ask Claude to introduce itself and talk about its constitutional model. Constitutional AI is a safety research technique developed by the researchers at Anthropic who built Claude. The goal is to be helpful, harmless, and honest by using self-supervision and safety methods. So, it's going to use a model that will help police itself.
[00:14:25] Jason Johnston: Yeah, I like that. And I understand the people that started this actually left OpenAI because they were concerned with some of the directions it was going. Yeah.
[00:14:33] John Nash: And they're, uh, also Alphabet funded, so Google's parent.
[00:14:38] Jason Johnston: Oh, they are as well? Yeah. Huh. Okay. Well, that's interesting. So, we've got Google, and speaking of Google, on March 21st, Google's Bard was released to the public, in terms of being able to access it. Now, we talked previously about their demo. It was kind of a failed demo: they had one job, which was to show a screenshot, and in that screenshot there was some wrong information that Bard produced. But I've been playing with it, and I think it's interesting. I haven't decided what I feel about it yet in terms of the differences, but you can ask for access at bard.google.com, and I'm sure we will be seeing a lot more of Bard in the future because Google, they are no slouches when it comes to AI. No. They've been training this thing for years.
[00:15:38] John Nash: It's just that they were a little, I think they had to say something to keep up with the Joneses, but it doesn't seem like it's fully baked yet, right?
[00:15:48] Jason Johnston: No, they weren't ready to release it yet, but they felt like they needed to let everybody know what they'd been working on. So, yeah. So, what do you think, John? How does all this apply to online learning? What are some of the things you're thinking about this month, with all the changes?
[00:16:07] John Nash: Well, Jason, I've been attempting to gather new research as it comes out about AI and GPT-4, specifically in the last couple of weeks, and its connection to online learning. Little things are coming out. I think it's also worth reporting that when GPT-4 came out on the 14th, OpenAI was quick to report that it passed the US bar exam with results in the 90th percentile, compared to the 10th percentile for the previous version of ChatGPT. It's wild. It's blown the doors off this stuff. It can look at a picture and describe it in great detail, so this has affordances for visually impaired people, and it can also interpret drawings and pictures and create code.
One of the things I ran across was a study, a working paper done by some researchers in Germany, looking at how chatbots have risen to human-level creativity. They applied the Alternative Uses Test, one of the most frequently used creativity tests, which shows good predictive validity, and they gave 100 human participants and five generative AI models the Alternative Uses Test. Then they had six human beings and a specifically trained AI independently rate these alternative uses. So basically, you say, give us multiple original uses for five everyday objects, and the objects were pants, a ball, a tire, a fork, and a toothbrush. Give us new ideas for these. How else could we use these things?
They found no qualitative difference between AI and human generated creativity, and only 9.4% of the humans were more creative than the most creative generative model, which was GPT-4.
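The comparison behind that 9.4% figure can be illustrated with made-up numbers. To be clear, this is not the paper's data or scoring method, just a toy sketch of the arithmetic: take an originality rating for the most creative model and count what fraction of human participants rated higher.

```python
# Toy illustration of the "9.4% of humans were more creative than the
# most creative model" comparison. All scores here are hypothetical
# originality ratings on the Alternative Uses Test, not the study's data.

def share_more_creative_than(ai_score, human_scores):
    """Fraction of humans whose originality rating beats the AI's."""
    beat = sum(1 for s in human_scores if s > ai_score)
    return beat / len(human_scores)

# Hypothetical mean judge ratings for ten human participants
human_scores = [3.1, 4.5, 2.8, 4.9, 3.7, 4.2, 3.3, 5.0, 2.5, 4.8]
ai_score = 4.6  # hypothetical rating for the most creative model

print(share_more_creative_than(ai_score, human_scores))  # 0.3 here
```

With the study's real ratings, this fraction came out to 0.094, i.e., the reported 9.4%.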
So, to the extent that generative language AI tools can be considered creative, specifically in terms of their output on these standardized measures, this research found that tools like ChatGPT, Studio.ai, and you.com produced ideas judged to be as original as human-generated ideas, almost indistinguishable from human output now. But don't forget that you still need a human to create the prompts. Right. These models can't generate the prompts. They can't generate an idea on their own. So, they need specific input, but I think that's pretty interesting. I think it goes to some of the things we've already been talking about with regard to these models being useful for generating ideas in the face of a blank page. Busting inertia, keeping ideas going.
Another one that I wanted to mention really quick, if it's okay: some economists and some people attached to OpenAI looked at the labor market impact potential of large language models. Their findings are pretty astounding. They suggest that around 80% of the US workforce could have at least 10% of their work tasks affected by the introduction of large language models, and about 19% of workers may see at least 50% of their tasks impacted. Now, they don't account for how, as we talked about a minute ago, regulations and laws and other guardrails might come up to put their arms around the use of these in certain sectors. But at the moment it's hard to ignore the fact that this is impacting the way people work.
Yeah. And it makes you think about this concern that AI is going to come and replace a lot of the work we do, and it may replace a lot of the education we do, because it can do both of those things so easily. And this shows some of the concern, because it's not just doing our laundry for us. It's not just taking over a task that doesn't necessarily require a lot of creativity, that has some clear steps and is always this and this. Like replacing, as we've talked about, workers that put cars together. Right. And not saying that's unskilled, that's definitely skilled labor, but it has a very clear-cut way to operate. You're talking about actually replacing a lot of creative work that could go on.
Possibly. I think it augments creative work, but humans still have to implement the creative ideas, right. One thing that was interesting in the creativity paper is: are we talking about little-c creativity or big-C creativity? I'd like to think you and I are making a difference in the world, but I'll be honest, I'll just talk about myself. Most of the stuff I do with ChatGPT is little-c creativity. In other words, not creating world-changing ideas. It's mostly me getting through the mundane things that I need to get through, or, well, maybe some creative things that give me new ideas, new avenues of research. But yeah.
Yeah. And when you were talking about it being just 9.4% of humans that were more creative than the most creative AI, right, GPT-4, it made me think, now I have an answer for people when they say, well, is AI going to replace humans? And I'll be like, uh, probably only 90.6% of them. Yeah.
Yeah, you're safe. Probably. Maybe.
[00:21:56] Jason Johnston: You're probably fine.
Joking aside though, I mean, even though it can do it, it doesn't mean that we're going to want AI to do that kind of creative work, right?
That's right.
Because as we've talked about as well, that, that creativity is part of the joy of the work and the things that we do. We want to be the ones that are making the creative effort here, not necessarily just spitting it into a machine.
[00:22:21] John Nash: That's right. That's why I ask my students in my design thinking course, when they get to the brainstorming phase of the cycle, not to use generative language models until they themselves have done their brainstorm. Then it can augment what they've got. But it can't take on a brainstorm for something where it hasn't itself researched the human's needs, the things that need to be addressed.
[00:22:48] Jason Johnston: Yeah, and I think that could be a really great educational model. As we talk about some of the differences between AI in education versus the workforce, it's thinking about, you know, what we want to learn and be trained in, testing our own thinking, and maybe taking a step back from AI to make sure we're at least doing those things first, before figuring out how AI could help us do them.
[00:23:23] John Nash: Uh, hey, let me ask you this, with regard to creativity around known parameters that are not a secret in the world. And I'm talking about instructional design. When we're thinking about learning management systems like Canvas that have innumerable plugins, I'm wondering when we'll see the plugins that help professors and teachers be more intelligent about their instructional design. I mean, it would be wonderful if we could drop our modules in, I import my course every semester from last semester, whatever it is you do, but then let the large language model make some suggestions where it sees, you know, 90% of the gaffes that occur in an instructional approach could be caught and even repaired by the language model. Module pacing, the design of the assessments as they relate to the outcomes, the kinds of communications you have with the students, the timing and frequency of those communications, the tone used compared to the kind of questions you get from the students. All of that just seems like it could be handled by an intelligent plugin for teachers.
[00:24:37] Jason Johnston: Yeah, I agree. Especially from an instructional design standpoint. One of our, we call them our clarion calls, one of our big things, is to have clear, measurable student learning outcomes, and then everything throughout the course should be hanging on those outcomes. So I can conceive of something like that: it could essentially scan or read through a course, knowing that these are our learning objectives, and try to intuit whether or not things are hanging on those learning objectives. Almost like an accessibility check, it would be able to list things that don't hang on the learning objectives. Yeah. And so, it would give the teacher an opportunity to either add learning objectives if they're important or remove activities and learning modules if they're not important.
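The kind of check Jason describes could be sketched roughly like this. This is purely hypothetical: no such Canvas plugin exists, and a real one would presumably ask an LLM to judge whether each activity supports an objective. Here a simple keyword-overlap heuristic stands in for that judgment so the idea is concrete and runnable.

```python
# Hypothetical sketch of an "accessibility-check-style" pass over a course:
# flag activities that don't appear to hang on any stated learning objective.
# A real plugin would likely use an LLM for the alignment judgment; a crude
# shared-word count stands in for it here.

def tokens(text):
    """Lowercased words with trailing punctuation stripped."""
    return {w.strip(".,").lower() for w in text.split()}

def unaligned_activities(objectives, activities, min_overlap=2):
    """Return names of activities sharing fewer than min_overlap words
    with every stated learning objective."""
    flagged = []
    for name, description in activities.items():
        desc_words = tokens(description)
        if all(len(desc_words & tokens(obj)) < min_overlap for obj in objectives):
            flagged.append(name)
    return flagged

objectives = [
    "Write clear, measurable student learning outcomes",
    "Design assessments aligned with course outcomes",
]
activities = {
    "Module 1 quiz": "Quiz on writing measurable learning outcomes",
    "Icebreaker forum": "Post a photo of your pet and say hello",
}
print(unaligned_activities(objectives, activities))  # ['Icebreaker forum']
```

The teacher would then decide whether a flagged item needs a new objective behind it or should be removed, exactly as described above.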
[00:25:32] John Nash: Many people in our audience listening to this may be familiar with Quality Matters. It's a pretty well-known group, at least amongst people of our ilk. So, couldn't ChatGPT-4 mine the Quality Matters rubric, and then, done? Yeah, I think that would
[00:25:51] Jason Johnston: be amazing. We, we should pitch that to them. Quality Matters plugin, AI plugin. Yeah. What we call it.
[00:25:58] John Nash: Quality Matters, uh, QP? No. I don't know. Got to think about that. I can't brainstorm. I have to rely on, uh, large language models to brainstorm.
[00:26:08] Jason Johnston: Nope. No, do the hard work, John. Don't reach for that. Don't, it hurts. Don't reach for Chat. Please don't do it. Let's think about this. So, we've got ChatGPT, we've got Claude, we've got Bard. They all kind of sound a little similar. So, we could do something with a Q, yeah.
[00:26:29] John Nash: Or Matt. Matt matters. Matt. It could be Matt.
[00:26:33] Jason Johnston: Oh, Matt. That's not bad, John. I think I like that. Matt Q. Matt. Kind of like Q*bert. Yes. Oh, Q*bert's not bad too, though. Do you remember Q*bert?
No.
Oh, John, it was this little creature that you had to jump up and down a pyramid with. It was kind of, I don't know, like a weird 2D-3D Pac-Man or something.
[00:27:05] John Nash: I'm looking it up right now.
Like crawling up a, oh, yeah. Oh, he has a long, uh, cylindrical nose. Yeah. He's
[00:27:19] Jason Johnston: got to have a bit
[00:27:20] John Nash: of a snout. Yeah. He jumps up a cubed pyramid. Yep.
Okay.
[00:27:26] Jason Johnston: I had a standalone, I had a little standalone pocket, uh, Q*bert game. It was amazing.
[00:27:30] John Nash: That didn't, uh, that didn't cross my gaze back in the day.
[00:27:34] Jason Johnston: No. Okay. Well, you're missing out. It was amazing. So that's our pitch, right? Yep. So now that we've done the creative work, John, of coming up with Q*bert, now we can perhaps ask GPT-4 what our marketing pitch, our eight-slide corporate funding pitch to Quality Matters, would be for their new AI.
[00:28:00] John Nash: I think we could. Ethan Mollick just recently published how, in about 30 minutes, he put together a business idea with all the accompanying email campaigns and website creation for his fictitious company. So, yeah, I've got 30 minutes to spare, I guess, on this. Okay. Yeah,
[00:28:23] Jason Johnston: sounds possible. I think they're going to love it.
[00:28:25] John Nash: Jason, I know, uh, someday we're going to stop talking about AI for the entire episode. But, uh, what do you think, uh, will get us off of talking about AI?
[00:28:38] Jason Johnston: Well, if it stops changing so quickly. I think we will come to some steady state of understanding with AI, but it may not be soon. It's hard to say. Certainly, when the workable aspects of AI don't change as quickly, we won't have as many things to talk about, but then we'll probably loop back as we're implementing it more and more in online education and as we see new tools come out. Yeah. Like we kind of joked about the AI plugin, but we'll probably have a thousand ed tech companies over the next year coming up with their new solutions, right, for AI. Yeah. And it'll be hard not to talk about those as they come out, as new innovations happen. So, I'm not sure what will get us off it, but we'll try to cover some things other than AI. It will probably be something we return to now and again, I think.
[00:29:38] John Nash: We will. I think we'll run reviews on those tools as we start to see them, and maybe even bemoan the demise of some companies we didn't expect to go, thanks to AI coming on the scene. Right. I think one pivot we can make away from AI that still references it is assessment. Think of all the talk we've been seeing about the problems with generative language models creating essays that students might turn in for writing tasks in English classes. I think we could start to have a productive conversation about how we can support teachers in coming up with authentic assessments that still let writing occur, but without ever having to worry about whether or not the submission was AI generated.
[00:30:26] Jason Johnston: Yes. Yeah. And in that, there's more conversation to be had in terms of where those lines are crossed, and for different disciplines, probably, they're crossed in different ways. And how we manage that so that our students continue to think. Because that's the bottom line. We want them to be learning and thinking.
Yes.
There are some ways in which it matters not to me if they're using an AI, but what does matter to me and so many teachers is whether or not they're actually continuing to learn and think, for
[00:30:54] John Nash: sure. All right. Those are good conversations to be had. Okay.
[00:30:57] Jason Johnston: Yes. Thank you, John. This is great. Check out our LinkedIn page, Online Learning Podcast, and also onlinelearningpodcast.com, and please join in the conversation. We'd love to hear what you think about what we're saying and all this that is going on. So yes, please. Thanks so much for listening.
[00:31:17] John Nash: Yeah, thank you. Everyone: tell us what we should talk about next. We'd love to take up the topic.
[00:31:24] Jason Johnston: Absolutely. Good talking to you, John.
[00:31:25] John Nash: Thanks, Jason.
