
Online Learning in the Second Half
EP 28 - Spring 24 Check-in focusing on AI in Education: Navigating Ethics, Innovation, Academic Honesty, and the Human Presence online.
In this Spring 2024 check-in, John and Jason talk about AI-created voices, the importance of human presence in online education, the challenges of AI detection like Turnitin, and insights from their spring conferences and presentations. See complete notes and transcripts at www.onlinelearningpodcast.com
Join Our LinkedIn Group - *Online Learning Podcast (Also feel free to connect with John and Jason at LinkedIn too)*
Links and Resources:
- ElevenLabs AI voice generation (on OpenAI)
- John's deck from his presentation at ASBMB - AI as an instructional designer and a tutor.
- The Ezra Klein Show - Interviewing Dario Amodei
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript
We use a combination of computer-generated transcriptions and human editing. Please check against the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
False Start:
John Nash: Okay, we'll get AI to fix that.
Jason Johnston: You can maybe get AI to fix that.
Intro:
AI Speaker 1: Hi, I’m not John Nash and I’m not here with Jason Johnston.
AI Speaker 2: Hey, not-John. Hey, everyone. And this is Online Learning in the Second Half, the online learning podcast.
AI Speaker 1: Yeah, and we are doing this podcast to let you all in on a conversation we’ve been having about online education for the last few years.
Look, online learning has had its chance to be great and some of it is, but some of it isn’t. What are we going to do to get to the next stage, not-Jason?
AI Speaker 2: That’s a great question. How about we do a podcast and talk about it?
AI Speaker 1: That sounds great. What do you want to talk about today?
AI Speaker 2: I’ve got a big question for you not-John. Are you ready?
AI Speaker 1: Okay, shoot.
AI Speaker 2: If we carefully and lovingly create a script for an online learning video (or podcast) but then have AI-created voices read that script, are we humanizing or dehumanizing online learning?
AI Speaker 1: I’m just a text-based large language model chat-bot and I don’t think I’m equipped to answer that question. Maybe we should bring in the real John and Jason? John? Jason? What do you think?
John Nash: I think it's a great question, real Jason.
Jason Johnston: Yeah, real John. It's good to see you in real Zoom. And that is a great question that our chatbots pose for us today. Do you have any initial responses to it? If we use AI tools to lovingly create our scripts for online videos or for podcasts, are we dehumanizing or humanizing these experiences?
John Nash: Well, it's a classic academic answer, isn't it? It depends.
Jason Johnston: Depends.
John Nash: But I think used exclusively, I think it does dehumanize. I think used judiciously and with an agenda to humanize, I think they could be helpful, but the jury's probably out because it's all context, isn't it?
Jason Johnston: Yeah, definitely context and it gets into some philosophical questions as well, when we talk about humanizing. There is the act, there is the perception, right? And so, this goes back to some of the things that are going on even with AI telehealth, and so on. Or AI therapy.
If the people don't know, does it matter? Does it feel human? Have they had the experience of being with a human, even though it wasn't a human? And then does it matter? I guess there's an ethical question there: it matters because we want to be transparent and we want to be honest with people.
But at the end of the day, if they feel like they've been in a humanized situation, maybe it gives them a positive outcome.
John Nash: Yes. Yes. Yes. I think we discussed that last year a little bit. Yes. So essentially what we're saying is that if we fake them into feeling belonging, then that's okay.
Jason Johnston: Yeah. As long as maybe we're not being dishonest with them. Or, I shouldn't say maybe: as long as we're not being dishonest with them. I think that would be the cutoff for me, if people knew what was going on.
John Nash: Okay. Fair. I think so. You say, you're about to engage in a scenario we've created that is designed to help you feel more belonging with regard to the activities we're doing as a group, maybe in our class. We used generative AI to create some of it, and we'd like you to engage in it and then let us know.
I think that would work.
Jason Johnston: Yeah, I think so. So, we started with this. There was a moment when you could invoke ElevenLabs, this company, through ChatGPT; you could invoke their GPT to create voices for you. And I was just playing around with it and came up with this intro script, because I thought it would be fun just to start off with. I'm not planning to replace you, John, just so you know.
I have no intention of replacing you. I enjoy our conversations too much, and respect you too much as a scholar and as a friend, to replace you, just so you know, in case there was any concern or question.
John Nash: I have been trying to get fired from this podcast, and I thought this was my chance, but I'd be labeled redundant. Isn't that what they say?
Jason Johnston: Well, I know you wanted to take the summer off, so maybe it could just be a temporary replacement. We could get your voice and do Summer John. Yeah, that'd be all right.
John Nash: Well, your new dog, Kevin could take over the podcast for the summer. Yes.
Jason Johnston: Yeah. He would have some great things to add, I'm sure. The really interesting thing about this, and I'm not saying this intro is perfect by any means, but we've talked about this a couple of times, is just how quickly things are moving right now with AI. Even a year ago, the emotion maybe wasn't there with AI-created voices, and now it's starting to come into its own.
I think some of the early pushback against AI voices that I have found, from an education standpoint, is like, well, students aren't going to like it. It sounds too fake. And so, in that way, it's just not going to be a great experience for them. Well, we may be moving past those kinds of arguments against AI voices in online education.
But now we're moving towards, well, maybe it's fine for some things. Obviously we need to think about teaching presence, right? Community of Inquiry, creating a great educational experience for students. Having a teaching presence within the online class is super important; it makes a difference for students and for teachers. I'm a hundred percent in on all of that. However, still within that, we pay voiceover people to do some slides that are going to be evergreen for us, that maybe last beyond a teacher, or maybe are shared among a number of teachers teaching different sections. And so I think we're probably just moving to a place where we're going to see more and more of this in online teaching.
And I think maybe it's going to be okay. What do you think?
John Nash: It reminds me of our conversation in the middle or end of our ethics episode this calendar year, where we were discussing what I'll call scope creep, or maybe job creep.
Jason Johnston: Yeah
John Nash: I think it depends. Is this going to be a replacement technology? If there are professionals in your circles who are already doing this work, and then a new person comes along, someone whose station it's not to do that work, but the technology will allow them to do it,
will they be stepping on toes? That's the first thing that comes to mind.
Jason Johnston: Yeah, I think there are questions to be answered at every level, as we've talked about before, in terms of contextual ethics within your departments. And I was thinking about that this last week. I have the advantage at the University of Tennessee of having people; we have humans who can do these things, right?
So it is more of that kind of question: well, I shouldn't be using AI when we already have humans to do things. But this last week I was at a conference and talked to a lot of people who are a team of one, right? They're expected to produce multiple courses, and expected to be high quality.
And they're maybe working at a community college or other colleges that are just not as well funded. And I think there's maybe a different answer to the question in some of those areas. What do you think?
John Nash: I do. And I think you're right. Again, we're in that world where we say it depends. Many professors are teams of one, managing course loads. They don't have ready access to a center for teaching and learning, or a set of instructional designers, or production-level tools.
And so they want to create some evergreen material. Maybe they think their voice isn't up to lecturing for 15 minutes on video and staying stable. So these tools could be useful.
Jason Johnston: Yeah. You have a hard time saying a complete no, across the board, for everybody in every place on these kinds of things. That being said, though, I'm feeling more confident saying no in my particular context on a lot of them, where I prefer for humans to do the human things when it comes to graphics and music and voice and so on.
And certainly we don't want to replace professors, and we have no intention of that, because I do believe there needs to be trust in a real teaching relationship, and I think you build that through teaching presence and connection with the students.
John Nash: Yes. And so I think that's probably the framework we should be talking about all the time: connection and presence. If the affordances of these tools let us advance that, I think we're in a better place.
Jason Johnston: Yeah, that's good. Well, we got right into it, didn't we? The AI voices spurred a conversation. But we did want to do this little spring check-in just to see what's going on. So what have you been up to this spring of 2024?
John Nash: Spring has been busy, not only with teaching two courses, both in person on campus, but April and May were sort of AI-related and teaching-related too. I was out and about in different places. In April, I was at the Lamar Institute of Technology in Beaumont, Texas.
Jason Johnston: Okay.
John Nash: For their professional development day. Really impressive what they do there. Once a year they close: no classes are held, and all employees, from classified staff, even janitorial and buildings and grounds, up to the provost and president, come together for one day of learning on this professional development day.
And they decided to focus a little bit on AI and I was invited to give the keynote address.
Jason Johnston: Nice.
John Nash: On AI and its role and future in higher ed. And then I did some workshops: one on prompt writing, and one on the ethics of AI, talking about crafting an ethic of care like we have.
Jason Johnston: Nice.
John Nash: I gave them some worksheets to think through how teachers could be thoughtful about integrating AI into their work. So that was great. A big shout-out to Dr. Angela Hill, the provost at Lamar Institute of Technology, and also Beth Knapp, the executive director of human resources. They put on a great program.
Gosh, and then I was in San Antonio; it was a Texas-focused spring. I was on a panel on AI in the classroom at the annual meeting of the American Society for Biochemistry and Molecular Biology. This is a gigantic annual meeting, held in the convention center in San Antonio, filled with biochemists and molecular biologists.
This was with Craig Streu from Albion College, John Tansey from Otterbein University, Emily Ruff from Winona State University, and Susan Holacek from Arizona State. Have you run across Dr. Holacek's work before? I know you've been running around ASU a bit. It was a session on AI in the classroom, and I talked about large language models as two things: as an instructional design partner and as a teaching partner.
I talked about the John Hattie bot prompt that Darren Coxon has shared and how it could be used for instructional design. And then I drew on Ethan Mollick's work on deliberate practice and turning LLMs into tutors. In fact, I've got a deck that I put on Gamma, which we can put in the show notes, so everybody can see this live web page with links to a whole bevy of scripts and prompts.
And then the last one was in Nashville. This one was a lot of fun. I was in front of about a thousand folks on a panel at the Health Care Compliance Association's Compliance Institute in Nashville.
It was with Brett Short, who is the University of Kentucky HealthCare's Chief Compliance Officer and Chief Privacy Officer. No simple job, I'll tell you. And Betsy Wade, who's with Signature Healthcare as the VP of Compliance, and then an attorney from New York, Christine Mondos, with Ropes & Gray.
It was a fascinating discussion about what healthcare compliance officers should be worried about in the presence of AI. And it's not just about worrying about LLMs and the use of chatbots, but also where AI has penetrated a whole host of medical-related software and devices, and where healthcare folks may be in or out of compliance, for instance where they're using AI for patient use that is not licensed for patient use.
It really opened my eyes, Jason, to the way we've been talking about AI, mostly around chatbots and ChatGPT and how LLMs are infiltrating work. But on this other side, in a lot of universities, and across hospitals and universities with medical centers, people may not understand what de-identified data necessarily is.
They think things are de-identified when they're not. Twenty-six states are considering laws on the use of AI in medical situations and how patients will be informed about its use. It's fascinating. So that was a lot of fun to talk about. Yeah, a busy spring of talking about AI.
Jason Johnston: That does make for a busy spring. So if you all noticed that our podcast dropped off a little bit there, now you'll know why. But we're back at it. I'm curious. It was really interesting that you were pulled out to this institute of technology, and then you were with biochemists, and then with healthcare folks.
What is the general feeling, optimistic or pessimistic, would you say, out there in the world beyond education?
John Nash: I think it's a balance. My new friends at the Lamar Institute of Technology were optimistic; in fact, I was in many ways too. I appreciated the provost's perspective: it's a community college where half their graduates go on to four-year institutions for academics, and the other half go into the workplace through workforce development.
And in that light, they see themselves as needing to compete. So how might AI make them more competitive in the way they think about their work, what they do day to day? Let's be sober and forthright about what its possibilities are. I talked to a lot of instructors who are worried about their students using it in academically dishonest ways.
And so we talked about ways in which those could be teachable moments, and ways they could think through their own assessments. So I think it's a balance, but overall the administration is optimistic.
The panel on use in the classroom with the biochemists and molecular biologists was pretty optimistic, and all the other panelists were talking about ways in which it could be used. Emily Ruff from Winona has done some empirical work looking at students' reactions to it and where it's been helpful and not helpful.
So I think it's overall optimistic. For the healthcare compliance officers, it's a balance of, I think, mostly awareness, and being careful that you're not breaking the law or violating patient confidentiality, because if you make that mistake, the federal government comes in. And this is the other big difference, Jason, between what's happening in that sector and what we do day to day on the academic side of the house: the federal compliance spanking is severe, so you have to be very thoughtful there.
Jason Johnston: Yeah, we've got FERPA, of course, but it feels like the FERPA police very rarely come in and actually do much of anything.
John Nash: Not like the HIPAA police.
Jason Johnston: Not like the HIPAA police, which makes sense in many ways, because we're dealing with people's health care. Yeah, exactly.
John Nash: One of the common challenges across all three of these groups is understanding whether the systems you're using are open or closed. For instance, are you inside your institution's walled garden? Is the information you're feeding into it staying there and not feeding the models, or is it going outside?
That's a big concern in healthcare at any rate, because the tools are so opaque about whether AI is baked in, and it's baked into almost everything now. I don't know if you use WhatsApp, but if you've noticed, WhatsApp started to put AI right inside the app itself, at the top. So forget it, the age-13 gateway is gone now, because generative AI is being stuck in all the apps without users really being told.
So I think that's one thing everyone had in common: what do we understand about how data gets shared?
Jason Johnston: Yeah, it's fascinating. It's again one of those situations, as you said with health care and everything else, where AI is just being rolled out. WhatsApp is the same company as Instagram, the same company as Facebook, right? And so you now see it everywhere. You can chat with AI. And so it's here.
There's no stopping it, really, when it comes to academic dishonesty. I asked my kids a little while ago, do kids log on to ChatGPT and so on? And they're like, oh no, mostly they're just asking Snapchat.
John Nash: Yes.
Jason Johnston: Yeah. Okay. So what do you do to stop that kind of thing when it's just baked into all the technology that we're using?
John Nash: Yes. That's right. And so it makes me think about where this is going: it starts with not only simple, air quotes, GPT-style chat getting embedded into apps, but then it all becomes more sophisticated and embedded across other tools, and that will be another thing. I want to talk more about that, but first I want to hear what you've been up to.
Jason Johnston: It's been a busy semester, on top of all the day-to-day things that I do. Lots of hiring. We're growing at the University of Tennessee. There's a strong push towards online learning, and I think for good reason. We're really trying to reach out to a lot of undergrad folks who have started a degree.
They have some college credits; we have almost a million people in Tennessee who started undergrad and never finished. And so we're trying to build out those courses. We're building up and have hired some great new instructional designers; I work with some fantastic people, and I'm very thankful for all that. On top of all that, I helped lead an AI workshop in April called Thoughtful Teaching with AI.
And one of the cool things about this that I really enjoyed is that I was able to partner across units. We're Digital Learning, the centralized online learning department, and we were able to partner with our teaching and learning office, shout-out to a colleague there, Chris Kilgore, and also our writing center.
And shout-out to Matt Bryant Cheney there. We connected with them, and with our Office of Information Technology as well, to create a workshop together using all of our perspectives. We brought our different angles to this two-day workshop with faculty, focused on teaching with AI: thinking about creating assignments with AI, being thoughtful about it, and building it into the curriculum in a way that is human but also impactful. So that was a lot of fun to do, and interesting. As a reminder to those out there in similar spaces trying to help with professional development and education: there are still a lot of basic questions out there, around dishonesty, as you were talking about, and around just usage. Like, where does my information go?
How is it used? What does a good prompt look like? What is a chatbot versus an LLM? Those kinds of things. So we still need to be teaching and talking about these basics when it comes to AI.
John Nash: Yeah. So much of what I thought would be solved by now is...
Jason Johnston: Right. Right. Yeah. And then I just came back, like, yesterday from the Digital Universities conference in St. Louis. This is a conference put on by Times Higher Education, which I was not as familiar with, but I'm very familiar with Inside Higher Ed, as many of our listeners, and you yourself, probably are as well.
And I was on a panel with Rachel Brooks from Quality Matters, Flower Darby, and Brian Beatty, with a great moderator from Inside Higher Ed, Jamie Ramacciotti, and we were talking about achieving access through equitable course design.
We had a great conversation and some good feedback from people in the audience. It was really interesting to hear the different approaches to even defining what equitable course design looks like. We've got some things that we all land on, in terms of UDL and making things accessible, but beyond that, really, what is the definition? There were some varied approaches: Brian and Rachel were less likely to want to land on a definition.
Flower Darby, who's done lots of writing in this area, had a little clearer idea of how to move ahead.
John Nash: Nice. Nice. And you mentioned to me, I think, that there were also some presentations from vendors; in particular, Turnitin was there.
Jason Johnston: Yeah, it was really interesting.
John Nash: Can you talk about that?
Jason Johnston: Yeah. And without throwing anybody under the bus at all, but we do talk about ed tech, and UT is a Turnitin university; we have Turnitin on. It was really clear to me that they were there on a really strong PR push. I think they've probably gotten a little bit of backlash on some of their AI detection, which they turned on and then turned off.
And it was really clear that they were there to strongly let people know that their purpose is student learning and good outcomes for students, not catching cheaters; that's not their focus. I'm not sure about that. It may be that they're doing not just a rebranding but a change in the organization itself.
I had a hard time, I think, hearing some of those words without some skepticism, and without feeling that it's easy for them to say that now that maybe they're losing some of the market position they had before, and so they're trying to reinvent themselves into something else.
I'm not sure. I'm not judging anybody's motivations for being there. Just on face value, I think we need to continue to take a critical digital approach when it comes to working with our ed tech partners.
John Nash: Certainly. Does it feel like they still want to try to detect AI written work?
Jason Johnston: What was interesting is that they seemed to present it as if they could very clearly detect AI-written work. There was no time for questions for this person, the main guy, I don't know who it was, who was doing a bit of a keynote talk. There was no time for questions; he just gave a spiel and then left.
But yeah, he very clearly put up on the slides that they're able to detect AI, and this is what it looks like right now.
John Nash: Ah.
Jason Johnston: There was no chance for me to stand up. I guess I could have stood up and just dissented while he was talking, but I guess I have a little more,
John Nash: Yes.
Jason Johnston: maybe social
John Nash: More decorum?
Jason Johnston: Decorum than that.
John Nash: It's not like the British Parliament, where you stand up and just yell "rubbish."
Jason Johnston: Exactly, and start pounding the desks and so on. Yeah, if I'd known this was coming, I could have worn my AI Detectors Don't Work shirt or something like that and made it more of a silent protest; I could have just had it on without having to interrupt him.
John Nash: Well, fascinating. I don't know what to think of that. I want to believe that we're moving beyond that, but I guess, for a company called Turnitin, which made its way by detecting plagiarism in plain old written essays back when we used to do that, right?
What do they do now?
Jason Johnston: Yeah. Well, you had an experience recently, right? At your school. Are you able to share about that a little?
John Nash: Yeah, just a little story from a colleague, which I was contrasting with a great interview with Dario Amodei, the CEO of Anthropic, the company that makes Claude, the LLM. He recently shared some pretty mind-bending insights on the Ezra Klein Show about how AI is evolving and where it will go: this exponential growth in AI tech, and that in the next 18 months to three years we could see things like AI planning our trips. It's already writing code.
It's going to be integrated into our tools even more. And this conversation struck a chord with me when I thought about a situation a fellow professor shared. She had caught a student using AI to write a paper; they turned that paper in, and she thought it was written by AI, it felt like AI.
But this same student had sneakily passed it on to another student, who also submitted it as their own work. So we have not only academic dishonesty in terms of use of, say, ChatGPT, but then full-on plagiarizing and cheating in the old traditional sense by this other student. How she handled it is really not the point; she was throwing up her arms a little, saying, well, what do we do about this sort of thing? It was a snapshot of the massive ethical puzzles we're now facing thanks to AI. But there's also what Amodei is talking about: AI getting so good at handling complicated stuff that soon chatting with AI is going to feel as natural as talking to you and me. And here we are today, trying to figure out how to keep AI from turning our students into copy-and-paste wizards.
So it was a bit of a reality check for me about where we need to be. My story really ends with a question: what's the game plan now for us as educators? We're still stuck trying to figure out how to assess well in the presence of AI, in courses that have AI-able assignments.
So what will we do? How do we push this conversation forward? I think we still have to think about AI as a force for good in education, but that has to come with more conversation. It makes me realize I'm not having enough conversations with my colleagues about the ethics of using the tool, transparency about using the tool, and where it can benefit them.
Jason Johnston: Yeah, and I think that benefits, ethics, and transparency are things we can continue to look at. What we can't do is make policies based on where AI is today, right? As Sam Altman, I think, said just this last week: what you're using right now is the worst AI that you will ever use.
It was something like that.
John Nash: Did he say that? I know Mollick has been saying that too, yeah.
Jason Johnston: This is it. This is the worst version of ChatGPT that you will ever use. I've heard through the grapevine that ChatGPT 5 is coming out this fall, and it's going to be like 100x what we've experienced. I don't know what that means exactly, but I just think that we can't look at it today and say we've got to make policies based on the quality we see right now.
I think we can focus on some of these other approaches that should stand the test of time. Transparency. Whether it's 100x in six months when we start the fall semester, or not, I think transparency will still be a thing we want on the table.
Right?
John Nash: Yes. Yeah, definitely. I get a little newsletter every morning called The Bay Area Times. They were noting that OpenAI recently showed off an unlaunched deepfake voice tool. It only requires 15 seconds of reference audio to clone a voice. Now, we were talking earlier about how it would be nice to have some voice-generated material for instruction, but we weren't talking about cloning or deepfaking voices.
If you only need 15 seconds, I think that's pretty amazing, and frightening.
Jason Johnston: Yeah. There's the Hard Fork podcast, which both you and I are fond of. They just did an episode, and we can put a link in the notes, about a situation where a principal was put on suspension because somebody had used a deepfake of his voice that sounded really realistic. And not just realistic; it sounded like somebody might have just recorded him in a hallway through their phone, saying things that he didn't say.
John Nash: Yes. And another example, in a school in Southern California, I think, of students who were suspended for making deepfake images of female classmates
Jason Johnston: Right.
John Nash: that were pornographic. Really terrible stuff. And I think it shows how important it is for school leaders, both in P-12 and in higher ed, to be thinking about how we'll get in front of this stuff.
Do the existing policies you have really get at it?
Jason Johnston: Yeah. One of the sessions I was at this last week at the Digital Universities conference was by Dr. Robbie Melton, who is the Interim Provost and Vice President for Academic Affairs Technology Innovations at Tennessee State University. She was talking about the impact of AI on minority-serving institutions, of which hers is one. One of the things she was stressing was that if you do not understand what AI is doing, then you need to.
Not everybody has to be an expert, but everybody needs to understand the capabilities. And she said, "This is why, if I'm showing a demonstration, I don't show them ChatGPT 3.5. We go for 4. This is why I keep up on all these things, so I know exactly where it's at, because people need to understand where it's at and where it's going in terms of its capabilities. People underestimate what's going on."
And I think it's the same thing in our schools: really understanding where all of this is at. As leaders, we do need to have at least some sense of where the technology is today and where it's going tomorrow.
John Nash: So if people are interested in listening to that episode, it was the Ezra Klein Show, where he interviews Dario Amodei, D-A-R-I-O, A-M-O-D-E-I. It's a really interesting picture of where the leader of one of the frontier generative AI model companies thinks this will all go.
Jason Johnston: Yeah. That's a great series with Ezra Klein. Again, it's good for all of us to expand our understanding of where things are at and where they're going.
Well, it's great to catch up, John. It's nice to see you after all the busyness of the semester. We've got a couple more podcasts coming up with some amazing guests.
And then we'll do a summer break and wrap-up. But as always, our podcast can be found at onlinelearningpodcast.com. Wherever you listen to this podcast, if you can, please leave a review. That would help us know how things are going, as well as help the algorithm get it in front of other people who like similar podcasts.
And find us on LinkedIn, of course; the links are in our show notes as well. We'd love to hear back from you about what you think about this podcast and everything we're saying here.
John Nash: Please like, comment, and subscribe. Yeah, we have three more episodes in the hopper coming out with some amazing guests. I'm excited for those, and excited to talk about summer plans after that.
Jason Johnston: Sounds good. All right.
John Nash: Cool. Talk to you later.
Jason Johnston: Talk to you soon. Bye.