
Online Learning in the Second Half EP 8 - Comparing and Testing AI for Education (But Can It Write a Good Theme Song?)
In this episode, John and Jason talk about John's use of AI in his doctoral mentoring and personal research, whether prompt engineering will become a job, comparing large language models, detecting AI writing, and whether AI can create a podcast theme song.
Join Our LinkedIn Group - Online Learning Podcast
AI Large Language Models to Test
AI in Research Tools
AI Detection Tools
Links and Resources:
- How to cite ChatGPT in APA
- The Various AI-Generated Podcast Theme Songs Google Doc
- Please comment and let us know what you think and what you like!
- “GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models” by Tyna Eloundou, Sam Manning, Pamela Mishkin, and Daniel Rock. https://arxiv.org/pdf/2303.10130.pdf
- “Artificial muses: Generative Artificial Intelligence Chatbots Have Risen to Human-Level Creativity” by Jennifer Haase and Paul H. P. Hanel. https://arxiv.org/pdf/2303.12003.pdf
Opening theme music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Closing theme music: Eye of the Learner - composed and arranged by Jason Johnston
Transcript:
We use a combination of computer-generated transcriptions and human editing. Please check against the recorded file before quoting anything, and please check with us if you have any questions or can help with any corrections!
False start
[00:00:00] Jason Johnston: When to pull out the AI tool so that you're not bringing another party too early into the conversation.
[00:00:08] John Nash: Hmm.
No, I don't. No, I know. I don't know. I just know it when I feel it.
[00:00:14] Jason Johnston: Kind of like love.
[00:00:16] John Nash: Yes, exactly.
Start
[00:00:19] John Nash: I'm John Nash. I'm here with Jason
[00:00:20] Jason Johnston: Johnston. Hey John.
Hey everyone. And this is Online Learning in the Second Half, the Online Learning podcast.
[00:00:27] John Nash: Yeah. We are doing this podcast to let you in on a conversation we've been having for the last two years about online education. Look, online learning's had its chance to be great, and some of it is, but a lot of it isn't. But we can get there. How are we going to get to the next stage, Jason?
[00:00:44] Jason Johnston: That is a great question. How about we do a podcast to talk about it?
[00:00:48] John Nash: I agree. Let's do that. What do you want to talk about today?
[00:00:53] Jason Johnston: Well, I was curious about where you are at with your testing of different AIs, and how that relates to your own teaching and mentoring of students right now.
[00:01:10] John Nash: The stage I am at now is where I have been for the past few weeks, which is having conversations with my doctoral students about ways that ChatGPT can be helpful in focusing their writing on difficult issues to express. I've talked a little bit about this in the past, but many of my students are doing mixed methods action research dissertations that are kind of micro-politically fraught. They are about issues that are of importance to them, about creating change in an organization. And so, while they know tacitly what they want to say and what they want to do, because they're in the middle of the situation that they want to change, it can be hard to express that clearly.
Right.
So, we'll sometimes use ChatGPT as a way to think about the big pillars that they want to dive into.
Mm-hmm.
And have it help us make some connections that we might not otherwise be able to make across these seemingly disparate aspects of the organizational change.
[00:02:16] Jason Johnston: Hmm. So let me picture it then. Do you use it in real time? So, you'll be in a Zoom with one of your candidates? Yes. And then you pop it open and use it in real time. Describe that. Like what you might do if I was your doctoral candidate and I was having a difficult time getting down to the point, or trying to describe something, or brainstorming, or whatever it is.
[00:02:41] John Nash: Yeah, I won't have it open as the purpose of the call, but we'll have weekly meetings on progress towards a prospectus or some aspect of a proposal, and we're discussing some part that the student is stuck on: I'm not sure how to express this, or this new wrinkle has come up and I'm not sure how we're going to handle that new aspect. And as they struggle to think about how that should look, I'll ask them to say a few big sentences about what they think the key issues are on that matter. And then I'll say, well, let's just open up an AI model and let's see what it can do with these ideas. So, then I'll share my screen and I'll say, talk to me out loud about the three big bullet points you think are important here, and put those down. And then we'll say, well, what is your goal here? Well, I want to figure out how to connect these ideas, because they seemed sort of disparate.
Mm-hmm. And we'll say, well, let's put that in there: connect these three ideas, and what are the similarities, and what are some things that are maybe different about them? And then it'll spit out some output that we'll look at together and discuss together, so they can have some thoughts on the direction they want.
Great question, but I didn't answer it
[00:03:51] Jason Johnston: That's great. So, we've done a lot of conversation about how maybe AI can help us to think through things without replacing our thinking. Do you have any kind of litmus test for when to pull out the AI tool, so that you're not bringing another party too early into the conversation?
[00:04:13] John Nash: Hmm.
No, I don't. No, I know. I don't know. I just know it when I feel it.
[00:04:19] Jason Johnston: Kind of like love.
[00:04:21] John Nash: Yes, exactly.
Pivot because I didn't answer the question
[00:04:24] John Nash: Another way that I have used ChatGPT is to train it on the criteria for rating sections of dissertations, based upon key authors whose ideas we're interested in having students adhere to inside aspects of literature reviews or research designs, particularly in the context of mixed methods action research, which has a different tack than a traditional sort of five-chapter, theory-building, knowledge-creation dissertation. It's very action oriented. It's very contextual. It's locally bound. And so, the AI model has been helpful in helping students adhere to some of those structures that they can miss, because it's important to be very detailed in these reports about these settings in particular, and the stakeholders that you have talked to, and the kinds of information you've collected from them. So, I've been able to use ChatGPT to teach it what it should be looking for with these key things, and then subject the model to student writing, to also help me catch things that I might miss in the context of their narrative.
[00:05:33] Jason Johnston: Hmm.
Yeah, those are great use cases. Do you have some prompts depending on your purposes, on what you're trying to do?
[00:05:39] John Nash: I have different prompts for different purposes. A significant portion of the studies that these students do is predicated on a diagnosis of a problem of practice, a leadership dilemma in their organization. And so, I have some prompts that help with the diagnosis section, to say whether or not there's help needed in tidying that up. But there are also prompts for research design. Since these are mixed methods designs, they might be concurrent or they might be sequential designs. And so, needing to be very detailed in that, I have prompts that help suss out matters related to how well those sections are written.
[00:06:15] Jason Johnston: Hmm. So, you kind of keep those prompts handy. So, you have some that you've already created; you don't necessarily always create those prompts on the fly. Yeah. Okay. So, you have like a prompt menu.
[00:06:28] John Nash: Yeah. A colleague of mine and I have created a OneNote document on OneDrive where we keep different types of prompts for different issues, across different kinds of tools. So one is for ChatGPT, and we have a whole catalog of prompts that we use for common issues in there. But also, for some of the research tools that are out there, we have prompts in the OneNote for Research Rabbit, or for Elicit, or Consensus, and some of these other tools that are used not for student-facing work necessarily, but maybe for our own research and the kinds of things we're thinking about.
[00:07:02] Jason Johnston: Mm-hmm. Yeah. I think that prompt engineering is, do you think that's going to be, like, a position in research centers or at companies? Because it's a thing, right?
It is a thing. It is a thing.
You're keeping these because it takes time to really craft a prompt that gets you... It's like classic computing, really. AI is classic computers: garbage in, garbage out. If you don't give it a good prompt, it may kind of read between the lines a little better than it used to. It won't spit back an error, and maybe that's to its detriment. It may just spit back what it thinks you want to hear.
Mm-hmm.
Without really understanding. But, I mean, simply the fact that you're keeping a list of good prompts shows that it's enough work that it's worth it to you to keep a list of them, right?
[00:07:56] John Nash: Yes. That's fair. Two things that I hear you asking. One is, do I think that'll be a job one day? I don't think so. I know there was a lot of chatter across different publications that this is going to be a thing and that there are high-paying jobs to be had for good prompt engineers. I'm not certain that's going to be the case. It is true that you do have to prompt these machines very carefully to get good quality responses back. But I get the feeling that it's going to be more like any of us tech geeks that are out there that just became good at a tool, and then, you know, they're going to be the people that are relied on to figure out how to do this.
Right.
That being said, I think AI is going to take care of itself in this regard. I mean, all of the prompts that are going in now surely are probably cataloged somewhere. We have to remind everyone that everything we're talking about, the shifts that are happening across labor markets and across education, the dialogues happening in P–12 education around teaching and learning, are all based on tools that are six months old and in public beta.
Right.
Full stop. So, I think, you know, the prompt engineering is going to be handled by the AI models, because it'll probably teach you how to ask it. Like, it'll probably come back with better Socratic questions: did you mean this? What do you really want to do here? Because it is garbage in, garbage out at this point. You write a poor prompt, which is basically asking it one sentence, you know, write my paper for me.
Right?
That gets you nothing. The reason why my colleague and I catalog these prompts is because they're lengthy. Because they really are. And I think also, Jason, it's a misnomer. When we say, did you engineer that prompt, did you write that prompt, it sounds as if it was just the one question to get it. These are six or seven prompts nested after each other, based upon scaffolding some information that you need the model to know before you can go to the next thing. So that's why these have to be saved. You can never remember them to get them right once they work.
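[Editor's note: for the curious, here is a minimal sketch of what one of these "nested," scaffolded prompt sequences can look like in code, assuming the OpenAI Python client. The advisor persona, criteria, and prompt text are hypothetical placeholders for illustration, not John's actual catalog.]

```python
# Sketch of a scaffolded, multi-turn prompt: each message gives the model
# context it needs before the real request is made.
# Assumes the `openai` package (v1.x) and OPENAI_API_KEY in the environment;
# all prompt text below is a hypothetical placeholder.
from openai import OpenAI

client = OpenAI()

messages = [
    {"role": "system",
     "content": "You are an advisor for mixed methods action research dissertations."},
    # Step 1: teach the model the rating criteria before asking it to apply them.
    {"role": "user",
     "content": "Here are the criteria a diagnosis section should meet: ..."},
]
first = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# Step 2: only now subject the student's writing to those criteria.
messages.append({"role": "user",
                 "content": "Using the criteria above, review this diagnosis section: ..."})
second = client.chat.completions.create(model="gpt-4", messages=messages)
print(second.choices[0].message.content)
```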
[00:10:08] Jason Johnston: Yeah, once they work. Yeah. And it's a learning process for all of us. I mean, I think it's one of the reasons why as you and I are interested, we share some of the prompts and responses back and forth because,
Mm-hmm.
It's interesting to us, first of all, but it's also like, oh, look what I got it to do today, kind of thing. Right.
And I think all of that is informative as we learn how to work with AI in more productive ways.
[00:10:37] John Nash: Mm-hmm. Definitely.
[00:10:40] Jason Johnston: Yeah. So, as you're doing these prompts, do you have a particular AI language model that you reach for all the time?
[00:10:49] John Nash: I reach for ChatGPT, and yeah, I pay 20 bucks a month to use ChatGPT-4.
Okay. Yeah.
Yeah. And even that's limited to, like, 25 prompts over a three-hour time period. They even gate it now at that level. But the quality is, I think, better than ChatGPT-3.5, although it's slower. ChatGPT-3.5 is not bad for sort of perfunctory administrative matters: you know, analyzing some memos or putting something out quickly.
Yeah, that's the one I tend to reach for.
[00:11:23] Jason Johnston: Okay. Well, that's interesting. I didn't know that you were paying for it. Yeah. Do they send you a tote bag or anything like that? Do you get any stickers or a badge to put on your
[00:11:33] John Nash: I have to ask it to design my own tote bag. Yeah, I have it write the prompt for Midjourney so I can design the logo for the tote bag that they would get.
[00:11:42] Jason Johnston: Right, right. That's good. Yeah. Still, currently, if you're using ChatGPT-4 strictly at OpenAI, it cannot access up-to-date information, right? It's still 2021 and previous, kind of thing.
[00:11:57] John Nash: So, I will use Bing now and then to pull some internet responses, where I think I'd like to look at some literature or find some ideas on some journal articles or other items to chase down. And then I can take those findings and put them in something like Research Rabbit, which builds a sort of semantic web of literature based upon titles of journal articles you put in there. So, then I can find related research in an area that I'm interested in.
[00:12:29] Jason Johnston: Mm. Yeah. I played across a number of them. And I'll tell you in a minute about something kind of fun that I did, but for something more serious, I played across a number of the AIs trying to get them to summarize some articles. I'd read a pretty decent article about AI, and what I liked about the article is that it had a lot of references, a lot of current references to articles I had never heard about.
Okay.
And so, I took the references and then I was trying to prompt the different language models to give me summaries of the articles.
Mm-hmm. So, saving me the time of having to go out, find the article, download it, and get a summary. I could read the different abstracts, but I wanted it to actually summarize the main points. And I had some varying results. Some of it had to do with currency, because some of the articles were so current that ChatGPT couldn't register them; it didn't know they existed to look them up. Bing was able to do it better for me in that case, because it had up-to-date information as well as GPT-4 to leverage. And so, I found my best results really in either Bing or in Bard. Google Bard can also access up-to-date information.
[00:13:52] John Nash: Bard's getting better. I have to admit, I haven't tried Bard yet.
[00:13:56] Jason Johnston: Yeah. And for this kind of task, in terms of summarizing, I found that Bard was actually pretty good and gave me some pretty good summaries of these articles. Now, of course, I'm trusting Bard that it's actually summarizing the article and not just making things up. However, in this case, because I just wanted a quick summary of these articles to see which ones I might be interested in reading more of, it was actually pretty helpful, I think.
[00:14:23] John Nash: That's excellent. And did you come to this conclusion because you were using the other system that compares them? Is that called Poe?
[00:14:32] Jason Johnston: Yeah. So, another great tool that I've been using is poe.com. You're able to go into Poe and select different language models on the left-hand side. And for those that are listening that are not paying $20 a month, like me, I'm not paying $20 a month yet for GPT-4, you can basically get one token per day for GPT-4, and so you can do some testing of your own.
Nice.
And especially if you've already crafted a prompt and you don't have to ask a lot of questions, you can send out that one well-crafted prompt and see what GPT-4 spits back. So, yeah, it's Poe, and it has a fairly decent mobile app as well that allows you to do the same thing.
Mm-hmm. So, I've been kind of checking out a few of the mobile apps that way. Bing has a pretty good mobile app as well that will actually let you talk to it. And then it will answer you back.
[00:15:26] John Nash: That's, yeah. So that's interesting. Well, what do you think, Jason, about the tools that are coming out that are going to try to catch people using these models?
[00:15:38] Jason Johnston: Yeah. It's interesting. I think the most well-known, and at least they say the most used of those, is ZeroGPT. And for those listening, you can try it for free; just look up ZeroGPT. I think it might even be, is it zerogpt.com? Yeah, that's what it is. And you can paste in some text and see if it recognizes it based on perplexity and burstiness. Right.
[00:16:07] John Nash: Yeah, that's it. I had some fun playing with this because a friend of mine posted on LinkedIn about GPT Zero, oh, and it's, yeah.
Yeah. GPT Zero, is that what we said?
[00:16:19] Jason Johnston: Yeah. GPT Zero. Oh, sorry. There are two of them. I guess there's actually GPT Zero, and then, not to confuse things, there's ZeroGPT. Yeah.
[00:16:28] John Nash: You're right. Let me see which one I used. Oh, I think I used ZeroGPT, which was put out by... hang on. Yeah. Okay. So, there's a young man named Edward Tian who got a lot of press about a month ago for building GPT Zero. He's a CS major at Princeton with a minor in journalism. And then there's also ZeroGPT. So, they're both tools. I played with Tian's tool.
[00:17:04] Jason Johnston: Yeah, that's GPT Zero. So, yeah, that's the one that looks at perplexity and burstiness.
[00:17:12] John Nash: Then that's the one I played with. Yep. And so, I went in there and I had some fun with that. I asked it to write on a topic that I'm currently interested in, which was the topic of teaching problem solving, not solving a problem. It's a subtle distinction, but it's important. Okay. First, I asked ChatGPT to try to write more human-like by writing like me, John Nash. So, I took some text from my book that got published in 2019 and I fed it into Bing, and I fed it into ChatGPT-4, and I said, talk to me about the style of this writing. How would you label this writing? What is this writing like? And both of them spit out some content that talks about what that writing is like. And then I said, okay, fine. I want you to pretend that you're a writer that writes like this: it's conversational and informative, and it encourages reflection. And then I want you to write about how college professors should teach teenagers problem solving, not for the sake of solving the problem, but for the sake of teaching problem solving. It's a subtle but important difference, right? Right. It spits out these paragraphs, and I took all that and threw it into GPT Zero, and GPT Zero said back to me that this text was likely to be written entirely by AI. Mm-hmm. So, in spite of trying to teach ChatGPT to write like me, this tool caught me and said this was written by AI. And it said that it lacked perplexity and burstiness. So I said, all right, fine, I'll just rewrite the prompt. And I said, now write about how college professors should do all this, and write it with high perplexity and high burstiness. And off it went and wrote a different one, slightly different than the first. And I guess it had more perplexity and burstiness, but the result was: your text is likely to be written entirely by AI. So that didn't work. Well, undaunted, I took the original response that it gave me from that second one.
And all I said was, fine, but you really have to increase the average perplexity score, which is a measurement of the randomness of the text, and the burstiness score, which is a measurement of the variation in perplexity. Okay. So now write it again. This is exactly what I typed, and the result was this: your text is likely to be written entirely by a human.
Hmm.
So, I thought, wow, fantastic. I won. Except I didn't, because what's interesting is that the text the detector thought was entirely human was ridiculously flowery and inflated. It was kind of like when a master's degree student thinks they're supposed to sound academic, right? And actually, in my humble opinion, the second attempt, the one it thought was all AI, actually read fairly naturally and seemed to be the short-term winner in my little experiment.
Hmm.
But the text on this one that it said was human, I mean, I would've sent it back if a student had sent it to me, just for how inflated and ridiculous it sounded.
Can I give you a flavor of how GPT Zero thought it was totally AI and where it thought it was totally human?
Yeah.
So, these first couple of sentences are about how college professors should teach teenagers problem solving, not for the sake of solving the problem, but for the sake of teaching problem solving. And this was detected as completely AI.
Hmm.
Have you ever wondered about the true purpose of education? Is it simply to train young minds to solve specific problems, or is there something more profound at play? Let's explore how college professors should approach teaching problem solving to teenagers, not for the sake of solving problems, but for the greater purpose of teaching the art of problem solving itself.
So, GPT Zero said, nope, that's totally AI. And it was right.
So, then I told it to up the perplexity and the burstiness, to make it really up there. Okay, now I have to channel my inner pompous person. So, this was, according to GPT Zero, written by a human.
The enigma of education. Is it merely a means to prime our youth for problem solving conundrums, or does it hold a more profound role? Let us delve into how college professors can illuminate the path of imparting problem solving to teenagers, not for the sake of addressing problems, but to bestow upon them the very essence of problem solving.
Yes. Human.
[00:22:08] Jason Johnston: And it called that human. Yeah. There's a problem there, because you think about how much clearer the first example was versus the second. I got lost in the words. Maybe it was the accent I got lost in, but I also got lost in the words. The first was much more direct. Yeah.
Cleaner.
[00:22:27] John Nash: Enigma, conundrum, delve, illuminate. Yeah. That was fascinating.
[00:22:35] Jason Johnston: Yeah. Those are good examples.
[00:22:36] John Nash: Yeah. I mean, you can defeat these things, but, you know, to what end? And I think for me it says a lot that the purpose of tools like this is to catch, you know, cheating, which I think is sort of a dodge. It's to catch students who are not going to do their own work. But I think really the point here is that we need to rethink the way we assign work, because I don't see any real benefit to these kinds of tools.
[00:23:01] Jason Johnston: Right? Yeah, using these tools for a gotcha moment is not, in my opinion, very instructive.
Mm-hmm. But I think certainly there is some value in students feeling motivated to write original work. Because we know that students who are feeling high anxiety, or who are stressed out by the rest of life, might just take the shortcut once or twice to try to get to their end goal. We know it's both a real threat and a real possibility for students. And so, ideally, we want them to be thinking and writing their own work. However, yeah, I don't know the usefulness of these tools either. The other thing is, and I just encourage people to try it out themselves like you did, I think that was a great example. The other thing is that it's not like the old Turnitin, where as a teacher you could use something like the Turnitin plagiarism detector, and it would detect, without question, 100%: this was copied from an article that was posted here, or from this website, or even from this paper that was turned in, yes, in 2016, at this university. You can request the original documents in those cases to be able to come up with a without-question kind of moment. And I've used those, we have used those in school, to be able to talk to people about plagiarism.
Yeah.
And it's difficult to get to those conversations with some people without hard evidence. And I heard recently about somebody using one of these AI tools without hard evidence, with somebody who they knew without question had written the work themselves. But the AI tool had come up with a false positive: it thought the text was AI when it was actually written by the human. And so, the usefulness of these tools is, yeah, as you said, hard to determine at this point.
[00:25:10] John Nash: I went and grabbed some LinkedIn posts that I've done, that I knew I had written, and threw them into the tool. And it said that parts of them were surely written by AI. And it's interesting, it'll highlight the sentences that it said were AI, which were all written by me. I will also add, I did take in a couple of paragraphs from my book. So, I took chapter one from my book and put it into the thing, and it said it was written all by a human. Yay. But what I thought was interesting, Jason, was that the burstiness and perplexity scores were of a magnitude higher than even the best perplexity and burstiness of the samples I was testing before. And so, there's something about presumably well-written human text that has these qualities.
[00:25:58] Jason Johnston: Yeah. And it's somehow more random, even though it doesn't feel that way. When I'm writing a paper, I'm mundane in some ways, you know what I mean? Like I just feel like I'm pulling up the same sentences I've done before. But there is something in the way that we craft text as humans that is much more random and has a lot more variation to it than we even give ourselves credit for.
[00:26:22] John Nash: I really should compare these two, because the pieces from my book have high perplexity and high burstiness, but they read with a flow that I thought was similar to the second example here that was AI-driven; it still had a sort of flow to it that felt natural, at least to my ear and eye. Whereas the one that passed as human, that was all written by ChatGPT, is so ridiculously inflated with its purposeful attempt to, you know, there used to be this book called The Thesaurus for Highly Intelligent People or something like that, but it's these just ridiculous synonyms and antonyms and other word choices that are really off the scale. And so, I think, yeah, it's really just sort of interesting to think about what the computer thinks burstiness and perplexity are, and that you can beat the AI models at catching that.
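[Editor's note: GPT Zero's "perplexity" is roughly how surprised a language model is by each token of a text, and "burstiness" is how much that surprise varies from sentence to sentence. Below is a toy, self-contained Python sketch of those two quantities using a simple unigram model; real detectors score text with a neural language model's token probabilities, so treat this as an illustration of the definitions only, not GPT Zero's actual method.]

```python
# Toy illustration of perplexity (average token "surprise") and burstiness
# (sentence-to-sentence variation in perplexity), using a Laplace-smoothed
# unigram model in place of a real neural language model.
import math
from collections import Counter

def unigram_logprobs(sentence, counts, total, vocab_size):
    # Smoothed log-probability of each token under the reference counts.
    return [math.log((counts[w] + 1) / (total + vocab_size))
            for w in sentence.lower().split()]

def perplexity(logprobs):
    # exp of the average negative log-likelihood per token.
    return math.exp(-sum(logprobs) / len(logprobs))

def score(text, reference):
    counts = Counter(reference.lower().split())
    total, vocab_size = sum(counts.values()), len(counts) + 1
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    ppls = [perplexity(unigram_logprobs(s, counts, total, vocab_size))
            for s in sentences]
    mean = sum(ppls) / len(ppls)
    # Burstiness here: standard deviation of per-sentence perplexity.
    burst = math.sqrt(sum((p - mean) ** 2 for p in ppls) / len(ppls))
    return mean, burst

reference = "the cat sat on the mat and the dog sat on the rug"
plain = "The cat sat on the mat. The dog sat on the rug."
flowery = "The cat sat on the mat. Enigmatic conundrums illuminate the rug."
print(score(plain, reference))    # lower perplexity, lower burstiness
print(score(flowery, reference))  # higher perplexity, higher burstiness
```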
[00:27:14] Jason Johnston: Mm-hmm. It's been really interesting serving on a couple of committees right now talking about AI here at the University of Tennessee. One of them is the pedagogy committee; one is the philosophy committee. The philosophy committee is a lot of fun because you can really go at it at that really high level: what is it we're trying to accomplish, and what are some of our concerns? One of them is simply transparency, and I think that's where I would like to see us get with AI. When we're talking about it, it's not plagiarism, but it's making text for us, writing for students. I'd like to get to the place where we create as much transparency as possible within our classes, so that we can talk about the use of AI, so that we can maybe offer some opportunities to use it within class, and then also have other opportunities where we're really clear and calculated about when we don't want AI being used, and we can talk about that too. Hopefully we get to a place within our classes where we just know when we're using it and when we're not, and in some ways maybe hope for the best for those that are using it as a shortcut.
[00:28:32] John Nash: Yeah, definitely. I think that this notion of transparency is great. I think that there's an opportunity here for teachers and professors to approach this in a way that tells us, you know, here's how we can use this in a productive way. Here's how we can bring this into our conversations. Mm-hmm. I think you pointed out to me just today that APA has come out with a stance on how you can cite ChatGPT in its use.
[00:29:01] Jason Johnston: Yeah. An official stance a couple of days ago. I think they just had enough people asking them, like: I want to be transparent about using AI. I'm going to use AI. I did use AI for this writing. How can I cite this so that I'm being transparent, so that people don't accuse me of plagiarism, or whatever the word would be if it's not plagiarism? Because you're not actually stealing it from somebody else; it's AI generated. Well, it's just cheating, I guess. Yeah, just cheating. I'm tired of that word. It's being used as a catchall for too much. But it's as if I were to hire someone to write my paper for me.
[00:29:43] Jason Johnston: It's inauthentic writing, really, in a way, when you're not being transparent about it.
It's academic dishonesty.
Yeah. A level of academic dishonesty. And so, I'm really glad that they just kind of came out with it: well, here you go, if you want to cite it, here are the different ways. It didn't say you had to; it wasn't being prescriptive. And so, we'll put the link for that in the notes as well, if people are asking that same kind of question. And I'm hoping that we're kind of moving in that direction.
[00:30:15] John Nash: You know, they also talked about how to cite ChatGPT when it is your colleague or your ally in your work. So sometimes you are talking about your use of ChatGPT as a matter of your research endeavor, and so you have ChatGPT output quoted in your paper. It tells you how to cite that appropriately in your references now too. Yeah.
[00:30:39] Jason Johnston: Yeah. I think it's helpful. Well, can I tell you, as we're kind of closing up here, about my kind of fun way that I used the different AIs to compare prompts?
[00:30:50] John Nash: Absolutely. What did you take up?
[00:30:53] Jason Johnston: Well, you know, you had created basically a PR kind of pitch sheet for our podcast, talking about what we're doing, what we're about, and so on and so forth. So, I thought, well, we've got this well-crafted pitch sheet now for our podcast. What this podcast really needs is a custom theme song, right?
Absolutely.
Then I thought, what better theme song to underlie our podcast than Eye of the Tiger? Like, talk about a second half theme song, right? That moment where you're going back into the fight of your life, and you don't know if you're going to be able to do it or not, and you need something to pump you up. And the song you need to pump you up is Eye of the Tiger, right?
[00:31:46] John Nash: I think you may be onto something here.
[00:31:48] Jason Johnston: I don't know what other song would do it for me, anyway; maybe for other people, as you think about going into the second half of life. So, my prompt was: can you write a theme song for my new podcast called Online Learning in the Second Half to the tune of Eye of the Tiger? I want it to be hopeful and energetic, just like the song that got Rocky pumped up for his comeback. This song should energize people in online learning for the second half of the game. You can base some of the content on our press release below, but please be fun and creative. Don't let the details hold you back. And then I included in the prompt our press release, or as much as I could squeeze into the limited character count. Excellent. All right. So, here are a couple of the things I got back, and you can give your response on some of these. I'll read a little bit of ChatGPT-3.5's: the verse, and then the chorus.
Rising up in the world of online learning, taking on the challenges we're facing. We know there's room for improvement and we're ready to make it more. It's not bad for a verse, right?
It's not bad.
And then the chorus: Online learning in the second half, we're going to make it more human, creative and fun.
I don't know.
Yeah. With John Nash and Jason Johnston, we'll explore the possibilities for everyone.
[00:33:12] John Nash: There was a little rhyming there. That's good. Yeah.
[00:33:14] Jason Johnston: A little rhyming. Okay. This was ChatGPT-4, and I'll just do the chorus. Actually, I'll do the first verse for this one too, because it was pretty funny.
Rising up, back in the classroom. Did our time, took our chances. Yes. Went the distance, now we're not going to stop. Just two friends with a passion to share.
[00:33:36] John Nash: Huh. I can, yeah, I'm playing the melody in my head as you go.
[00:33:42] Jason Johnston: Right? Yeah. And then the chorus.
It's the eye of the learner, it's the thrill of the screen.
Rising up to the challenge of our rivals.
I don't know who our rivals are. And then: as we dive deep into this digital scene, going to make online learning come alive.
[00:34:06] John Nash: Come alive. Rivals. We have to run with that.
[00:34:08] Jason Johnston: Yeah. And it didn't quite work with the timing, but I thought that was okay. That's definitely hopeful.
[00:34:14] John Nash: Yeah. Yeah. No, very much so. Very much so. Now I'm going to spend the rest of my day wondering who our rivals are. Yeah, that's good.
[00:34:21] Jason Johnston: Here's Bing Chat, more balanced, and I'll just do the chorus: It's the online learning in the second half, rising up to the challenge of our rivals again, and the last known survivor stalks their course in the night, and they're watching us all with the eye of the tiger.
[00:34:47] John Nash: The last known what again?
[00:34:50] Jason Johnston: The last known survivor stalks their course in the night.
Oh my gosh. So maybe this is after AI has taken over the world.
[00:34:57] John Nash: Maybe the last known teacher posts their course in the night. Yeah. Yeah, yeah, yeah. Well, I think if people have an opinion on where we ought to go with that, we might have to record it.
[00:35:13] Jason Johnston: We might have to do a theme song. Well, I'll post my Google doc with the different outputs here, and if anybody has an opinion, they can let us know. I thought that was a fun way to compare, and some of it is just to get a feel for these different large language models and what they can do. It helped me play with the tools in a low-stakes kind of way, and get a feel for how they could take a pretty specific and creative prompt and what their output shows up as.
[00:35:49] John Nash: That's hilarious. Jason, this was a lot of fun, a lot of good stuff covered today.
[00:35:54] Jason Johnston: Yeah. Good talking to you about this. And we promise to those listening, I think we already promised this, and I'm not sure we've come through on this promise yet, that we're not going to make this podcast all about AI. So, we are going to move on. One of the ways we're going to move on: next week, if you're listening to this in the week of April 10th, we'll be at OLC in Nashville on the Wednesday, doing a design thinking session about humanizing online learning. And we really want to take this podcast in that direction. So, after that session, we'll have lots more ideas, I'm sure, about what people are thinking about and where they want to go next.
[00:36:34] John Nash: Yeah. That session is going to generate a lot of ideas. And then we're going to have time at OLC in Nashville to also talk to people. While we're there, we'll have our microphones, and hopefully we can grab some folks and get to talk more about what's on their minds.
[00:36:48] Jason Johnston: So, if you're going to be there and you want to talk with us, just shoot us a message; LinkedIn is probably the best place to reach us, and we'll let you know the secret room and time to be there. It would be great to meet some folks and talk with you. Great.
[00:37:06] John Nash: Yeah. This is good.
[00:37:08] Jason Johnston: Yeah. Can I leave us with some words from,
Oh, absolutely.
Okay. Here's some words, a theme song from Bard, some words to take us out on. To the tune of Eye of the Tiger, of course: Online learning, it's not a one-size-fits-all solution. It requires us to think critically and creatively about how we design, deliver, and assess learning.
That's supposed to be the second line; I don't think it would sing very well. Experiences that meet the needs and interests of diverse learners. We want to keep up with the changes and share them with you. I think it captures our heart.
[00:37:46] John Nash: I think it does. It captures our heart. It did a terrible job of putting it to the music, though.
[00:37:50] Jason Johnston: Yeah, yeah. But it does capture our heart for this podcast and for all of you. So, we hope you keep on listening, and please connect with us online at onlinelearningpodcast.com. Find us on LinkedIn under the same name, and we hope to connect with you and hear about the kinds of things that you want to talk about on this podcast.
[00:38:09] John Nash: Excellent. Keep the eye of the tiger going.
[00:38:12] Jason Johnston: Keep the eye of the tiger going, folks.
[Eye of the Tiger in chiptune style plays as the outro]
