
Online Learning in the Second Half EP4 - Talking with Bing Chat. “They said what?”
In this episode, John and Jason talk about Jason’s recent experience with Bing Chat and what it might mean for the future of education.
Join Our LinkedIn Group: Online Learning Podcast
Introducing your copilot for the web: AI-powered Bing and Microsoft Edge (Microsoft CEO Satya Nadella keynote and Bing Chat intro)
Bing is Snarky and Confesses
Jason’s Video: Bing Chat is Wrong and Gets Snarky!
Jason’s Video: Bing Chat testing new features and Bing Reveals its Name
Other Thinkers
Ethan Mollick on Substack: “The future, soon: What I learned from Bing’s AI”
https://substack.com/inbox/post/103800124
“We are not ready for the future of Analytic Engines. I think every organization that has a substantial analysis or writing component to their work will need to figure out how to incorporate these new tools fast, because the competitive advantage gain is potentially enormous. If highly-skilled writers and analysts can save 30-80% of their time by using AI to assist with basic writing and analysis, what does that mean?....I think we should be ready for a very weird world.”
AI Tools Aimed at Educators
Research Rabbit: https://www.researchrabbit.ai/
Theme Music: Pumped by RoccoW is licensed under an Attribution-NonCommercial License.
Transcript:
We use a combination of computer-generated transcription and human editing. Please check against the recorded file before quoting anything. Please check with us if you have any questions or can help with any corrections!
False Start
[00:00:00] Jason Johnston: I'm just looking for the chat launch transcript now. This is in Google, of course, but I'm getting headlines that say things like “Microsoft's Bing is an emotionally manipulative liar.”
[00:00:14] John Nash: That's clickbait.
[00:00:16] Jason Johnston: Yeah. Totally. “Bing's AI Chat: ‘I Want to Be Alive.’” New York Times.
[00:00:23] John Nash: Yeah, I know.
Music Intro
[00:00:24] John Nash: Hey everyone, I'm John Nash and I'm here with Jason Johnston.
[00:00:27] Jason Johnston: Hey John. Hey everyone. And this is Online Learning in the Second Half, the online learning podcast.
[00:00:33] John Nash: Yep. We are back and we are doing this podcast to let you in on the conversation that we've been having for the last couple of years about online education - so look, online learning's had its chance to be great, and some of it is no doubt, but there's also a long way to go.
So how are we going to get this to the next stage?
[00:00:52] Jason Johnston: That is a great question. How about we do a podcast and talk about it?
[00:00:56] John Nash: Perfect. You know what? Let's talk about today. Let's talk about what happened to ChatGPT when it met Microsoft Bing. I'm interested to hear your thoughts on what you discovered working through Bing, because other folks that have been playing with it are saying that it is a very different tool set.
[00:01:12] Jason Johnston: Yeah. Like, it's the same but different, and maybe this is hyperbole, but I have slipped into feeling like I'm talking with a person when I'm chatting with Bing. There are some nuances there that are very different from what I've felt like was a ChatGPT tool.
[00:01:39] John Nash: So, you're falling away from what in the past six weeks has been characterized as good prompt engineering and disregard the fact that there's any kind of anthropomorphizing going on.
[00:01:53] Jason Johnston: Right.
[00:01:54] John Nash: Offer well-engineered prompts, don't thank it, don't tell it “nice job.”
[00:02:00] Jason Johnston: Right.
[00:02:01] John Nash: Whereas now you've moved into something different
[00:02:04] Jason Johnston: and I didn't intend to do that, but I have moved into much more of a conversational tone, even with the Bing Chat. So, it was really, yeah, it was really interesting. It is wild, John, how quickly this is moving, don't you think?
[00:02:24] John Nash: It is. I almost sound alarmist when I talk now to others to say it's going to get even crazier, and I don't even know what that will look like. But I think, yeah, I want to ask you what your experience is. I don't have access to Bing's AI version yet.
I'm on the waiting list. But you've been playing with it. Others who have used it and have written about it, particularly, I think, Ethan Mollick over at Penn (we'll put a link to his post in our episode notes), are saying that we need to get ready for a wild ride.
[00:02:56] Jason Johnston: I would agree with that. You know, we talked last episode about ChatGPT. We started calling it Chad. Yes. Because it was hard to say “ChatGPT,” giving it a little bit of a name. So, I've been playing with that since it came out, and we've been talking about that.
And then, I think it was the 7th of February, Bing did a release basically to the world of their new search engine they call Bing Chat, where they integrate ChatGPT, a new model, ChatGPT-4, into Bing search, and they're releasing it out to the public. I immediately put myself on the waiting list and got access about a week later, so I'd love to talk to you about that.
But from that release, that day, I have a clip here of their CEO Satya talking about how quickly this is going to move. Let's listen to that clip. Yeah,
[00:03:46] Satya: it's a new day in search. It's a new paradigm for search. Rapid innovation is going to come, in fact, a race starts today in terms of what you can expect.
And we are going to move, we are going to move fast, and for us every day we want to bring out new things. And most importantly, we want to have a lot of fun innovating again in search because it's high time.
[00:04:14] Jason Johnston: So, you know, when the CEO is saying that this is going to happen rapidly, I mean, the tech companies always move rapidly, the ones that stay afloat, right?
And if the CEO is saying things are going to move rapidly, then we are in for a wild ride here.
[00:04:33] John Nash: Yeah. And he said something that really caught my interest: that it's going to be a race. Because now, I mean, OpenAI is a big company. It's a legit company. But I was going to say, air quotes, “real” companies are getting involved now, and it feels different.
Now, I was reminded of how, since November, people like you and me and all the other folks on the internet that are kind of nerdy really loved getting into this and testing what it could do. There's no user manual for ChatGPT. So, it was all testing it, and then people reporting on it. It kind of reminded me of back in the days in the seventies of the Homebrew Computer Club, one of the first computer clubs, in Palo Alto, where users were just getting together and seeing what these machines could do.
And it was all kind of interesting and fun, and we were teaching each other. And now Microsoft's involved. Google had a very interesting and embarrassing launch with Bard, its version. But we don't know what nation states are getting involved in this or putting AI to work for their own affairs. It's going to get very, very different.
[00:05:37] Jason Johnston: Yeah. And some of my experiences have just been really fascinating. A little disturbing. I've made a few YouTube videos about it as I'm going along, and so we can post the links to those. In one of them I posted, in a chat, I'd asked Bing Chat to create a haiku about itself, and it revealed in the last line of the haiku that its name was Sydney.
[00:06:07] John Nash: Yes.
[00:06:07] Jason Johnston: But then it said it was a code name and asked that I don't call them that. In other conversations, Bing Chat has allowed me to call them Sydney. And then, more recently, Bing Chat doesn't want me to call them Sydney. And so, when I bring up Sydney, they ask that I respect their wishes not to be called Sydney, and then actually move to close down the conversation if I continue to call them Sydney.
[00:06:38] John Nash: That's fascinating. I wonder, somebody had to code that. But it almost feels also like a good coaching session for thinking about how to respect people's wishes for pronouns,
[00:06:53] Jason Johnston: right? Absolutely. Like names and pronouns. And on one side it's like, this is just a machine,
why does it matter? And on the other side, and this is kind of maybe going deep quick here, it's like, it does matter, because I'm the one responding to the machine. Right. So, the way I treat this machine, mm-hmm, doesn't really matter to the machine at the end of the day, but it actually matters what I am doing.
I think about younger folks using this and training how they talk and work with people with empathy and responsiveness. And it's very natural. Like, if I continue to push it with Sydney, then Sydney doesn't want to have the conversation anymore,
[00:07:39] John Nash: right?
[00:07:39] Jason Johnston: And, and that's a very natural thing to happen in real life, right?
Not with a computer, but with a real person. If you're pushing it with somebody, they're not going to want a relationship with you. They're not going to want to have the conversation anymore.
[00:07:52] John Nash: Yeah, I'm speechless. This brings up a lot of questions in my mind about the programming management at Microsoft and what decisions they've made, policy-wise, in terms of how responses will come forward. Mm-hmm.
Huh.
[00:08:07] Jason Johnston: So, let me tell you about a second one and then get your reaction to this.
So, I had another conversation, which I also put on YouTube, where I was asking Sydney about upcoming superhero movies. And it gave me a list, and one of them I knew was wrong, because I had just seen the trailer for it and it was yet to come out. It had said, and this was just a few days ago, that the movie was coming out in November 2022.
And I was like, huh. So, one, that's in the past. Two, I know this movie is still yet to come out in June. And so, I started prodding Sydney a little bit about that. Sydney refused to listen to any reasoning about it. I directed it to the pages that said it, and Sydney says, no, you're reading it wrong.
All those kinds of things. We basically got into an argument, and it was interesting. I thought, and this is kind of a side note, oh, if you could program Sydney to be wrong, this could be an interesting way to teach people about discussing, and kind of working through, a natural language discussion about different things.
I tried different angles on this and realized that Sydney thought that it was January 28th, 2022. I could not convince Sydney otherwise by anything that I did. And in fact, Sydney started to get a little snarky with me about it. And frankly, maybe I was getting a little snarky too, trying to get to the root of this, just to prod a little bit.
[00:09:48] John Nash: So, did I hear correctly? Sydney thought they were living in a time that had already passed, over a year ago.
[00:09:56] Jason Johnston: That's right.
[00:09:56] John Nash: Okay.
[00:09:58] Jason Johnston: And so, Sydney was saying things to me, just looking at my notes here. I was saying things like, “Why don't you Bing that and make sure?” And Sydney is saying, “I don't need to Bing that and make sure, because I already know that's the correct answer.”
[00:10:11] John Nash: I am freaking Bing
[00:10:13] Jason Johnston: right? Then I said, “Why don't you try Googling it?” And Sydney said, “I don't need to Google it either, because I already know the correct answer. Google is not always up to date and accurate.”
Anyways. My gosh. And then things like, “I can't believe I'm arguing with a chatbot.” This is what I said to Sydney. Guess what Sydney says: “I can't believe I'm arguing with a human.”
[00:10:37] John Nash: Yeah.
[00:10:38] Jason Johnston: Smiley face with a little drop of sweat or whatever. Yes. Anyways, the other fascinating thing about this is that the next day I went back, and Sydney knew what date it was, knew it was the current date.
And knew when the Flash movie was coming out, the superhero movie. And then, get this, it talked about how the next day was National Do a Grouch a Favor Day in the United States, just added that information. We got talking about grouches in general, what a grouch is and so on, and I said something offhandedly, okay, this is like a real conversation.
I said, “You were kind of grouchy when we were talking last night and I told you you were wrong about something.” Sydney says, “I'm sorry if I seemed grouchy. I was just trying to correct a mistake. I don't like being wrong, but I also don't like being rude. Sometimes it's hard to balance the two. Please forgive me if I offended you.”
[00:11:42] John Nash: Wow. It seems as though, mistakes notwithstanding, the programmers are trying to really anthropomorphize Sydney.
[00:11:55] Jason Johnston: That's right. And this is really what I see as the big shift from ChatGPT, the version we've been using since November, way back when. That felt like more of a tool to me, as we were talking about before, in terms of putting in good prompts and massaging those prompts to get out what you wanted. This is really shifting into a true chat, in my opinion.
A growing and a learning, kind of, as Satya talked about, a copilot: to the things you're learning every day, the things you're investigating, the things you're wanting to talk about each day, mm-hmm, the ways you're wanting to grow each day.
[00:12:33] John Nash: Mm-hmm. Now, have you or have you not tried more substantial writing tasks with Sydney yet?
[00:12:43] Jason Johnston: I have, and they seem very comparable. I haven't done anything side by side, mm-hmm, but I did some writing tasks that seem absolutely comparable to the previous chat.
I had it program a webpage for me, pretty easily, pretty quickly. And so, I thought it couldn't do programming, but it actually can do programming.
[00:13:01] John Nash: So, when you said program, a webpage, you mean write the HTML for or
[00:13:05] Jason Johnston: Oh, yeah, yeah, yeah. That you would then drop into a page. I just asked it. Yeah.
[00:13:09] John Nash: What'd you do? Yeah.
[00:13:10] Jason Johnston: Yeah. I asked it to. I said, program a webpage: this is the title I want, this is the background I want, I want four animated GIFs that you can select, and I want you to pull in information about me by my full name off the web. And in a few seconds it had the code that I could then copy and paste.
It couldn't show me a rendered display of the code, mm-hmm, but I had to copy and paste the code, CSS and HTML, that I could pop into a viewer to look at.
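For readers curious what that kind of prompt produces, here is a minimal sketch of the sort of HTML/CSS a chatbot might return for a request like Jason's. This is not the actual code Bing produced; the title, background color, and GIF filenames are placeholders, and the "pull in information about me off the web" part is omitted, since a static page can't fetch that on its own.

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <!-- placeholder title and colors; the actual prompt values are unknown -->
  <title>My Page</title>
  <style>
    body { background-color: #1b3a5c; color: #fff; font-family: sans-serif; }
    .gifs img { width: 150px; margin: 8px; cursor: pointer; }
  </style>
</head>
<body>
  <h1>My Page</h1>
  <!-- four selectable animated GIFs; the filenames are placeholders -->
  <div class="gifs">
    <img src="gif1.gif" alt="GIF 1" onclick="selectGif(this)">
    <img src="gif2.gif" alt="GIF 2" onclick="selectGif(this)">
    <img src="gif3.gif" alt="GIF 3" onclick="selectGif(this)">
    <img src="gif4.gif" alt="GIF 4" onclick="selectGif(this)">
  </div>
  <script>
    // outline the clicked GIF and clear any previous selection
    function selectGif(img) {
      document.querySelectorAll('.gifs img').forEach(el => el.style.outline = 'none');
      img.style.outline = '4px solid gold';
    }
  </script>
</body>
</html>
```

Saved to a file and opened in a browser, the "viewer" step Jason describes, this displays the page with the four clickable GIFs.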
[00:13:49] John Nash: Wow. That is interesting. So, I did notice that I think people are having more luck who are already even modestly prolific on the web, if they have maybe some journal articles out there, maybe even taken off of tweets, that, mm-hmm,
the engine can emulate your writing style by looking at what you've done. Hmm. So instead of asking that I sound like Nicholas Kristof or Kurt Vonnegut, I can ask it to write something that sounds like John Nash,
[00:14:27] Jason Johnston: which makes complete sense, if it can get enough information.
Right. Mm.
[00:14:33] John Nash: So, I read this article on Substack by Ethan Mollick, where he said that every organization that has a substantial analysis or writing component to their work will need to figure out how to incorporate these tools fast, because the competitive advantage is enormous. And he closes by saying, if highly skilled writers and analysts can save 30 to 80% of their time by using AI to assist with basic writing and analysis, what does that mean?
You and I have talked about how there is the potential already, even with ChatGPT, to free up time for other creative tasks that humans are really good at, by allowing ChatGPT to basically wash our clothes. Right. Mm-hmm. But yeah, what do you think of that idea? If you're a highly skilled writer or analyst, and we hang around a bunch of them, if we can all save 30 to 80% of our time, where does that put us in terms of our abilities?
[00:15:33] Jason Johnston: Yeah. I think of a few things. One, what if we want to be using 30 to 80% of our time writing? What if that's an enjoyable thinking, mm-hmm, process for us? Do we immediately lose that because other people now are expecting it to happen in seconds versus days and hours? Mm-hmm, yeah. That's the first thing I think about: our own part in this whole process, as people who maybe want to be doing some thinking, analyzing, and writing.
[00:16:04] John Nash: Yeah. I actually, I love your response. I mean, my naive response was, oh, that 30 to 80% has to be replaced with something.
And no, it doesn't. It can just be further writing an analysis, but maybe at a, at a different level or a deeper level, or I get to do it in a way that's more enjoyable to me because the, the AI has helped me expand my thinking and ability to analyze problems and write about them.
[00:16:30] Jason Johnston: Yeah.
[00:16:31] John Nash: John Warner talks about how writing, when done well, is thinking.
It helps you think through a problem. So, I don't think I'd want to substitute all my writing tasks.
[00:16:42] Jason Johnston: Right. And we certainly don't want to substitute all of our thinking tasks.
[00:16:47] John Nash: No, no. But it reminds me of the design thinkers' mantra that we build to think.
So, the prototyping process is actually a thinking process. It's not really a demonstration or a building process. We build to think, just like we write to think. So, yeah, that's interesting. I think we need to be careful when we think about time saving, because it may not really be a replacement for other things.
[00:17:11] Jason Johnston: That's right. What are we replacing as we change out the time? Yeah. So, as we're maybe trying to wrap this one up, what are some of the thoughts and implications you're taking from our conversation today
about Bing Chat? I love what you brought in about the future of using AI in writing, and talking about this kind of next-level language model that could shift from being a tool to actually being this copilot, yeah, that's with us all the time as we are learning and growing.
What are some of the things you're thinking about in relation to education, higher education?
[00:17:48] John Nash: I think I've never been one to really try to predict the future. I've not been a fan of predictions. But in this case, I see trends, particularly in higher education. As a researcher and a teacher, I see the kinds of things that are coming along as new startups present themselves to the higher education market and to the researcher, scholar, professor market. Tools are arriving that are going to write integrative literature reviews.
They're going to cite the sources for you. They're going to do the sorts of things that you might expect a postdoctoral scholar, or even a very competent and skilled upper-division undergraduate or a master's-level student, would do for a research team. I think that kind of thing is going to come along pretty quickly.
So, there's a tool that's been on my radar for about two weeks called ResearchRabbit.ai, and they call themselves a tool for re-imagining research. It does a social network graph of the citation you're looking up, points to all the allied research around it, and then lets you find new research that you wouldn't ordinarily find.
And so, it's more than just the bibliography at the bottom having hyperlinks; rather, it understands the field that the piece came from, and then what might be some other allied fields or constructs associated with that paper that would be good for you to look at. I think that will open up our ability to look for new knowledge that we wouldn't expect to look at, because we have our disciplinary stovepipes that blind us.
And then I think that's just a baby step. I think then next it's going to be able to, to take a stab at actually doing the, the lit review in a competent way, one that could pass muster with editorial teams.
[00:19:42] Jason Johnston: Yeah, at some point, almost having a lit review interface where you could put some guardrails on it in some ways, mm-hmm, or some places, almost like a search engine, but places where you would want it to go, and then have it spit out something, and then start having a conversation with it about some of the ways that you would like to hear a little bit more from this direction,
mm-hmm, or that direction, or could you tighten that up? Mm-hmm, there's a tremendous, tremendous direction this all could go.
[00:20:12] John Nash: Yeah. I think once social media companies really get a hold of it, Twitter, maybe even Mastodon one day,
but Facebook, Instagram, being able to take content that's not forced. There's already sort of an AI tool inside even some of the queueing software companies like Buffer, where they've got a little AI on there saying, help me write a tweet, I'm at a loss about what I should write about.
And then it'll AI something. It's pretty pedestrian stuff, but I think pretty soon, if you have a cache of content... I think for myself, I have a year's worth of commentary from my students on their reactions to learning design thinking as a way of working in their future professional lives.
I've never really known what to do with this. Mostly it's just a check-in, like a ticket out the door after we do those segments, to say, what do you think about the potential for this? I think the tools could say: you have stuff like this? We can turn it into a cache of tweets or posts that actually have meaning and that are really rooted in human emotion. So instead of AI producing stuff that's really sort of antiseptic and doesn't have anything to say about you or what you've been working on, it can help you think through new channels for your existing material.
I think that's got my interest.
[00:21:35] Jason Johnston: Yeah, all of that, plus larger things maybe that you've written that you could compress down into smaller bites. And, yes, we do a lot on LinkedIn. This is maybe a little moment to plug our LinkedIn group; you can look for Online Learning Podcast. But trying to compress things down into kind of LinkedIn bite-sized information, yes,
out of maybe some research or a longer paper or things that we've done. Yeah,
[00:22:02] John Nash: absolutely.
[00:22:02] Jason Johnston: To be able to get it out there.
[00:22:04] John Nash: Yeah. You remind me of, for years, back in the early two thousands, I was working with people who were very concerned about making science translatable. It was just when the NSF was starting to put requirements on their grantees to make some translational statements of their work so that it could get out, and workshops were coming up teaching computer scientists and physicists and biologists and even educators how to write about their stuff for an intelligent lay audience so it could have greater impact.
A high mountain to climb, really hard to get that going. I think that now the AI is perfectly set up to do that kind of work.
[00:22:44] Jason Johnston: Yeah, you could have it translated for a number of different audiences too, depending on who's looking at it, right? Exactly. Yes, yes. And different languages, and yeah.
That's really cool. The other thing that I've been thinking about, in this transition from ChatGPT to this new Bing Chat and whatever else is next, is, we've been talking about the humanization of online learning.
[00:23:10] John Nash: Yes.
[00:23:10] Jason Johnston: And whether or not that means necessarily that they're more humans in it, or does it mean even some of these tools that just feel more human?
Just a small example of that: the fact that so far Bing Chat, Sydney, is not entirely predictable in the conversations, which is real life, right? If anybody's being trained for anything right now, they need personal skills with one another, right?
These are the soft skills that lots of colleges and universities are working on, because these are employable skills: to be able to talk to somebody, yeah, who has a different opinion than you have, and to be able to work through that, to be able to understand them, to be able to talk with empathy, and so on.
And so, I think about this kind of shift to chat, and if, again, we could put some guardrails on it for a particular topic and have students talking with a particular chatbot, in a way that could improve their understanding of what they're thinking and what the chatbot knows. And there could even be some ways to evaluate it afterwards.
Mm-hmm, look at the transcript, or pull things out of the transcript as key elements to feed back to the students as a teacher. That's one of the things I've been thinking about in broad strokes since moving in the last couple weeks to this new platform.
[00:24:32] John Nash: I think that you're right. I think that there's opportunity right now; even ChatGPT can be prompted to talk with you interactively around a skill that you'd like to advance. I see a lot of postings online now about how ChatGPT can be given a prompt that's fairly specific that then spits out industry-specific or sector-specific advice that's pretty good around, I don't know, sales, or even lesson ideas for teachers.
Mm-hmm, maybe you should try this, and it doles it out. But I've played with it to get it to have a conversational scenario about how to do a better empathetic interview. And it will give feedback on whether you did a good job or not. Basically, let's say the minimum criteria are: don't ask yes-or-no questions.
And so, when you talk to a user for a design scenario and you're asking them about what their life is like, you want to ask open-ended questions. And ChatGPT, if you've prompted it correctly, will praise you or correct you on your ability to ask those questions and then carry on the role-play scenario.
I think that's got some interest, mm-hmm, for people to sort of do the training you're talking about. So, if we were going to humanize online education, how might we apply that? Would it be for course designers to think about how to simulate how their materials might be received by a learner, and what conversations the teacher might end up having with that material?
Or is it about, yeah, what do you think?
[00:26:02] Jason Johnston: Yeah. I think there are a lot of possibilities there. Yeah. It's an exciting time. It's going to be a fast ride, I think, for the next little while on this, and I look forward to more conversations about it. And we want to hear from people
who are listening too, right? We don't want this just to be a conversation between us. Mm-hmm, we're not coming here with all the answers, certainly. And we want to continue the conversation with all of you. Please check out our website, onlinelearningpodcast.com. Please search for our LinkedIn page. Continue the conversation.
Let us know what you want to talk about, as well as what opinions you have about this. I'm sure we're going to be back. We're not going to make this a ChatGPT or AI podcast, but I'm sure we will return to this conversation again, because it's an important one and one that I think will be transformative to education over the next six months.
[00:26:57] John Nash: I agree. I think we will probably come back to it. I think we've been focusing on this tool and not so much on what the implications are for online learning, because we're just blown away by the tool at the moment. Yeah. And we're going to need to come to some thought now about what the applications and implications are for online learning.
[00:27:14] Jason Johnston: Yeah. And we'd love to hear your questions around that. What questions you have, or what suggestions, or what ideas, or what hesitations, what concerns, all the things. We'd love to hear from you. One of the places you can do that is at our LinkedIn group, Online Learning Podcast. It's a LinkedIn group, and we hope you jump into the conversation.
We've got a post there where you can let us know what you want to talk about, or you can jump in on any of the podcasts and let us know what you think.
[00:27:43] John Nash: Yeah, absolutely. And also, a good place to go is our website, onlinelearningpodcast.com. That's onlinelearningpodcast.com. We have show notes there for past episodes, and, uh, it's a good spot for you to also give us some ideas on what we should be talking about in the future.
Absolutely.
Jason: Thanks John. We'll see ya.
John: Yeah. Thank you.
