Speaker 2
The honest answer is that we're all still a little shell-shocked in some ways. I think everyone's still kind of digesting this thing, because it came somewhat out of left field, right? Like, for me at least, I didn't have an OpenAI account before ChatGPT came online. Same. I had to, like, pretend to not be a journalist so I could get in. Thanks, OpenAI.
Speaker 2
started engaging with this thing and had this kind of holy-smokes moment where, like, we are at a different place technologically today than we were, as far as I'm concerned, a week ago. I think digesting that and figuring out the ramifications is a tall order. What I will say, though, what's really, really important, is that ChatGPT and AI generally, these are still derivative technologies, in the sense that their quality depends on the underlying data, and the underlying is still human output for all intents and purposes, right? And so understanding exactly what bias and what erroneous information is built into the kind of core of human knowledge that this AI is regurgitating is really important, because it's kind of reinforcing that. And this is kind of Rob's point. Like, I was engaging with ChatGPT on subject matter that I knew nothing about, and I found it to be very illuminating. I was going down rabbit holes and really engaging with it. It was really, really good. Then I engaged with it on subject matter that I knew a lot about. And when I engaged with it on that subject matter, I found a lot of mistakes. And that made me nervous.
Speaker 3
It's also worth noting that, like, it is not the core of human knowledge. It is everything that is on the internet. And everything that is on the internet is, for example, overrepresented with answers to Stack Overflow questions. So it leans heavy to tech stuff, and you can get it to do tech stuff very well. But it doesn't do a lot of other stuff very well, because it's whatever the hell it ate on the internet. And to your point, there's not only bias, but there's, like, the data, garbage in, garbage out, I suppose, is worth mentioning.
Speaker 1
Yeah, you know, it is telling that there have been reports that Quora is exploring some sort of ChatGPT integration, which, if you look at places on the internet where everyone just loads up and provides air-quotes "answers" to questions, kind of speaks to that. Is there anything telling here, where it seems like the one area where there's a high level of accuracy is programming and coding? Is there anything to that, either in terms of that's the biggest data set in the world to leverage, so it's just more accurate, or it's easier for it to do that? Like, are we going to go from "learn to code" to "learn to write," because coding was replaced by chatbots? I should also note that it seems like ChatGPT right now can very effectively generate contracts, mostly because if you want to be consistent there, you just copy the standard language.
Speaker 3
Yeah, lawyers should be more scared. When you have a deterministic question with a deterministic answer, it's a fact-based, binary, true/false thing. Aside from the data set being bigger, and aside from, yes, all developers these days just copy and paste from Google, and/or use, like, Copilot or whatever that thing is.
Speaker 1
It's not copy and paste if it's right.
Speaker 3
So, I'm like, those are deterministic, fact-based answers with large data sets. I think that's easier. And I think what we've seen across all industries before AI came along: software, law, all these industries are being commoditized from the bottom up. They're being eaten from the bottom up. No-code is an example in software, before AI. So it seems like a natural next step for these things. But there is a ceiling where human judgment comes in, and opinion. And that's the whatever magic: if AI can get to that point, that probably becomes generalized intelligence. And I don't know if it can actually get there, but it can certainly be scary on the