Speaker 1
You will have a situation where it's not just projecting agency onto something inanimate, but onto something that really does seem to have agency. And people will organize around that. I think there will be religious sects that have a model trained specifically for them, and they'll go to it for counsel and let it determine their ethics.
Speaker 3
We're talking some more about AI. Samuel Hammond is the director of social policy at the Niskanen Center and author of the Substack Second Best. Zohar Atkins is the Substack rabbi: he hosts the truly fantastic podcast Meditations with Zohar, writes a philosophy Substack called What Is Called Thinking, and does weekly Torah analysis at Etz Hasadeh. Welcome to ChinaTalk, which I guess is turning into AI Talk. I'm totally fine with that. Hi, Sam and Zohar.
Speaker 3
Appreciate the enthusiasm. So, at 9 a.m. on December 16th, Zohar, what are the questions we should be asking about AI?
Speaker 2
I come to this question not really as a trend forecaster or somebody who's worried about the future, but as somebody who's interested in anthropology and the opportunity that AI affords us to reflect on what makes us different from robots. As AI becomes more sophisticated and more capable of passing the Turing test, it reveals the extent to which, most of the time, our thoughts, feelings, and scripts could basically be written by GPT, if not written better, by a robot. And I think that is distressing. A lot of people have taken to this point, and Nate Silver said that one of the upsides of GPT is that it will reveal how mediocre we are. I guess that is a new problem in a way, but it's also an old problem, because Plato in the Phaedrus, or Socrates in the Phaedrus, was worried that writing itself would allow human beings to simulate knowledge rather than to actually have it. And I kind of agree. I think that one of the features of technology is that we can pretend to know when in fact we don't. And so asking what it is that we can know that a computer can't know is probably the most important question right now. When we look at GPT and it gives us an answer to a question and seems to have knowledge, what does that mean for us when we give our own answers?