Speaker 1
Not listening to economists; that's how the North Korean economy got organized. It's politics, it's politicians. That's what politicians in North Korea decided to do; that's what they decided was the best economic system. It's not some high technician from OpenAI or anywhere else telling them the wrong things. It's technologists who drive technology forward, but economics drives things forward as well. It's just that some people consider it obvious. They say, oh, it's obvious, economics is telling us: try and regulate so there is no exploitation. Have a good minimum wage so that workers can work there happily. Create good conditions at work. Have people invest their money through a good stock market and a good financial system, and then the economy will grow. It's not that simple. I mean, it's true, that's what I just said, but only because I know the economics could I say it so quickly.
Speaker 2
So, given that we should be skeptical about what technologists are saying to us, what advice would you give to people thinking, well, hold on a moment, I don't have to trust the economists in the US or the UK or North Korea, or the technologists? What advice would you give to people who might be nervous about their jobs, asking, okay, how can I prepare for this transition that's coming?
Speaker 1
What we have here is a question of what the engineers and the new technology can do. Those who give those bad messages are assuming that we're going to push the technology to its limits in the bad direction. It's basically the point that came up in the panel discussion I had with Hasimovu, when we were talking between ourselves: a lot of the things we see now are probably things that have been pushed in the wrong direction. Forget about AI for a moment, I'll come back to it, and think of nuclear weapons, for example. When nuclear weapons first came in the 1950s, we thought life was going to end. The famous Bertrand Russell line, better Red than dead: let's not develop nuclear weapons, and let's all go along with the Soviets, because they were the biggest at that time. But it didn't happen, because we showed some human intelligence. Of course, not 100% intelligence; there's a lot of stupidity involved, because we produce them and put them in the ground, and after a few years we bury them in the ground and produce more and dig new holes and put them in the ground, and so on and so forth. Why wouldn't the same thing happen with AI? It can do many things; it can do many horrible things to destroy us. It can do many things that would be good only for a small fraction of the population and terrible for the rest, like taking our jobs away, becoming more productive, working 24/7, making some people extremely rich and making everyone else poor. Or it could be developed in a direction where it is used collaboratively. Now, what would I tell the workers? That's another one of those questions that takes about 20 seconds to ask and requires 20 hours to answer.
Speaker 2
Because... that's why economists don't get put on TV. Because maybe, maybe.
Speaker 1
Don't provoke me again to come back to that topic. First of all, employers actually have to be very careful. The onus now is on employers to develop it in the right direction. Take what you see from the technology now and apply it within your company after you talk to your workers. And if you do that as an employer, then your workers will not fear losing their jobs or not knowing what to do, because they know they can rely on you as their employer to give them the right training to move on. Now, in situations where the employer is not telling them anything, where suddenly new machinery comes in that threatens to take their job, or takes all the interest and thinking out of their job so that they are doing nothing other than watching the machine work, what they need to do is look carefully at what skills are in demand, in which sectors of the economy and doing what, and try to upskill in that direction. For example, the hospitality sector, the medical sector. If you are a young person going into university, for example, and you are really worried that technology might take your job away, then think of jobs in those kinds of sectors and you'll be guaranteed a job, because those involve person-to-person service, and we will never accept a machine giving us that service.
Speaker 2
One of the interesting things that I try to touch on in the book is how we've responded to a slightly earlier technology: social media. It would seem that we've been a little bit slow in responding to how social media changed us culturally and changed our relations with one another. It didn't change work so much, but it's perhaps changed people's ability to be educated, you might say, in the way it brought about anxiety, and it's changed things politically in terms of people's trust in politics. Do you think that we'll be able to respond better to AI because of our experiences of social media, or do you still think we're quite vulnerable in terms of truth?
Speaker 1
I think, actually... I mean, I'm not a big user of social media, partly because it's not what I do, or at least not what I used to do, in my research. And it's a question of time, you know. If I were going to get into using social media, I would spend a lot of time making sure that I do it in a way that's right by my own definition, which is time-consuming. It's not that I have anything against social media as such. What social media have achieved is to increase transparency and the communication of what people think and how they feel. Of course, there is a lot of fake news, there is a lot of misinformation, and sometimes sorting good information from bad information is so difficult that in the end things get mixed up and messed up. But there is a lot more consumer awareness now, for example, of what's going on in the market and what kinds of products are on offer. If there is a product that has made you ill, for example, or you want to know what was in it, and you put it up on social media, the company will respond much more quickly to tell you what was going on: maybe that's not the reason the product made you feel like that, whatever. Overall, if you read how the consumer market has been changing in recent years, especially in the food industry but also elsewhere, and in education, you know, what are you learning and why and all that, then there are good developments, in that there is a lot more demand for information.
Speaker 2
But is that information going to be trustworthy in an age of generative AI? And it was interesting hearing you talk about the use of time, because in one sense you are perhaps far more disciplined with time than my children might be, in terms of the number of hours that get thrown at social media. Is that necessarily something that would lead to productivity, to put it in economists' terms? So is AI actually going to make things worse, feeding this constant desire for more information and data, but not necessarily building a world where we can trust what we are reading and hearing, and therefore building a slightly more cynical, one might say, unproductive economic space that is not going to be particularly beneficial or lead to a kind of human flourishing, as it were?
Speaker 1
No, that's such a negative outcome that somehow I don't think it will happen. It could happen in small pockets; some politicians, or some specialists, will try to manipulate it to the extent that they can. But I think social media, in the way you describe it, conforms to the principle that you can fool some of the people some of the time, but not all of the people all of the time. Sooner or later we find out. In fact, the best contribution of AI in this connection would be if something like a generative language model could differentiate between what looks like trustworthy information and what looks completely fake.
Speaker 2
But that would be AI kind of marking its own homework, though, wouldn't it?
Speaker 1
No, I mean, what I'm saying is that if I want to know something about a food item, for example, then I could ask the AI: tell me about this food item, what is the trustworthy information about its effects on my health? And then it would just give you the answer. Of course, currently ChatGPT occasionally does exactly the opposite, with its hallucinations.
Speaker 2
It would tell you that you could eat stones and that you should put glue on your pizza.
Speaker 1
Yes, sometimes it presents something that is completely false and fake as if it were the truth, and it's so convincing that people believe it. But it's improving. I mean, it's a new technology; it's improving.
Speaker 2
So would you describe yourself as an AI optimist, where you think it's going to be generally, overall, good, or a pessimist, where you think, well, it's just going to make everything really terrible, or somewhere in between, thinking it's not going to change much at all? Or do you think it's going to kill everybody? I mean, there are those who talk about these god-like systems that will probably just decide that the most economic thing to do would be to shut down all humans.
Speaker 1
Well, we managed not to kill ourselves with nuclear weapons, so we're not going to kill ourselves with AI; there, at least, I have some evidence. I am an optimist to the extent that a lot of things will improve with AI, especially the way we work. I think it will improve the way we take time off. I'm not an optimist in the way of some people who say that productivity and growth will rise so much that we would not have to work, that we would sit back and the AI would do all the work. I don't think that's going to happen. I don't think AI is going to work that way.
Speaker 2
So that's your conclusion from that project: things can go well, but only if we take pains to bring about that outcome. And that would imply that there is some hard work to do. What do you feel that hard work would look like, maybe for individuals, businesses, governments, or whoever?
Speaker 1
Yeah, I mean, it's hard work because you have to combine things. You need regulation from the government; this is needed because there will be people who exploit it, there is always some corruption and so on, don't we see it on a daily basis, in other connections at least. So you need collaboration: you need government, and you need employers. As I was saying before, employers will have to do the hard work of deciding how they apply it. You need engineers to develop it in the good direction, and there they should be taking some leads from people working on the labor market, or on the economy, about what would be good for workers. They could come to us and say: you have investigated the future of work and well-being; what can we do with AI to improve the well-being of the workforce, of low-income people? We can tell them, and then they can develop it in that direction, or develop apps in that direction based on models like the generative AI we have. And then you have the workers, who need to collaborate and learn the skills to work with these systems. So you need everyone: to use a European, pre-Brexit term, you need all the social partners to participate in the development. It's doable, though. And although it's not going to be done in what you might consider the perfect way, with everyone happy afterwards, I'm optimistic to the extent that we're not so stupid as to go in the opposite direction. I mean, we're stupid enough to destroy other animal species, or maybe not destroy them but worsen the environment, but I don't think we're stupid enough to destroy ourselves.
Speaker 2
I sometimes wonder, because of the way that our economy is set up, with the current incentives towards profit and shareholder value, whether it's not individual stupidity but a kind of group stupidity that we may not overcome. We continue to do great damage environmentally, even though no rational person would suggest that we should keep pumping fossil fuels into the atmosphere; the market just demands that we do. But you're more optimistic about how the invisible hand, I suppose, might nudge us towards a more intelligent response this time.
Speaker 1
No, not the invisible hand, actually. It's a visible hand: it's regulation, combined with open, transparent debate about where we're going. I mean, the invisible hand just looks at prices in the market and tells you to respond to any price change you see in whatever way is best for you, or for your company, or for your shareholders, and society is supposed to be blissful after that.