24min chapter


Wie entsteht unsere Zukunft? - mit Johannes Kleske

Tech und Trara

CHAPTER

Future Visions and Societal Responsibility

This chapter examines the development of pessimistic images of the future in Germany, particularly among young adults influenced by crises such as climate change. It highlights how important positive visions of the future are for social activism and what steps are needed to turn these visions into reality. It also discusses how communal relationships and concrete examples of sustainable transformation can contribute to creating a livable future.

00:00
Speaker 2
So let's now take all of that and move to, or try to translate, what it means for policymakers. And I will here introduce another paper of yours, entitled Reflexivity, Complexity, and the Nature of Social Science, published in 2013, in which you make an important distinction that builds on a paper published by a very famous character, but I won't mention his name, maybe so that I don't upset some people here. You make the distinction between complex adaptive systems and complex reflexive systems. In adaptive systems, the agents react to one another while the rules of the game remain constant, whereas in reflexive systems, even the rules of the game evolve over time. So in a sense, as you describe in the paper, reflexive systems are, and I quote you here, at the far end of the spectrum of complexity. They can be considered a subset of adaptive systems, but they are definitely on that end of the spectrum. How significant should the distinction be for policymakers when crafting interventions? And maybe we can take AI as an example, or another field.
Speaker 1
Yeah, I'll start with a key difference between social and economic systems and physical systems. Whether you believe in the laws of relativity or quantum mechanics has no impact on how the universe works. The rules are there and they're fixed. Human beliefs and expectations don't change that. All we can do is use science to try to understand those things and maybe harness them, as we've harnessed quantum mechanics in semiconductor design and quantum computing and so on. Social systems are fundamentally different in that the rules of the game, the physics, are themselves human creations. The way the economy worked back in Babylonian times, under one set of rules and institutions, is very different from how it works today. There's an interaction, then, between our beliefs about the system and how it works, the rules that we have and support, and how the system actually behaves. So if we all read Karl Marx's book and said, oh, we believe that, and made the Marxist economy, the economy would work in one particular way. If we all read Adam Smith's book and said, yay, that's how it should work, and created another set of rules and institutions, then the system would work in a very different way. So the combination of the rules of the game, which are often set by policy and policymakers and governments, and what people believe really matters in how the system works. And if we have a wrong diagnosis of how the system is working, if economics is giving us the wrong answers and the wrong policy advice, then we can create a system which doesn't work very well. There are lots of flaws in Karl Marx's theories, and the systems built on them did not work very well, and in fact caused misery and poverty and repression for tens of millions or hundreds of millions of people.
Economics matters in a way that other disciplines don't, in that it helps shape our beliefs about how the system works and what the rules of the game should be. And then we also have the phenomenon, and we're seeing it now with populist politics playing out in many places in the world, where people come up with stories and explanations for how the system works and how it should work that, you know, aren't based on any empirical evidence or understanding at all, but are just there to serve their own interests and power. In fact, they're trying to hijack the physics of the system to serve their own ends. So we believe that this shift, this paradigm shift in economics, this new economics movement, is critically important. It could lead to a much better understanding of the system and a set of rules of the game which could help us create more cooperation and trust, address issues of economic equity and justice, and address challenges like climate change and other big policy problems.
Speaker 2
Okay, so let's take as a given that when we talk about the economy, and most likely today, if we are trying to tackle and understand the dynamics of the digital economy, we are facing, at a minimum, a complex adaptive system and, most of the time, a complex reflexive system. But it doesn't mean that the agents within the system will just suffer from those dynamics. They can also try to influence those dynamics and produce the effects that are convenient for achieving their own objectives, which leads me to a paper that you published entitled Getting Big Too Fast, in which you tackle the very interesting question of aggressive behaviors. And by that, you mean behavior that is aggressive in that it pursues growth at the potential cost of survival. And you give an example: it could be that a company is advertising heavily or is trying to expand in a way that outstrips its own capacity. And you said that, on the one hand, those behaviors often seem to fail because it is very difficult to predict how those complex adaptive systems will evolve. But on the other hand, these markets are typically prone to very strong increasing returns, which may amplify the effect of such strategies. So what do you ultimately find? And I'm asking because, when it comes to antitrust, if you find that aggressive behaviors are inefficient because their outcomes are too hard to predict, maybe we should let them go; on the other hand, if we find that increasing returns will amplify those strategies, well, maybe we need to investigate those markets more. So where do we stand when it comes to the outcome of those two contradictory trends that we observe?
Speaker 1
I'll give a slightly more general perspective on policymaking. In a complex reflexive system, such as the one we have, we have to be humble about the limits of our knowledge about the future. Our ability to predict the evolution of this kind of system is very limited. We wouldn't necessarily be able to predict how the beaks of the finches that Darwin observed in the Galapagos would evolve a millennium from now or longer, nor can we necessarily predict how the physical and social technologies of the economy are going to evolve over the long term. We can make better short-term forecasts and predictions. The models I cited on the financial crisis, COVID, climate, et cetera, show that with the right techniques, we can actually make pretty decent short-term forecasts, particularly in a period when the rules of the game are relatively stable. But when you get into this reflexive dynamic, where the rules and structures and technologies of the economy are all evolving too, then things become much harder to forecast. And so policymaking has to take the same lesson that I talked about for companies, of bringing evolution inside. The idea that we can look at the system and come up with the optimal or ideal policy is a myth. We can use these new and better techniques to analyze the system and come up with a good set of hypotheses about what's going to be effective, whether it's in antitrust, technology policy, climate, or any other area. But we have to be willing to try things, even a variety of things, and then be very careful about collecting the evidence on what's actually working and what the impacts are. Do more of what works and less of what doesn't.
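The loop described here, try a variety of things, measure what works, do more of it, can be sketched as a simple explore/exploit experiment. The bandit framing below is our gloss, not the speaker's method, and the three "policies" and their success rates are entirely made up for illustration:

```python
import random

def epsilon_greedy(payoffs, steps=10_000, epsilon=0.1, seed=0):
    """Allocate trials across options, shifting effort toward what works.

    payoffs: hypothetical per-option success probabilities (illustrative).
    Returns how many trials each option received.
    """
    rng = random.Random(seed)
    counts = [0] * len(payoffs)
    totals = [0.0] * len(payoffs)
    for _ in range(steps):
        if rng.random() < epsilon or 0 in counts:
            choice = rng.randrange(len(payoffs))      # keep experimenting
        else:
            averages = [t / c for t, c in zip(totals, counts)]
            choice = averages.index(max(averages))    # exploit the best so far
        reward = rng.random() < payoffs[choice]       # noisy observed outcome
        counts[choice] += 1
        totals[choice] += reward
    return counts

# Three hypothetical policy experiments with different true success rates.
counts = epsilon_greedy([0.2, 0.5, 0.8])
print(counts)  # most trials end up flowing to the most effective option
```

The point of the sketch is the mechanism, not the numbers: the process never claims to know the optimal policy up front, it keeps a small budget of experiments running, and it reallocates toward whatever the evidence favors.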
If, for example, we had done that in climate change, rather than having these fights about mythical carbon prices, which have been politically difficult, very rare, and very ineffective, we would have looked and seen that other policies going on during those decades, feed-in tariffs, utility policies, industrial policies, and so on, were actually working and were actually abating carbon. And we, with a number of others, did a big study looking back on these to see that those policies were effective. If we'd taken an evolutionary approach, we would have said, ah, that stuff is working, we can start to see why; let's do more of that and less of the other stuff that hasn't been working. Policymakers find it very, very hard to take this kind of evolutionary approach, taking risks, running experiments. It makes people vulnerable to political attack. Successful evolution requires failure and requires learning from failure. The best companies actually do this: they like to fail small and succeed big, and they learn from failures. The best militaries also do this. In my interactions with members of the U.S. military, I was fascinated by how in-depth they study failure and learn from it. For politicians and policymakers, admitting failure, let alone learning from it, seems to be a career ender. So there's this fundamental disconnect, where politics and institutions resist evolution, and that makes them very brittle, when what we need in this kind of world is a much more adaptive and evolutionary approach to policymaking.
Speaker 2
I believe this is work for lawyers. And here we can learn from public choice theory when it comes to designing incentives for policymakers to indeed admit failures. Because so far, if you don't have an incentive, why would you in the first place? So maybe we need a constitutional layer within those agencies so that they have the incentive to document the effects of a policy and then, in case of failure, admit it and reorient the policy. Let me ask a final question. There is a saying that it's a $1 billion question; I think here it's more of a $1,000 billion question. So if you can answer it, that will make us very rich, and I hope you can. You talked about failure within companies, and I want to ask another question on that. Your work on networks shows that large networks scale quickly, and more specifically that information scales quickly within those large networks, because as the number of nodes increases, the degree of connection grows exponentially. On the other hand, large networks are also prone to complexity catastrophes, where the growing number of interdependencies means that a positive change somewhere in the network may cascade into negative effects somewhere else in the network. And so it creates, in a way, an interesting tension. The space of possibilities for those large networks increases, but their degrees of freedom also collapse just as quickly, because those networks don't really have the freedom to experiment and try new strategies; if they fail, they fail big time. What does it mean for market enforcers? Should they be concerned with ecosystems that are fully integrated, or should they be more on the side of "collapse will come eventually," as we already discussed today?
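The scaling the question alludes to can be made concrete. In a fully connected network of n nodes, the number of pairwise links grows quadratically, n(n-1)/2 (the Metcalfe-style count), while the number of possible subgroups grows exponentially, 2^n - n - 1 (the Reed-style count). This toy sketch is ours, not taken from the paper under discussion:

```python
def pairwise_links(n: int) -> int:
    """Distinct pairs in a fully connected network of n nodes: n*(n-1)/2."""
    return n * (n - 1) // 2

def possible_subgroups(n: int) -> int:
    """Subgroups of at least two members among n nodes: 2**n - n - 1."""
    return 2 ** n - n - 1

# Pairwise connectivity grows quadratically; subgroup count explodes.
for n in (10, 20, 40):
    print(n, pairwise_links(n), possible_subgroups(n))
```

Either count makes the same qualitative point raised in the question: connectivity, and with it interdependency, grows far faster than the node count itself.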
Speaker 1
So we have these two opposing forces: the benefits of scale, as you say, as networks grow, but then the tendency for these networks to get locked up and rigid as the interdependencies become too high. I should note that this phrase, complexity catastrophe, I got from Stuart Kauffman, who wrote some interesting papers on this quite a long time ago. In the case of companies, the primary mechanism that deals with this is actually markets. A canonical story is that a company has some success and scales up, but then it becomes a big, bloated bureaucracy and experiences this kind of complexity catastrophe, where you just can't do anything or change anything in the company. And eventually it loses fitness in its environment. Its products and services aren't that good or relevant anymore, it can't attract the talent, and so on. And then the market deals with it and takes it out, and the resources go to someone else. And then the process repeats itself over time. So that also comes back to my earlier comment about how markets often evolve more than individual companies do. In governments and politics, it doesn't happen quite as easily or cleanly. We also see scaling up of bureaucracies and institutions in government, and they also experience this kind of complexity catastrophe, where the rules become so complicated and interlocked, and the political and other interests become so complex and interlocked, that you literally just can't do anything. You get gridlock, and we've seen this in spades in the US and other countries. The network theorists would say the only solution when you reach that point is to bust up the networks and let them reform, to reduce the interdependencies, to create the degrees of freedom where people can act and make change happen and innovate. But that's much harder to do in a political system.
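Kauffman's complexity catastrophe can be illustrated with a toy NK-style fitness landscape: N sites, each contributing a fitness that depends on its own state and on K other sites. As K grows, greedy adaptation gets stuck on more numerous and, on average, lower local optima. The sketch below is a minimal, illustrative rendering; the parameter values and implementation details are our own, not Kauffman's, and with such small N the trend in the printed averages is noisy:

```python
import random

def make_landscape(n, k, rng):
    """Random interdependencies for an NK-style landscape."""
    neighbors = [rng.sample([j for j in range(n) if j != i], k) for i in range(n)]
    tables = [{} for _ in range(n)]  # per-site random fitness tables, filled lazily
    return neighbors, tables

def fitness(genome, neighbors, tables, rng):
    """Mean of per-site contributions; each depends on the site plus its K neighbors."""
    total = 0.0
    for i in range(len(genome)):
        key = (genome[i],) + tuple(genome[j] for j in neighbors[i])
        if key not in tables[i]:
            tables[i][key] = rng.random()
        total += tables[i][key]
    return total / len(genome)

def hill_climb(n, k, rng):
    """Greedily flip single bits until no flip improves fitness; return that local optimum."""
    neighbors, tables = make_landscape(n, k, rng)
    genome = [rng.randrange(2) for _ in range(n)]
    current = fitness(genome, neighbors, tables, rng)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            genome[i] ^= 1
            candidate = fitness(genome, neighbors, tables, rng)
            if candidate > current:
                current, improved = candidate, True
            else:
                genome[i] ^= 1  # revert the flip
    return current

rng = random.Random(0)
for k in (0, 4, 12):
    runs = [hill_climb(16, k, rng) for _ in range(20)]
    print(f"K={k:2d}  mean local-optimum fitness = {sum(runs) / len(runs):.3f}")
```

With K=0 every site can be optimized independently and the walk reaches the global optimum; as K rises, the interdependencies lock sites together, which is the "can't change anything without breaking something else" rigidity described above.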
Now, in theory, in a democracy you have a kind of market-like mechanism: you get to some point of failure and the voters say, okay, I've had enough of this, I want to try something else and change something. The health of our democracies has declined, which would be a whole other conversation, and that process also hasn't been working as well as it should. I think some deep thinking is required about whether there are institutional structures or innovations we can create that would allow this evolutionary process to work better in a governmental and policy setting, and also help our democracies be more healthy and perform their evolutionary function. And do it in a way where we avoid the worst outcomes: one problem with a complexity catastrophe is that if it builds up for too long, then when the collapse happens, it can be quite violent and destructive. And we've seen this in history, where a regime will stultify and get rigid but hang on for a long, long time, and then when the collapse eventually comes, there's a huge amount of suffering, even violence, poverty, and so on, until things can start to, you know, rebuild. So you want to avoid those big crashes. Again, you want to be able to fail small and often and succeed big, not fail big.
Speaker 2
It's a good place for us to end the conversation. And thank you so much for providing a research program for the, what, 10, 20 years ahead of us. I think these are very interesting questions, right? Because the impact on the world may be massive if you indeed come up with such policies.
Speaker 1
There's plenty to work on and no shortage of problems to keep us busy.
Speaker 2
Thank you so much for everything you've been publishing, for your work at the INET Institute, and for coming on the podcast. I've been following your work for quite some time, and it's a real pleasure. I know we'll come back to our conversations many times. So on that note, to everyone out there, take care of yourself, and if you can, of someone else too. Bye-bye.
