
Amplifying Cognition

Latest episodes

May 15, 2024 • 34min

Tim Stock on culture mapping, the culture of generative AI, intelligence as a social function, and learning from subcultures (AC Ep44)

“True intelligence is a social function. It's about social cohesion. Intelligence happens in groups, it does not happen in individuals.” – Tim Stock About Tim Stock Tim Stock is an expert in analyzing how cultural trends and artificial intelligence intersect. He is co-founder of scenarioDNA and the co-inventor of a patented Culture Mapping methodology that analyzes patterns in culture using computational linguistics. He teaches at the Parsons School of Design in New York. Website: www.scenariodna.com LinkedIn: Tim Stock Faculty Page: Tim Stock What you will learn Exploring the concept of culture mapping Understanding the subtle signals in cultural trends Discussing the impact of generative AI on creativity and work Differentiating between human and machine intelligence Examining the role of subcultures in societal change Analyzing the future of work and the merging of physical and virtual spaces Emphasizing the importance of structured analysis and collective intelligence Episode Resources Culture mapping Generative AI Artificial intelligence (AI) Douglas Engelbart Intelligence augmentation ChatGPT Cyberpunk Subcultures 15-minute cities ESG (Environmental, Social, and Governance) Nihilism Transcript Ross Dawson: Tim, it's awesome to have you on the show. Tim Stock: Great to be here. Ross: So I think people need a bit of context for our conversation in understanding the work you do. A lot of it is around trends and culture mapping. So, Tim, tell us: what is culture mapping? Tim: Culture mapping really has its roots in understanding what is going on underneath the surface that people aren't paying attention to. Essentially, when I need to explain culture mapping to someone, it's this: it helps companies understand how and why culture is changing, and how to use that information to make better design and business decisions. A lot of those real changes in culture are not obvious. They're not things that we can ask people about, so they're the weaker signals. Culture mapping allows us to map the relationship between the broader culture and subcultures, and to understand how those relationships develop narratives within society and drive cultural change. Ross: So where are we today? What are some of the signals you're seeing in your culture mapping work? Tim: Well, I think we're in a particular moment where we're shifting from one kind of age to another, especially in terms of how people do work and how we understand our relationship to identity. There's a growing nihilism, I would argue. When we're talking about things that are happening, people would say the negativity is coming out of the pandemic. But if you look at it from a culture mapping standpoint, those signals were there already; the externalities of the pandemic just really exacerbated them. So issues around how we see work, and how we understand our relationship to work, have a lot to do with how technology is changing, with the kinds of work that need to be done, the kinds of skills, and all of the affordances that go along with that. Essentially, culture is always trying to catch up to that change. And at this particular moment, I'd say we're kind of stuck. There's a moment where we haven't found our voice yet.
And so it's the reason why we see a lot of this political dysfunction. There are issues; we're at a moment where there's a lot of unrest, and there's a lot of language around that. Essentially, I see us as a society trying to find that voice. Ross: So there are a couple of directions for this as we look at the role of generative AI. One is the cultural response. Another, I suppose at a deeper level, is our understanding of what our relationship with generative AI is. Tim: Yeah, it comes down to: what do we do? I think that nihilism is emerging from "well, what am I supposed to do?" We've co-opted a lot of these words, like intelligence. So what is left for humans to do? The state of AI, I would say, is that there's a lot of replacing and mimicking of human actions. We get things that look like they're created; the word creativity, for example, has been co-opted. But we're at a point where we need to be asking what is creative. Creativity is a human action. Human intelligence emerges differently than machine intelligence. Machines don't see ghosts, machines don't understand, machines can't believe in conspiracies in the same ways that humans can. Humans see in between. It's how we access information, how we learn as children, how we acquire language, and how intelligence is so tied into narrative. But right now, what we see is a lot of replacing of things that we would normally do. So if you're a young person today, you'd be asking: well, what should I learn? What are the kinds of skills? There's nobody to tell you, because it hasn't been framed in a way that can be understood. So in a way, there's a big shift towards the individual. In every area, from medicine to education to work, we see the individual now having to take on much more responsibility, and I think that causes a lot of anxiety. That's the moment we're in, until we figure out how we are going to use these tools more collectively. I don't think we've figured out the collective part of that. Ross: Yes, yes. Last year, I created this mapping intelligence framework, and my first response, as we started to look at ChatGPT, was that we don't know what we mean when we say intelligence; essentially, humans are the reference point. We happened to come up with a phrase, and we use the phrase artificial intelligence all the way through, whereas if history had played out differently, we might have called it cybernetics or some other phrase, which would give us a completely different frame. But because it is artificial intelligence, its objective has always been to copy, to replicate, to try to be the same as human intelligence, which puts us in a challenging situation. These are tools that can augment us; Doug Engelbart talked about augmenting intelligence, intelligence augmentation instead of artificial intelligence. So this in a way requires cultural and linguistic reframing, so that we can move to the amplifying cognition type of reframe.
Tim: Well, it's a culture mapping exercise, because so much of what is currently broken in generative AI is that it has been programmed; it is one framework, it has walls, and its language is defined by the engineers that defined it. The rules it lives by mean that what we consider intelligence is what an engineer would consider to be intelligent, which would be the ability to replicate and so forth. Speak to an artist or speak to a poet, and you get a very different answer. We're seeing a very ideological kind of battle going on around reclaiming ownership over how these models are actually trained and so forth. But more simply than that: what do we consider to be intelligent? When I think of ChatGPT, I like to think of it as the perfect way of measuring intelligence, because we all go through this: we'll put something into ChatGPT, and we should be saying, "God, that's stupid. Why isn't that better?" That's our ability. The problem is that we're too accepting of these answers. And I think that's the part of us as human beings, the difference between machines and humans, and this is key to culture mapping as well. True intelligence is a social function. It's about social cohesion. Intelligence happens in groups, it does not happen in individuals. We have this idea that individuals create these great things in our society; it happens in groups, it happens in these subcultures, groups that create that kind of knowledge. Machines can't do that. But the problem is, we can also be swayed towards accepting that information; it's how conspiracies happen. So that's the moment we're at, where we have to be more aware. When I think of augmenting our intelligence, it's always interesting bringing up gender, bringing up AI in certain cases, like the case of dating: the idea of being able to date versions of yourself to stop you from making the same mistakes and so forth. It's like learning throughout this process, almost helping you to be the person that you want to be. But the thing is, most of the things in society allow us to copy and to double down on these bad habits, and I see so much in AI doing that. The internet didn't have that problem. Generative AI is too quick; the arc of adoption is so fast. The internet allowed us to stay in the subculture space much longer, so it was able to get that kind of life: the whole cyberpunk scene, the surveillance culture, all the cypherpunks and everything emerging. With AI now, it's almost like it's an immediate meme. It goes from zero to cottagecore in two seconds, and then everything looks the same. And we go, well, isn't that okay? We go, well, if I say no, will I be wrong? And people don't want to be wrong. So we're all kind of affirming a lot of these negative aspects of it right now. Ross: Yes, yes. So I'm saying a lot these days that the biggest risk with AI is over-reliance, where we sort of say, oh, that's good.
And we just leave it at that, and we don't exert our cognitive capabilities or stop when it starts not being as good as it could be. But I think that goes back to your point about intelligence being a social function. One aspect of that is that many people find it very useful to be in dialogue with AI; you can refine your ideas and have a useful conversation, sometimes from an emotional perspective, sometimes just from an idea-generation perspective. But it is still essentially not a true counterpart, and I don't think you can have the same dialogue you can have with a group of humans. Tim: You can't, and the other part that's key to this, Ross, that's different, is the fact that we're having a dialogue, and the dialogue you want to have is: I want to get better, I want to improve something. Normally within social behavior, you do that, and if you then begin to disagree, you create other subgroups of people that believe your particular idea. You develop a whole group and ideology around that, and that becomes its own area of development; you get tools, you get other kinds of technologies that way. The problem is, we're almost creating something very linear, and there's no divergence. The key to intelligence is divergence. It's not to affirm what's there; it's actually to push away from that and to do something different. Diversity is key to our biological health, and it's the same in terms of intelligence. Human beings, by nature, if you tell them to do something, have it ingrained in them to move away from that. If we make it easier to be like everybody else, we're diluting and diluting and diluting our intelligence with that process. So in a way, the internet was a form of AI, because it was all of the subgroups working as a collective intelligence. Understanding how to come back to that in some way with these hugely transformative augmenting tools like generative AI, that would be what we would need to be moving towards. Ross: Yes, well, I love what you're saying: to me intelligence is diversity, or is grounded in diversity. And that can be one of the useful functions. Asking for diverse perspectives from the AI on particular situations is one of my favorite tools, and I say, oh, I hadn't thought of it that way. That is something additional, but it is complementary to my cognition; it's not the machine doing the intelligence. It might come up with something random but useful, and I'm the one who's assessing it. But it is useful to get those perspectives. In a way, it comes back to how we can do this as well as possible. And to your point, I think there's conceivably a homogenization of thought of some kind potentially emerging from this. So how can we use these tools to augment, to amplify, or perhaps even a better word is extend thinking, as opposed to having it channel us into narrower and narrower conduits? Tim: Well, the key is that one fundamental step that can be taken is recognizing what tasks to give AI to do, and recognizing what that means for intelligence. AI can be incredibly valuable, because one of the things about human beings is that we have biases.
And so in terms of decision making, people will begin to believe things that are actually against the decision that needs to be made. There are parts of this rational area where AI can help us with the rational things that we need to do; when we start mixing the rational with the creative, we mix these two things together. We should be focused on the interpreting, the deciphering part, and on upping our game in terms of our deciphering and analysis ability, as opposed to taking what AI is giving us as analysis, because it isn't; it's just an output of whatever might be flawed in our existing analysis and the input that we put into it. So we need to be a bit better at that. Ross: So how do we get there? How do we get better at that kind of analysis? Tim: It's a topic that is very hot within the intelligence community. They have this thing called structured analytic techniques, but nobody uses them. The idea is that you have structure. The intelligence community has a methodology for structuring analysis, for being able to say what we are doing and what we are handing off to machines and computational analysis and so forth. But then we go back and say, well, I trust my gut on this, and I'm just going to go with my gut. We have to recognize how much faster things can go wrong if we don't slow it down and structure it. So structure, I would say, Ross, is critical, which goes back to culture mapping, which is essentially a way of structuring language. If you were to talk about any one concept, people would say, oh, well, I know what you mean. Culture mapping is to say: wait a minute, no, there are many other implied meanings to what you think I'm talking about. Understanding that structure is important, because then we start recognizing why certain things go wrong in society. I could give you one example right now: we have an existential threat of climate change, and over the last 10 years we have developed programs like ESG and all these different kinds of language around it. And in doing that, we've created a counterpoint. So right now, there is as much of an ideological movement against all of what we've created towards dealing with the sustainability issues related to climate change, and you can't battle them. There are people who fight against 15-minute cities, there are people who fight against vaccinations; I was just reading today that anti-fluoridation is back in the United States. It's any science meeting the irrationality of human beings. And it's not wrong; it's the fact that you've taken something that should be very structured and packaged it in an emotional way, and you've expected that everybody would believe you and everybody would come around. But actually, change happens socially.
And to be able to deal with the future, you have to understand all the different ways things are going to change as externalities change, as the context changes, and that people are going to have different kinds of ideological responses. You need to have a structure to say: I have some scenarios for how that might play out. We saw a lot of these particular signals in the recession, and they were really clear, but nobody was paying attention to them. They were there, and then the pandemic hit, and then they became so obvious. But it's the structure that you need, to say: how do we then put solutions together to understand what you're dealing with? Because you're dealing with people and different groups of people. Everybody isn't the same. Everybody's not going to believe what you believe, and you have to deal with that kind of variability within society. Ross: So this pulls us towards collective intelligence or group intelligence. I suppose part of the subtext here is intelligence as a social function, far more than an individual function. With these kinds of existential challenges, and the more and more complex challenges that we have, we do need to build collective intelligence that is superior to individual intelligence; that has to be the path to our collective future. So in that case, with this new intelligence that we have, how can we best build the best of human, particularly human group, intelligence, augmented by or supported by some of these new tools? Tim: Well, I think, for us, it's being able to see the subcultures that make up society. We tend to think of society as a monolith, or we think of society as being demographically based: divided by age, or divided by ethnicity. But it's divided by these cultures, these particular relationships to subcultures, whether you're directly related to them or not. For anything that's changing within society, and this is the point of culture mapping, there's always this process between affirming the codes of society and a counterpoint that's always happening. As language becomes static and becomes the rule and the law, there's always this counterpoint. So it's about being able to understand and invest in what that response is, not to capitalize on it, not to commercialize it, but to understand what it is. I'll give you an example from work that we did a little over 10 years ago, looking at bicycles and cities. The bicycle as a machine was understood as a leisure product that is sold everywhere. But these subcultures of cycling that had been living under the surface were telling you how cities needed to be planned. They were telling you how they needed to function, how adaptable they needed to be, and not just in terms of mobility but also things like food; you start getting other codes of behavior that go with that. There are different researchers who, during the pandemic, studied skaters, skateboarders, and made connections between understanding skateboarders and how to help people age in place.
One of the biggest issues globally is that we're living longer, and it's very difficult for people who are living into their 80s and 90s to stay in the home that they're in, because the city or the town that they're in isn't planned that way. Do you understand adaptability? Do you understand how things need to function? You need to look at those parts of the culture that are telling you how things need to change. It was the same thing with our mobile devices. We didn't have VPNs, but some subcultures were telling us that privacy was an issue with technology, while the technology companies were telling us: what are you so worried about? Why can't we put a camera on everything? Why are you so freaked out about that? Well, subcultures were telling us. Even things like the right to repair: why can't I fix my phone? All of those things are there all the time, but we don't pay attention to them. We have to understand that, in a way, a collective intelligence embraces the full range of what society is, and doesn't force it to conform to the kind of model that we have, which is currently what we've done. With demographics, everybody fits within a certain box. We've had that model since the 20th century; it shapes polling and so many decisions that society makes, but we see it's giving us less and less good results. It gets it wrong more and more of the time. Why? Because people now don't fit nicely in those boxes, the speed of change is so fast, and the range by which people are influenced is so much broader because of technology. We have to understand the full breadth of society to be able to do that. That's what a living foresight model is; that's what collective intelligence is to me. Ross: Well, what I take from that is that the traditional framing of collective intelligence is that you put a bunch of individuals together and you architect ways in which, together, they can be more intelligent. But perhaps the units that you are working with are subcultures. So you might have a group of people that think a particular way, another group of people that think quite a different way, and another group of people that think in completely different dimensions. Linking together those subcultures, each of which represents a frame of the world, or a way of perceiving things, or a way of sensemaking, is bringing together those cultures out of which true collective intelligence can emerge, rather than looking at it as an aggregation of individuals. Tim: Yes, I think we tend to think in these boxes too much. For example, take the fact that everybody's talking about the future of work right now, and there are really practical issues related to that, because it comes down to: what is an office for? We have all this real estate that the pandemic suddenly whacked, and you go, oh, well, what am I going to use that for? Oh well, we're going through this nihilistic phase where we're going to force everybody to come back and we're going to surveil them. Okay, good luck with that; I'll give you only so much time. Underneath the surface, I'm telling you, there are these other ways in which people are sharing intelligence and solving problems.
I've been talking about this for a while: there are many different kinds of companies that have tapped into the intelligence within video games, for example, and how people play, and they've even brought that into how tasks and problem-solving are done within a company. But then you start dealing with even more abstract issues, like the relationship between physical space and virtual space, and realizing there isn't physical or virtual anymore. We're dealing with the emergence of something called fourth space, where we're digital and physical at the same time. Who understands that first? Who could give me a framework for that? I need to be able to tap into those particular groups, because that's going to tell me what an office should be. It's not going to look anything like what we currently have; it may be a park, or it might be a mall, because what do people do when they go to work? They communicate, and more and more of work is becoming a kind of grazing, more than the idea of a meeting; we do shorter, more creative kinds of conversations, and then we go and do our tasks. One of the things the pandemic taught everybody is: what the hell is nine to five? Why do I have to work five days a week if I can get all my work done in two days, or whatever? So the idea of time has changed. All of these kinds of concepts are constantly changing in society, and we're not keeping up with the cultures that define what that meaning is. The subcultures are the groups that are ahead, and we need to understand them, because then the rest of society pulls that in, like they did with privacy. The average person didn't understand privacy; it's very abstract. But they go: who has this, who's ahead on this? And they start pulling in those behaviors, and those behaviors start becoming normalized, and they become habit, and habit becomes culture. That's the issue. So studying that kind of relationship between the general culture and the subculture is really, really critical. Ross: So is there anything you would finish off with as advice or suggestions for listeners, based on your work, what you're seeing, or what you do? Tim: Well, I would say that there's a lot of opportunity with generative AI. I've been teaching a course in trend analysis for going on 20 years now, into which I have integrated generative AI. But it's recognizing how we can integrate these tools so that we are not simply replacing what we do. What I'm hopeful for is that there's a great opportunity for a kind of renaissance in education and in defining what kinds of skills we need. I think these tools can be incredibly valuable in doing that. So I would say: recognize what the potential is, and don't forget that we should be raising the bar as human beings in terms of what we consider to be intelligence and what we consider to be creativity at this particular moment. Ross: I 100% agree. So where can people find out more about your work, Tim? Tim: You can go to scenariodna.com.
I have a blog related to a class that I teach, at analyzingtrends.com, which I have not posted to as much lately, but there is that as well. Ross: Fantastic. Thanks so much for your insights and all of the work you do. Tim: Oh, great. It's great talking to you, Ross.
May 8, 2024 • 34min

Ufuk Tarhan on the T-Human model, being an autodidact, oxymoronic technologies, and teaming with humans and AI (AC Ep43)

“I cannot imagine any other way to be successful or to find satisfaction in knowing that you are doing something useful for humanity or any society. Therefore, I believe it is mandatory to take responsibility for our choices.” – Ufuk Tarhan About Ufuk Tarhan Ufuk Tarhan is a prominent futurist, economist, keynote speaker, author, and CEO of digital agency M-GEN. She has worked as a senior executive and board member at a number of prominent technology companies. She is the author of two successful books on the future and has received numerous awards, including the Most Successful Innovative Business Book Award and Most Successful Businesswoman in IT, has appeared on various lists of top social media influencers, and was the first female president of the Turkish Futurists Association. Website: www.ufuktarhan.com LinkedIn: Ufuk Tarhan What you will learn Introducing the 'T-human' concept: a new framework for personal and professional development The importance of adaptability in the workplace and beyond Autodidactic learning as a necessity for future success Balancing current roles with future aspirations through hybrid learning The role of technology in enhancing team dynamics and individual capabilities Exploring the intersections of human skills and artificial intelligence Strategies for building a sustainable career in an evolving technological landscape Episode Resources T-human IBM Autodidact learning Blockchain Web3 Synthetic biology Gene editing Qubits ATCG alphabet (referring to the nucleobases adenine, thymine, cytosine, and guanine in DNA) Artificial intelligence (AI) Virtual reality Books As the Future Catches You: How Genomics & Other Forces Are Changing Your Life, Work, Health & Wealth by Juan Enriquez T-İnsan: Geleceğin Başarılı İnsan Modeli by Ufuk Tarhan Yarının İşini Yarına Bırakma by Ufuk Tarhan Düşlediğin Gelecek by Ufuk Tarhan Transcript Ross Dawson: Ufuk, it's a delight to have you on the show. Ufuk Tarhan: Thank you. It's my pleasure to see you again and to hear you again. Ross: I think the concept of amplifying cognition is central to your work. You've described to me this concept of T-human, and I'd love to hear about this concept and how you've shaped and applied it in your work. Ufuk: Yeah, thank you. And you were one of the very first ones who picked it up. I'm so happy to explain it. Indeed, I was aware of T-shaped skills; I heard about that for the first time at one of IBM's conferences, many years ago, maybe more than 20 years ago. Afterward, over the years, I transformed it into a model, a personal transformation model, to adapt ourselves to the needs of the future. And the first application was, of course, made on me. I had worked in the IT industry for more than 20 years, as a top manager or CEO. After more than 20 years, I decided to change myself, to reshape my career, my life, and everything. While doing that, at the core there were futurist studies, thinking about the future more and more, and technology. I decided to give consultancy services to people and corporations, to teach them, or to let them become aware of future planning and apply it effectively. But at that time, I was a single mother, I was working at a very high level in a company, and I needed to earn money. I couldn't leave the job immediately. So I needed resources. Then I tried to find a way to develop my knowledge about future studies so that I could form my own consultancy company and give consultancy services.
I remember during university times, I was waking up at 3 am to study for exams. I said that maybe I could do it again, and I could. I started to wake up at 3 am three years ago, and then I worked on my futures knowledge, on future studies, to increase my knowledge in that area. I was going to my daily work, and I was a CEO at that time, working very seriously of course, and I was coming back, and at 3 am I was working as a futurist, etc. So I realized that it was a hybrid mode indeed. I had to run two lives altogether: the future life, in which I was preparing my future version, and at the same time my actual work. I decided that there should be hybrid modes for everybody, because we cannot quit our ongoing responsibilities and jobs; we need to earn money or we have other responsibilities, so we have to find a way to run them together. And while I was doing that, I realized that I had to learn so many new things. I discovered this autodidact learning technique, and I saw that I was learning almost everything by myself, digging into every source to get more information, knowledge, etc. So I said that this is autodidact learning, and it is mandatory for everyone in the world right now, because all of us have to transform ourselves and create a new version of ourselves. So that's autodidact, another mandatory thing to learn and to apply. And then, while I was doing that, I realized it again: I deducted many things from my life. I put many things out of my life, people, habits, time, everything, and I became a perfect curator; I was the curator of my own life. So I put all this together. I said that, to be able to have a successful, sustainable job, because I'm mostly concentrating on the sustainable job topic and area, and because every one of us needs to work and have a job not only for earning money but because it's life, whoever wants to have a sustainable career life and job life should apply this T-shape, including these hybrid, autodidact, and curator modes. I put all of them together and that T-shape model came out. Ross: Just reflecting a little bit on what you've just described: I've always thought of myself as an autodidact. I have a reasonable amount of formal studies and some postgraduate studies, but essentially I've taught myself almost everything. Take my guitar: I taught myself guitar by having a guitar and just working it all out, and taught myself other instruments the same way. The vast majority of what I've learned, I've taught myself, and I think that's important. And this wonderful framework around the hybrid, in the sense of, yes, you do need to be continuing the work which you are doing now, but at the same time to be renewing yourself. That's, of course, a frame for organizational leaders as well, in the sense of saying: we need to look at our sustainable current business model, and we also need to be building new business models. I think the same applies very much to individuals. And the idea of curating, as you were saying, selecting, I think is a very important framework. But one thing, I suppose the question that arises out of that, is that all of these take intention; it takes people taking control.
Of course, I suppose in a way what you're suggesting is that people need to be making their own decisions: to choose what to learn, to teach themselves, to be able to make those selections, to be able to continue to renew themselves. Ufuk: For sure, for sure. I cannot imagine any other way to be successful, or to satisfy yourself that you're doing something useful for humanity or any society or anyone. So I think it's mandatory to take the responsibility of choosing. Ross: So going back to the T-shaped model, with these elements of the hybrid learning, the autodidact, and the curation: what are some of the other frames that you'd put around this T model, for us to renew ourselves in a time of change? Ufuk: Thank you. I'm very happy to talk to you, because I don't understand why people are still not so aware of that model; it's very practical. Of course, at the beginning of my transformation period I was not saying, oh, now I'm in a hybrid mode, or now I'm learning as an autodidact, and so on. I realized later, ah, that was this, and that was that, and the model came out. The inspiration for creating the model was that everyone is saying the world is changing, the future will be such and such, and the future is not a secret anymore. The future is knowledge, like history. Just as we have to know about our past, to create a better future we also have to know about the future, and that is knowledge, so everybody should first accept that or become aware of it. That was my first point. With the help of social media and internet connectivity, everyone is hearing about the future; even little kids know what will happen in the future in almost every area. So people feel that they have to change themselves, they have to adapt themselves. Yes, I accept the future will be like that, and I have to change myself, but how? Somebody should tell me what to do to transform myself. As you mentioned in your book, we are overloaded with everything, especially information. Among all this information, everything is fast-moving, everything is too much, and I have to change myself. I have many responsibilities, so how am I going to do this? Someone should tell me. And I started at that point. I said that I would tell people how I transformed myself, and they could apply this model as a template to themselves as well. That's the starting point of T-human, and T refers to, of course, the golden ratio of the people who have a sustainable job, a sustainable career. T means that on the vertical axis we have to go deeper and deeper in one area; for instance, for me it's the future, future studies, especially for business life, for work life, for career. On the horizontal axis, we have to use this knowledge, this expertise, in every area; we have to merge this knowledge into industries, into areas, everywhere, in any condition. For instance, I can work with organizations in the automotive industry, fashion, education, whatever it is: the future of everything, the future of anything. On the vertical axis, we have to go deeper and deeper, and it's endless work, endless learning and experience; on the horizontal, we have to connect this to all areas.
This is the shape and the understanding, but to be a strong T we need other components, other layers, and it has three Ts inside. That means that to be a strong T-shaped human, we need to be tech-savvy; we need to use technology very strongly and deeply. We have to be very good team players, but in this team, of course, I don't mean only humans; I also mean the digital facilitators, robots, artificial intelligence, whatever it is. A team, for me, does not consist of only humans. And we also have to be a transformative, design-thinking, tinkering person. We have to have these three Ts inside this main T, and once we have this, we have to advance our competencies by using this HAC model – hybrid, autodidact, and curation – making it stronger and stronger. So this is a complete framework, which brings us to a point where we have a very successful, sustainable job, because we are transforming ourselves in a very disciplined and very concentrated way, so that we become people having the 5 Cs. That means we have the capability of doing something; we become competent in this area by using the 3 Ts and the HAC; we are certified by society, by people, by corporations saying, "Oh, you know this, you are certified." Once you have been certified and authorized in one area, that brings you a high responsibility for creating more and more new things, which makes you creative. And once you do this, you become a changer, a game-changer, which means you are creating new things, which is the most required condition according to Darwin: survival depends not on strength or intelligence, but on adaptability to change. So this shape, this frame, this template is offered to people: if you want to become a successful, sustainable job player, then apply this to yourself. That's it. I am suggesting a concrete model. To people who are asking "what shall I do?": you do this to yourself, you reshape your future yourself. Ross: I think it's a very strong and very useful model for a lot of people. So I'd like to dig into a few things there, and one of the ones which, perhaps not surprisingly, stands out to me is that of the team player, in the sense of, as you say, being a team player not only with other humans but with machines. Now, machine capabilities are advancing very fast, so the nature of how we are a good team player changes. How can we think about being a good or better team player with technologies while AI, for example, continues to progress? Ufuk: Thank you. It is a very sophisticated question indeed. I was thinking about what you would ask me and what you would want to discuss; of course I expected such a question, and I thought about it, and I found this today, and I'm using it for the first time here. I saw that we live in an Alice in Wonderland, because we are realizing that, yes, we are humans, we are the strongest species on earth, we think so. But now we're much stronger than ever, because we are creating artificial intelligence, robotics, everything; even if we are weak, we use those strong team players, let's say. We are in Wonderland. We can do everything, deepfakes and other things; whatever we want, we can create. And I thought that we are in Human-IT Technoland. Technoland is the rabbit hole, with AI, so it is a very oxymoronic time. I think we can use this metaphor of Alice in Wonderland for humanity in Technoland. I like to play with letters and words: Alice, Human-IT.
Yeah, and Technoland is Wonderland, and artificial intelligence is the rabbit itself. So when we think about this, if we recall the story itself, we are in the same situation. But if we don't know what to do, we will not be able to respond to this question, because we know that it has many sides – bad and good. We are living in an oxymoronic world; everything is all together, and we are in an eclectic world. We are putting another thing on top of it: we are mobile, we stand here, and so on. So it's such a complicated and complex stage right now. Indeed, I don't agree when we say that we are in a very fast-forward mode and everything is so fast. No, it's not fast enough. Not even fast, because we are in standby mode. We know what we can do, but to be able to activate all these things and use them everywhere, first we need to solve this green energy problem; then we have to have the internet faster and reachable everywhere; then we have to create this blockchain Web3 environment and infrastructure to be able to operate on that; then we have to change the finance system from scratch by bringing in those crypto assets or whatever it is. Unless we have these resources, these infrastructural elements, it is impossible to get faster. So we are just in Wonderland, and we are rushing around. We are just assuming things that are not reality, and AI is at a very primitive level; these are not real AIs yet. Oh, we are scared; of course, we have to be careful about the ethics and the other things that can happen if we don't structure them correctly. We can imagine everything, but we are not acting yet, because we don't have enough resources. At the top of it is energy: we don't have the energy, we don't have the internet connectivity yet, we don't have blockchain, Web3, and crypto assets. And there is a third war in the world, but it is not a war among Russia, the United States, Ukraine, or the other parts of the world. It is the war between the old system and the new system. Who are the old? We know. Who are the new? The tech giants, the youngsters who are protesting the way of work, let's call it capitalism or the ongoing system, and they are trying to create a new way. I call it sustainable capitalism. So, you asked me about integrating humans and technology? Yes, we are trying to integrate ourselves, and for sure we can do it, and we have started to work on that. But we are not even at the starting point yet. Let's solve these listed resource and infrastructure requirements. Ross: Fabulous. The first thing that strikes me about the Alice in Wonderland metaphor is that Alice went down the rabbit hole because she was curious. So it was a story of curiosity, of going into this marvelous land where we are discovering and learning the unknown. Perhaps that's part of what we require as well today. Ufuk: Yeah, yeah, for sure. We are wondering what will really happen, and we are, of course, in the change phase. The thing is that it seems fast, but it is not. We are just in a loop, and we are just waiting, while working and rushing and being scared, until we solve this problem. At the top of it is this energy problem indeed. That's the major problem of humanity.
What I was thinking about this integration… I read a book years ago, and most probably you've also read his books; the author was Juan Enriquez, I think, and the book was As the Future Catches You, or something like that. He was analyzing how power has always resided with the societies who create the alphabet, because the alphabet transforms the abstract into the real world. The first alphabet was in caves. Then a specialized alphabet was created by the Chinese, and at those times the power was in Chinese society, in that region. Then the Latin alphabet came with ABC, and we have seen who had the power with that, because it is the power of distributing and collecting information. And then the ongoing alphabet is 101010, the digital one, binary; whoever has this binary alphabet has the power to transform or to govern the world. And last but not least, I guess, the latest alphabet is synthetic biology, the alphabet of genetics, the A, T, C, G; whoever designs with this A, T, C, G alphabet, that means gene editing. And China is declaring that it will be the gene-editing state of the world, the leader of the world. So the latest alphabet is not binary; binary is getting transformed into qubits, which is again another alphabet. The synthetic alphabet will be formed by qubits, and the biological alphabet by A, T, C, G. So we are entering the era of not simply merging humans and machines or robots or artificial intelligence; it is much more serious than that. We are indeed just at the gate of creating a new kind of species, maybe. We are just discovering this stuff. And that's what I think we should concentrate on while thinking about AGI, artificial general intelligence, ethics, and so on. We are squeezing our attention into very closed areas, but we have to open our eyes and open our minds a little wider than this. So when I think about this integration, merging, convergence, synthetic information, and logic, I see some other big, big things in front of us. Ross: Yes, indeed. And so if we think about cognition as making sense of the world, taking in information and making sense of it, humans and some animals have done that well. Now AI, and as you point to it, what you might call the transhuman, or certainly something beyond currently existing species, will be doing that in quite different ways. Ufuk: Yeah, yeah. That's so exciting indeed, although we are exaggerating our excitement without thinking about all these important topics. Anyhow, we are going forward, and I like this oxymoron very much. For instance, when we say virtual reality, it is itself an oxymoron. And when we say artificial intelligence, it is again a very big oxymoron. I repeat this again and again because we need some meeting points. We are under stress to catch up on something, and we're in a really scared mood most of the time; FOMO is, I think, the major illness for all of us. So we have to think: yeah, we're oxymorons, we have to live in a physical environment, and we accept it. And somehow, maybe at some point, we have to accept this and be in a more steady mode. There is that Latin phrase about this, festina lente, hurry up slowly. We have to be in that mode; hurry up slowly will help us be in a little more balanced mood, otherwise it's so difficult to stay healthy. Ross: I think that's a wonderful note on which to end. So how can people find out more about your work?
Ufuk: My name is Ufuk Tarhan; they will see that in the description. When they put '.com' at the end, that is the place where I share everything, but mostly in Turkish, because English literacy is very, very poor in my country. That's another starting point: I said that there is information about the future, and almost all the sources are in English, and my people are not able to read them or reach them. So I started to create and produce the content in Turkish. But now AI translation helps people to read everything in every language, so I can start to create the content in English. In any case, even if they are in Turkish, they can be translated, and vice versa. So that's my address. Ross: Fantastic! Thank you so much for your time and your insights, Ufuk. It's been a wonderful pleasure speaking to you. Ufuk: Thank you very much. It was a big pleasure for me.
May 1, 2024 • 36min

Shikoh Gitau on amplifying humanity, Africa’s AI leadership, technology sovereignty, and the power of community (AC Ep42)

“Sovereignty means that I need to be in charge of my destiny and able to control my future. This involves understanding the context in which you're operating and not allowing others to define that context for you.” – Shikoh Gitau About Shikoh Gitau Shikoh Gitau is CEO of Qhala, a digital innovation company with clients across Africa. She was previously head of Safaricom Alpha, the first corporate innovation hub in Africa, and worked for the African Development Bank helping governments adopt information technologies. Her numerous awards include being the first African to win the Google Anita Borg Memorial Scholarship and recognition among Africa's Most Influential Women in Business and Government (Technology). She sits on numerous boards and holds a Ph.D. in computer science. Website: Shikoh Gitau LinkedIn: Shikoh Gitau Twitter: @DrShikoh What you will learn Exploring technology as an amplifier of human intent The transformative impact of mobile technology in Africa How mobile money revolutionized financial inclusion in Africa The urgent role of AI in addressing critical health issues in Africa Discussing technology sovereignty and the power of defining one's future The unique communal approach to technology implementation in Africa Future visions: AI's potential to amplify community and human connection in Africa Episode Resources AI (Artificial Intelligence) M-PESA Mobile Money Wall Street Journal The Economist The New York Stock Exchange The Pathology Network (TPN) Gemini Transcript Ross Dawson: Shikoh, it's wonderful to have you on the show. Shikoh Gitau: It is wonderful to be here after going through every other challenge, but we are here now. Ross: So you have spent all of your career amplifying people with technology. I would love to hear your perspectives on how it is we can amplify humanity, and amplify ourselves. Shikoh: I love the word 'amplify' because it sets a very good tone for this conversation. One of my mentors, Kentaro Toyama, wrote a book at the very beginning of my career, and I remember him giving the talk before he did the book. He kept saying that technology is an amplifier of human intent. At that time, he was a Senior Director at Microsoft Research in India, and his goal in going to India was to help Microsoft build technologies to enable human flourishing. I think after years of doing this, he realized that as much as you build technology to do something, it eventually amplifies a human act, a human intent, a human habit. And that's what I love. I love this conversation because it set me on my career path. I started looking at how technology amplifies my intent. I want to be able to change the world, I want to be able to increase thriving and economic emancipation in Africa; how is technology going to help me achieve those goals? But more importantly, how is technology going to help other people around me, and on the African continent to be more specific, achieve their own goals? That is how I got my career started in technology. So it was very interesting when I saw this; I'm thinking, oh, amplifying cognition is part of humanity and humaneness. For me, that is how I'm jumping into this, looking at it from not just an AI perspective, because AI is just another technology. And when I say that, some people take it personally. I've been working in technology; I've gone through so many fads and buzzwords and hypes of technology. So I know AI.
Well, it is a significant technology, but it is one technology among others. For me, the defining technology in Africa has been the mobile phone. The mobile phone did change our lives, to be totally honest. It changed how Africa works. And if we went back to whatever Dark Ages and I had to choose between AI and mobile devices, I would always choose mobile devices. So I've seen this hype, I've seen it happen, and I've seen the amplification part of it. I am riding the hype, but I am very conscious that it is just amplifying what we as human beings want to achieve. Ross: Yeah, I love what you're saying, particularly around this idea of intent. That's the first thing that really struck me about generative AI: what it doesn't have is intent. That's what humans have: intent. And this point around mobile phones: essentially Africa leapfrogged. It led to mobile payments because it had the mobiles, and that's what people had, and so it did lead the world in these technologies. I'm interested in whether there are other technologies now where Africa could leapfrog in the same way that it did with the applications of mobile phones. Shikoh: So specifically taking Mobile Money, right? There is a nice cliche that says necessity is the mother of invention. For us, Mobile Money was not innovating for the sake of being in the Wall Street Journal, The Economist, or The Times, or being listed on the New York Stock Exchange, because that's many of the founders you meet in Silicon Valley, in New York, in Florida these days: you will find somebody who just wants to be listed, and their goal is to create this company that is then listed on the New York Stock Exchange. That was not the intention. We intended to solve a very painful problem. At the time M-PESA was coming in, that was 2007, we only had a 2% penetration rate in financial services, that is, somebody who has a bank account, people who are able to save, people who are able to access credit, people who are able to access insurance; we were just a measly 2%. And those were the 2% who were employed in formal jobs. Right now we lead the world with, like, 98% Mobile Money; we flipped the numbers. Why? Because my mobile phone is my bank account. Every time I go to the US, and at least in the last two years I've seen things changing in the US and Europe, I'll just be carrying my mobile phone around, and every time I needed to pay, I'd pull out my phone and realize, oh my God, they don't have M-PESA here, I need to go and find my card. So I had to carry cash and cards. I don't carry cash in Kenya. Why? My mobile phone is my Mobile Money. And that's how we leapfrogged. In the last two to three years after COVID, especially when I saw the larger adoption of mobile payments in the US and Europe, I realized that we've been experiencing this for more than a decade, so there is no surprise here; you're talking to us. And it is the same thought again: amplifying human intent. Our intent was to solve this very painful problem around financial services access. In the same way, I strongly feel, having been in Europe for the last couple of weeks, that we are going to leapfrog even in AI, because, you see, there is no urgency in Europe to adopt AI.
Zero — I was thinking, zero urgency in all these conversations. Everybody's like, yeah, but things work, why should we make them faster? What I mean is that things actually do work for them. There is no need for — what is it called? — efficiency, because efficiency is already there. For many of the people I was speaking to, AI is just another additional chore. For us, AI is a necessity. It's the difference between life and death. I always give this example from one of our startups called TPN, The Pathology Network. Let me put it in context. There are 3,000 pathologists on the African continent, which has 1.5 billion people — do the numbers in terms of ratio. In terms of GPs, it's one doctor to 3,500 people, when it should be one to 50 people. I'm just putting it in context for you. So when AI comes to bridge that gap — I'm a pathologist, I can work a case through pictures uploaded online, give that quick initial diagnosis, connect the patient to the right medicine or the right treatment plan — we will use it, because we are solving a problem that is actually killing us. When you're told that if you install this app and follow these instructions — look out for these signs, take a picture of your plants and upload it to see if they are doing well, whether there is a disease, what production your land is projected to have — you're going to use it. You're going to use it for your kids. I hear everybody in the US complaining about screen time. I'm saying screen time is the only time my child can access this information: when I'm at home, I'm going to give them my phone to go and learn. We are adopting this technology to take us to that place. For everybody else in the world it's about efficiency and increasing productivity; it's a nice-to-have, it's a chore. I'm using the word chore because I remember somebody saying, you know, AI is just another technology, things already work here. And I'm thinking to myself, ask us — we're using it to leapfrog ourselves. So the world is still figuring it out. Africa is going to lead in AI. I am more than confident of that, because our team is working really hard on some of those foundational problems that are still unmet, and I'm meeting all these amazing innovators across the continent who are saying, if we have this, we will be able to solve that. So our goal is to work with partners on those foundational aspects. So let the world continue fighting about regulation, fighting about what the role of AI is, whether it is going to take over humanity — we are going to show them how it's going to be used to solve our problems. Ross: That is awesome. That is absolutely fantastic. I can just imagine the scope — as you say, these tools are relatively marginal in the highly developed nations, but the difference they can make elsewhere is incredible, and you've just given a fantastic example. It's staggering in terms of the potential, so I'm very, very excited to see how that can be applied at scale. One of the areas I'd like to look at is sovereignty and technology, particularly given that technology has been dominated by big tech, which, because it's big, dominates. It takes power from us in many ways, I suppose.
Whereas, of course, technology has always had the potential to give power to individuals, somehow we often haven't taken that up. So perhaps here again Africa can lead and point a way to where the individual can be the leader, in a world where technology holds the balance of power. Shikoh: Yeah, when I think about sovereignty, I think about sovereignty of the individual from technology, but I also think of sovereignty of geographies and entities. Loosely defining sovereignty, it means the ability to define your own future and your own destiny — if you read all the definitions, they come down to that. It's the ability and capability to define that, right? Sometimes that ability is taken away from you. Why? Because somebody else, somewhere else, is sitting there trying to define what your future looks like. Sovereignty means that I need to be in charge of my destiny, I need to be in charge of my future. And that means being able to understand the context that you're operating in, and not letting other people define your context for you. That's part of the foundational work we've been talking about on AI in Africa. When you go to Europe, they're saying, oh, we need to regulate, we need to regulate, we need to protect ourselves, we need to do this. But when you go there, everything is working for them. Everything works. They have supercomputers they can switch on at the flip of a switch, right? They have been collecting data for ages — our data has already been moved from one place to another. They have talent, they have mechanisms and processes that they are working on, and so they have the luxury to start talking about regulation. For them, as I mentioned, it is just an updated technology. For many of the people that we spoke to, it's not adding anything, it's not adding any further efficiency to what they do. Things already work for them. The pace might be wanting, maybe, but it's still working for them; they don't need to do anything else. But for us, we need to first challenge ourselves, which means that we have to define many of these things for ourselves. They are in no hurry to make anything work, because things already work. Things are not working for us — and they don't understand that things are not working for us. That's the whole sovereignty part: understanding the context and defining it for yourself, defining that future for yourself. And that is the ability and capability of doing it ourselves. So it is building these capabilities on the continent to do this for ourselves, to define and innovate for ourselves, because we understand our problems better. If I went to Europe and told them I have an Uber for pathologists, they'd be looking at me and thinking, what do you mean, an Uber for pathologists? I'm saying this because there's only one pathologist in my whole county of 10 million people. And they're like, what? Exactly — because it is not something that occurs to them. I cannot just walk into a medical center and get health care. They have a right to health care, and the payment gets sorted out with the state later. We don't have that luxury — not because the state cannot pay, but because we don't have the doctors, right? So we have to be able to see that the sovereignty I'm thinking about is also a sovereignty of mind and mindset.
Knowing that we are solving for ourselves — we are solving for things that are very, very African. Other people are coming from totally different contexts, from a totally different place, and their idea of the world, the lens they look at the world through, is very, very different from ours. It's accepting and acknowledging that the lens we look at the world through is extremely different from theirs. Ross: There's a lot I want to dig into there. First, you mentioned the luxury of regulation, and you've just flown back from Europe, which has the most intense regulations around AI and technology and data and so on. So are you suggesting that Africa can flourish better where there is less regulation? Because some of the regulation is, of course, trying to avoid over-concentration of power. When you look at regulation, or its potential role in allowing Africa to flourish through technology, how do you envisage that? Shikoh: To be totally honest, after being in Europe I've stopped hating on them, because I used to think, why are you pushing this down our throats, literally? But being there and seeing it: for them, whether AI works or not, whether it's regulated or not, it will not affect their lives. So I totally get it. They have the luxury of saying, let's regulate everything completely, and then let's try it out slowly by slowly and see if it works for us. For us it's, let's try everything and see what works for us — not if it works for us, which are two distinct things. For them, if it works, that's well and good; if it doesn't work, that's well and good too — it will not affect them. For us, whatever works for us gives us a big step change. And the difference between 'if' and 'what' is huge, it's miles apart, because we are looking at how we can innovate around this technology to close our gaps. They are asking, how can we use this technology for efficiency, to just make life a little bit better? But if it did not exist, that would be fine; they don't care. And for me, that is the difference. My mindset changed from thinking we don't actually need AI to survive — we do need AI, in the same way we needed everything from mobile phones to cloud computing, everything. We don't have enough computers, so we rely on cloud computing, right? We don't have a computer in every school, so we rely on mobile phones for that. We don't have teachers, we don't have these things, so we have to rely on innovation. We have to constantly and consistently be innovating around technology. In innovation, they talk about these dramatic big innovations — what's the word for it — step change innovation. If you read the book by Clayton Christensen, you will see the different kinds of innovation. For most of Europe, it's small, small innovation that is helping them just a little bit. For us it is not a 10% incremental difference; it's a 500% difference if this thing works. And for us, it's what it can do for us, not if it can do anything for us. Ross: Yeah, one of the things you mentioned there is education. Quality education, personalized to all African youth — the impact of that would be completely transformative, absolutely incredible.
But one thing you said earlier was about the unique African way of thinking, which is very distinct in many ways from the rest of the world — and without trying to define it, because that's too big a question, how does it inform your vision of Africa's potential? We see Africa as a continent of extraordinary potential for so many reasons today. How does the uniquely African perspective and way of thinking shape the vision for what Africa can become in the coming years? Shikoh: To explain how Africa is different, I always give this example, because people don't understand it until you make it practical. In the US, when you use Siri or Google Maps or any of these mapping technologies, they only say: drive straight, turn left, turn right, go straight, turn left and right. We agree on that, right? There are places on the continent where people will say, walk north, then turn south. Other people will say, walk straight up, and when you see a tree of this kind, turn on your left. Those are different ways of thinking, right? And what happens is that we are put in a box of turn left and turn right. Every time somebody says turn right, I have to lift my hand and ask which hand I write with — that is my right hand, and the other one is left. That's how, in my head, I figure out right and left; it's not automatic for me. For many people, left and right is very automatic in their heads, right? And now you can imagine: we are a very community-based, communal culture across Africa. It's not a Kenya thing, it's not a South African thing; across cultures we have a term for it, from the Nguni Bantu languages — Ubuntu — and it's everywhere on the continent: we are because you are. There's no pure individuality, which is another notion around AI that I always argue against, right? It's about us as a community. So when I'm giving directions for somebody to go somewhere, I say: when you see this house, that is so-and-so's tree — it's actually named after a person — then you turn left; then you see so-and-so's bridge, then you cross the bridge. We've personalized all these things around our community, around our heritage, right? And that is what we are bringing to the world. We're bringing this idea that my humanity is not necessarily based on me as an individual or my intellect. So every time I have these conversations with my US friends, they ask, aren't you scared about AI? It's going to take away our ability to be unique, to be individuals. And I'm saying no — why should I be scared about AI? My uniqueness is not formed by my intellect; we recognize that there are other intelligent beings in the universe. As part of growing up, we were taught that intelligent beings can be inanimate as well as animate. We are taught that in many, many African countries — and not just Africa; even in Asia they have a lot of these beliefs — that our intellect does not make us unique. Our ability to think and reason is not unique to human beings. What makes us human is being part of a community: our ability to talk to each other, to have compassion and kindness towards each other — that is what makes us a humanity.
But when you go to the US, when you go to Europe, everybody's by themselves; they go to their small apartment, they don't know who their neighbor is. Back home, every holiday — like Eid these last few days — whether you're Muslim or not, you're celebrating. During the Christmas holidays, everybody's celebrating with their neighbors. Why? Because that's how we were brought up, in community. I'm going to my neighbors', the whole community comes together, we bring our food together and have a really good time, right? And for us, that communal way of thinking — that I don't think of myself alone, I think of myself in the context of other people — is what makes the difference. When I think about Shikoh, I don't think of Shikoh as an individual; I think about Shikoh within the context of my family and the village I came from. So anything I do impacts my village, and everything in my village impacts me. It's a two-way conversation, and people don't understand that. We have deep roots here. We have a deep association with each other, and that is something that does not translate into many of the AI models being pushed. In our paper — I don't know if you've read our paper — we have something we're calling Data Sets and Data Systems. Data sets are what the world knows; data systems are how that data is being used in our context. And that is what is important for us: not to lose sight of what Africa is about. Ross: So that goes, I suppose, to the point of mobile technology being such a transformative tool for Africa, for many, many reasons — including the reality that families are in different places and people want to connect, which is a very obvious way of supporting community. So I'd love to hear how you see AI being able to amplify community — how AI in an African context relates to the reality of community and the ability to support and grow it. Shikoh: So let me backtrack. I feel old when I say this: I was among the first five people, at most, who studied the impact of the internet on the continent through mobile phones. My PhD research was around what the internet looks like for the billion-plus people on the continent on their mobile phones. That was my world. My whole dissertation was about the very early, unknown days of the mobile internet on the continent — very, very early days, no smartphones yet, right? And in the middle of doing my research, Facebook became publicly accessible on the continent. It stopped being a university-only product, and everyone was able to access Facebook. At that time I saw a switch even in my research, because at that time Facebook became the internet, right? And equating Facebook with the internet changed things. Why? Because I was able to connect — I was not in the country, and people I had not seen for many years were coming onto Facebook, so I was able to connect with them and link back to my childhood, link back to many, many things. Think of it from that point of view: Facebook became the internet for many, many African people, and we have to credit Facebook for that. And then it became even better with WhatsApp. Now I am able to create these tight-knit communities within WhatsApp.
And for many, many of us — even my grandmother, my mother, everybody — the extended family is on WhatsApp. We have a WhatsApp group for the extended family and for every level of community you can think of, down to my siblings, right? And that has connected us as families, connected us with people we would otherwise have lost touch with. Without the internet, when I traveled out of the country I couldn't connect with my cousins; now we are very, very tight, we can talk every day until midnight. It's a nice, nice community, and family is what gets built on these platforms, right? When I think about AI, AI has the ability to do that even better: the ability to engage, to enable people to see patterns, to connect, to find help for each other. Family for us is not only for connecting but also for finding help and supporting each other in very, very dark times — but also, I'm having a baby, does anybody want to come and sit with me for the next two weeks? Right? So that is where we are: we are looking at the internet being an intricate part of us, again amplifying our intent to be a community and helping us create communities across the board. And for me, what is critical is being able to create those bonds — using agents, using our understanding of each other, agents understanding each of us and being able to notify us if something is not right, or helping somebody seek help, whether in health, finance, education, any type of help. But most importantly, helping us bond better. Because once I have a better understanding, I'm able to bond better. Ross: Fantastic. So to finish up, I'd just like to get a few words from you on the potential for amplifying the humanity of Africa. I think that's your mission, with extraordinary other people on that journey with you. Where could this go? What is the vision for how Africa's beautiful humanity can be amplified to the fullest? Shikoh: Before AI can amplify our humanity, AI needs to accept and acknowledge that Africa actually exists. We wrote an op-ed a few weeks ago where our hook was the Gemini debacle, right? And for me it was hilarious, because as an African woman I have been erased over and over again. I do not exist — the number of times I receive an email saying, Dear Mr. Gitau, all of these things, because nobody bothered to Google and find out that I'm a woman, right? So it was very funny to see these threads upon threads of conversation around Gemini erasing white men, because the debate was primarily about white men more than anything. As I was telling my friends at Google, you need to find the person who made that bug and give them a raise, because they brought to life what we experience every day. But when it's flipped onto the other side, then yes, it is actually quite painful, right? And we need to acknowledge, yes, that it was a bug — or maybe not, I don't know; I've seen only so many bugs in normal technology, so I understand it may not have been a bug at all. But for me, what's exciting about it is being able to show that this can be undone — the narrative of humanity can be undone. Erasure is a very conscious thing that people actually do.
In the same way you can erase somebody, you can decide not to erase them. So it is acknowledging that the African continent is a continent of 1.5 billion people, a huge landmass — not minimizing us into something small in the middle of the globe. We are one of the largest continents, yet every time we teach geography we minimize the place of Africa in the world, right? We minimize the intellect of African people, of black women, of people of African origin. AI can help with that acknowledgment, and Africans can do the rest. Because right now we are fighting bias, barriers, and hurdles just to get acknowledgment. Once acknowledgment is there, we will do the rest. We are not asking the world to do us a favor; we are saying, can we stop believing that Africa is this small thing in the middle of the globe that is a nuisance to the world? Africa has a lot to offer the world. That is my closing remark. Ross: That's fantastic. As you say, you'll be able to do it for yourselves — and you already are. And that kind of erasure will keep fading away as Africa makes a bigger and bigger impact, thanks to you and so many other wonderful people on the continent. So thank you so much, not just for your time and your insights today, but also for all of the wonderful work you're doing to amplify the humanity not just of Africa, but of the world. Thank you.  Shikoh: Thank you so much. The post Shikoh Gitau on amplifying humanity, Africa's AI leadership, technology sovereignty, and the power of community (AC Ep42) appeared first on amplifyingcognition.
undefined
Apr 24, 2024 • 39min

Tom Hope on AI to augment scientific discovery, useful inspirations, analogical reasoning, and structural problem similarity (AC Ep41)

“The unique ability of AI and LLMs recently to reason over complex texts and complex data suggests that there is a future where the systems can help us humans find those pieces of information that help us be more creative, that help us make decisions, and that help us discover new perspectives.” – Tom Hope About Tom Hope Tom Hope is Assistant Professor and Head of the AI Research Lab at Hebrew University of Jerusalem and a Research Scientist at Allen Institute for AI. His focus is developing artificial intelligence methods that augment and scale scientific knowledge discovery. His work has received four best paper awards and been covered in Nature and Science. Google Scholar: Tom Hope LinkedIn: Tom Hope What you will learn Exploring the intersection of AI and scientific discovery The role of large language models in navigating and utilizing vast scientific corpora Current capabilities and limitations of LLMs like GPT-4 in generating scientific hypotheses Innovative strategies for enhancing LLM effectiveness in scientific research Designing multi-agent systems for more insightful scientific paper reviews Future projections on AI’s evolving role in scientific processes Complementarity of human and AI cognition in scientific discovery Episode Resources AI (Artificial Intelligence) LLM (Large Language Models) GPT-4 Claude PubMed Simulated annealing Swarm optimization AlphaFold Semantic Scholar Google Scholar People Nicholas Carlini (DeepMind researcher) Nicky Kittur (from CMU) Joel Chan Daphna Shahaf   Transcript Ross Dawon: Tom, it’s awesome to have you on the show. Tom Hope: Thank you, thank you for having me. Ross: I love the work which you are doing. And I suppose the big frame around this is how we can use computation to accelerate and augment scientific discovery. So,  just love to sort of start off well, what are some of the ways in which computation including large language models can assist us in the scientific discovery process? Tom: One of the main ways I currently look at this is using large language models and more generally, AI to tap into huge bodies of humanity’s collective knowledge, scientific corpora, as a great example, millions of papers, over 1 million papers coming out in PubMed, every single year. Of course, you have patterns, you have many other sources of technical knowledge. And these sources of knowledge, potentially our treasure trove of many millions, if not billions, of findings, methods, approaches, perspectives, insights; but our human cognition, while extremely powerful, and its ability to extrapolate and be creative, pull together all kinds of diverse perspectives, it’s still very limited in its ability to explore this vast potential space of ideas, this combinatorial space of all the different things you can combine and the different things you can look into.  As our knowledge continues exploding, so obviously, there are going to be more and more directions to explore as a result. So this problem keeps accelerating, with our knowledge accelerating. So the unique ability of AI and LLM recently to reason over complex texts, and complex data suggests that there is a future where the systems can help us humans, find those pieces of information that help us be more creative, that help us make decisions that help us discover new perspectives. By taking out problem contexts, the current thing we are interested in and working on a decision we want to make. 
And then somehow representing that in a way that enables retrieving these different nuggets or pieces of knowledge from these massive corpora, synthesizing whatever was retrieved into some sort of actionable inspiration or insight that helps us make the decision — and potentially even automating some of these decisions and some of these hypotheses that we make as part of our process. There's still a long way to go there; I guess we'll talk about that right now. Ross: Yep. Well, I'd love to dig into some of the specifics and details of the strategies for that. But just to start off, pulling back to the big picture: how do you envisage the complementary roles of human cognition and, let's call it, AI cognition in this process of scientific discovery? Where might that go in terms of those complementary roles? Tom: So, we are living in quite revolutionary times in this area, right? Things keep changing very rapidly, so to prophesize on what the ability of AI is going to be a year from now, or even a week from now, is a risky business. We can talk about what things currently look like. Currently the ability of LLMs — as the representative of state-of-the-art AI — to extrapolate from what they have seen in their massive training, like the entire web or the entire corpus of arXiv papers, let's say, is quite limited. In our experiments and experiments by others — there's a nice quote I like from a DeepMind researcher, Nicholas Carlini — working with GPT-4 is less like having a co-author on a paper and more like working with a calculator. A particularly strong calculator, right? But still, it's a calculator. So if you want to come up with a new direction or creative direction, which as a scientist or a researcher is a lot of what we do, currently it's quite limited. To give you an interesting example, just yesterday I tried to prompt GPT-4 to come up with a creative new idea for mining scientific literature to generate new scientific hypotheses. It's kind of a meta question, because you're asking how it could use itself to come up with a new scientific direction. I told it to be non-generic, to be technical, and to go into details, etc. And what it came up with was: use predictive analytics and natural language processing to find new trends and directions. Okay, so then I tell it, well, GPT-4, that's a bit too generic, can you please be more specific? And then it says, okay, so let's use quantum natural language processing and quantum predictive analytics. So its ability to do this task is very limited at the moment. It will either go for these generic suggestions or recombine all kinds of popular concepts, which is not what we want from an AI scientist. So currently, as a short answer, based on the current state of the art — and again, not saying what will happen in a week or in a year — LLMs can be our extensions, to scale up the way we search for the relevant pieces of knowledge, and potentially search for inspirations. Because we're currently limited to seeing very narrow segments of human knowledge; even in our very own specific areas, we're kind of losing the ability to keep track. So it could be that if we extend even slightly out of our narrow tunnel vision, suddenly the gold nugget, that great inspiration we're missing, will be out there, right? So LLMs can be that search agent.
But the ability to synthesize a creative idea and to reason over it, and extrapolate into proposing something new and solid and reasonable. Currently, that’s where humans are still needed. Ross: Yeah, absolutely. And for good times to come.  Tom: It looks like.  Ross: So what I love about your work is that you have found ways to architect or to use LLMs and ways that are far more effective than out of the box. So for example, just ask GPT4, or Claude or something, it might give you a decent answer, or it might not. And even if you’ve broken, poke and prod a bit at it, whereas you have discovered or created various architectures, we’re bringing these together. And so for example, in your literature based discovery, or in multi agent review processes, or, indeed, in your wonderful recent paper on scientific inspiration machines optimized for novelty.  So, we’d love to just hear. I suppose the principles that you have seen work in how you take LLM is beyond just a text interface, towards  where it does create better, more insightful, more valuable complements to scientific understanding and advancement. Tom: Yeah, sure. So, one core principle goes back to what we just discussed: the ability to retrieve useful inspirations. Okay, so we need to think about what an inspiration is, right? An inspiration is something that stimulates in our mind some sort of new perspective, or some sort of novel way to look at the problem – that’s, let’s say, one of the main ways to think about inspiration. And now you want to be able to give the LLM the ability to retrieve useful inspirations. That is, problems, let’s say or potential solutions from somewhere around the design space of the problem you’re currently looking at. So problems that are not too near but also potentially not too far. There is some sort of sweet spot for innovation, right? So if you want to be able to translate what I just said into some technical notions, you can embed your problems, and embed the solutions in some sort of vector space that enables the LLM to search for these inspirations. Then, prompt the LLM to consider those inspirations, synthesize a new direction, and then reconsider its idea in light of what’s out there already. And that’s when it’s in the specific context when you’re trying to innovate. Innovation, from the novelty is directly tied in to comparing to what’s out there and expanding. And extrapolating out of what we currently know.  So the second design principle is to have the LLM reconsider its ideas by comparing to existing work. And that is, again, a form of retrieval. But it’s a different form of retrieval. Whereas, in the initial retrieval I mentioned, we want to be able to retrieve kind of structurally related partially related pieces of information, not necessarily more like things that are in the immediate neighborhood of your problem, but things that are kind of slightly outside of it. In the second phase of retrieval, we want the LLM to kind of be very accurate. And given that it’s an idea, we wanted to now find the closest matching ideas out there, kind of like what a reviewer would do when considering a scientific paper. When a reviewer considers the scientific paper they want to know — Okay, here are five papers that are the closest to what these new papers are proposing, how close are they? Is the idea that’s being proposed incremental or not? And the LLM needs to be endowed with this ability to find the most relevant work, and then compare and contrast it and kind of iterate over that. 
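To make the two design principles described here concrete, the following is a minimal Python sketch of what such a retrieval step might look like. It is an illustration only, not the implementation from the paper under discussion: the embed function is a hypothetical stand-in for whatever embedding model is used, and the near/far thresholds that define the "sweet spot" band are assumed values.

# Illustrative sketch only -- not the implementation from the paper discussed here.
# `embed` is a hypothetical stand-in for any text-embedding model.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding function; plug in a real model here."""
    raise NotImplementedError

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_inspirations(problem: str, corpus: list[str],
                          near: float = 0.75, far: float = 0.45, k: int = 5) -> list[str]:
    """Principle 1: keep candidates in a 'sweet spot' band of similarity --
    related enough to be relevant, distant enough to be non-obvious."""
    q = embed(problem)
    scored = [(cosine(q, embed(doc)), doc) for doc in corpus]
    band = [(s, doc) for s, doc in scored if far <= s <= near]
    return [doc for _, doc in sorted(band, reverse=True)[:k]]

def nearest_prior_work(idea: str, corpus: list[str], k: int = 5) -> list[str]:
    """Principle 2: find the closest existing work, so the LLM can compare
    its draft idea against it, reviewer-style, and revise."""
    q = embed(idea)
    scored = sorted(((cosine(q, embed(doc)), doc) for doc in corpus), reverse=True)
    return [doc for _, doc in scored[:k]]

In practice, the output of retrieve_inspirations would be passed to the LLM to synthesize a new direction, and the output of nearest_prior_work fed back to it for the reviewer-style comparison against existing work.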
So those two design principles we implemented in that paper you mentioned of innovating, of scientific inspiration mentioned machines optimized for novelty.  Ross: Just one question is, do the major large language models have sufficient corpus of scientific research? Or does this require fine tuning or retrieval, augmented generation are other approaches to ensure that you’re addressing the right body of work.  Tom: In my experience, it definitely requires retrieval augmentation, fine tuning could also help — that’s a different story, because our ability to fine tune GPT4, for example, does not exist, right, because it is not open for fine tuning. And it’s quite a big leap over other state of the art models, you know, Claude 3 is now getting close, but also we cannot find that. And retrieval augmentation is crucial for multiple reasons. First, you know, while the language models have been trained on, as we said, the entire web and probably have seen many of these papers out there; that does not mean that we can directly access that knowledge and get the LLM to access that latent knowledge with some prompt. If you just ask it to, let’s say, come up with a way to relate to the work that’s closest to some input problem that you feed in, it may well hallucinate a lot. And also kind of tend to focus. And this is rough intuition tend to focus on the more popular common areas that it’s seen during training in less and less exponentially at the kind of tails of the distribution of, let’s say, scientific papers and see and this is kind of very hand wavy, because no one knows exactly what’s, how to quantify what’s going on there when it’s retrieving knowledge on this latent parameter space. But intuitively, that’s probably what’s happened. Right? So by retrieval, you can get a much finer level of resolution control when you’re able to retrieve the exact scientific papers or sentences of nuggets of information you want the LLM to consider when it’s coming up with a new idea. Ross: So I was very interested in what you said earlier about finding the ideas that are sufficiently far away, but not too far away, as it were. And so how can you architect that, as you say, given that, the LLM probably is not really familiar with those concepts within the body of work that it has.  Tom: So the way I think about this is via structural similarity, structural connections. To give you one of the most concrete examples, analogies. I’ve, in the past, and also fairly recently worked with, for example, Nicky Kittur, from CMU, Joel Chan, Daphna shall have on computational analogy, which is this kind of long old idea in the eye, where given some input, you can find abstract structural connections and analogies to other inputs. So for example I like, let’s say you have some problem in optimization, you want to optimize some complex function or objective. Where would you get inspiration for doing that? Right? So if you use this kind of standard, let’s say, search over a big corpus of technical problems, and solutions, you’ll find many other optimizations, maybe you’ll find some sort of other pieces of knowledge on mathematics and operations, research, etc.  But can we go further and find inspirations from let’s say, nature, from physics from, from how animals cooperate, right? So that is actually something that humans have done in the past, right? So humans have used inspiration from thermodynamics to come up with what’s known as simulated annealing, right? 
The same sort of analogy between how thermodynamics behaves in metals — metal heating and cooling, etc. — gave us an analogy for the energy of an objective function; or swarm optimization approaches, optimization approaches based on multiple agents, let's say ants, searching some complex space and then gradually converging on the local or global optimum points. So that's something that with standard search you're not going to be able to find, but with structural abstractions — being able to match on partial aspects of a problem or partial aspects of a solution — you can certainly get the retrieval to go outside its initial local bubble and find more diverse perspectives. Ross: Structural similarities of problems — if you can find similar ones, that's immensely valuable. So how specifically do you get the LLM to identify structurally similar problems or challenges? Tom: I'll give you one example that we kind of pioneered a few years ago, where we break down an input text — let's say a description of a past idea in a scientific paper — into two fundamental aspects: problems and mechanisms, and the relations between the mechanisms and the problems, right? So which mechanism was used for which sub-problem, connections between mechanisms, etc. And given that you have this breakdown, you can now build a kind of search engine that finds you ideas that share similar mechanisms. Ross: To what degree is it humans or AI doing that structural mapping? Tom: AI does the two main pieces of heavy lifting in this pipeline. The first is going over millions of, let's say, papers or patents, etc., and automatically extracting these aspects — the purposes, the mechanisms, etc. And then, as a second step, when you have some sort of input — let's say you want to find inspiration — you conduct automated retrieval: you find inspirations with similar mechanisms but very different problems. You can start by embedding these different aspects, and you can come up with all kinds of similarity metrics that consider partial matches — matching on certain mechanisms, or matching on mechanisms while constraining the domain or the problem space to be distant from the input's. In that way, given some sort of problem on designing materials, you can come up with inspirations from biology — some of those are real examples we've seen. Or we've helped a researcher who was having some problems discover connections between graph mining and — whatever their application domain was, I won't go into those details right now — discovering connections from that into decision theory. So it's by conditioning on certain mechanisms and problem key phrases, but not others. Ross: So in one of your papers you looked at using an LLM to provide review feedback on scientific papers. And I suppose the basic finding was that if you just asked the LLM, it didn't do a particularly great job, but you built a multi-agent structure which created far better, more incisive, more useful feedback on the paper. The thing about multi-agents is the architecture — how the multiple agents combine in order to create better insights. I would love to hear how you structured those multiple agents to create that better review feedback on a scientific paper.
Tom: So just to connect that to what we were saying: the ability to review an initial idea, or to review a scientific paper, is kind of fundamental if you want to automate the process of coming up with better hypotheses, because a reviewer agent can then refine an initial idea. And the most basic form of review is finding related work and contrasting against it, which, as I discussed, is something we've already done. But now, in the paper you're mentioning, we tried really hard to get GPT-4 — you know, a state-of-the-art model — to give us better feedback on a manuscript. And when I say a manuscript, it's a full PDF; it's not just an abstract or a few sentences. A main issue we saw was in terms of specificity. When we asked GPT-4, even with a lot of prompting effort, to generate some sort of critical review of the paper, it often came up with suggestions like: you should consider more ablation studies, or you should consider adding statistical tests, etc. And when you think about it, those are nearly always correct, right? It's pretty rare to have a paper that shouldn't consider more ablation studies or do more statistical tests. So if you just evaluate the accuracy of that, well, it's probably going to get you very close to 100%, because it's pretty much always correct. But is that really useful? We've also seen some previous work, also very recent, on using LLMs like GPT-4 to generate reviews, and they seem to have promising results. When you dig deeper into them, we find that a lot of the so-called promising results are because of that — because they generate kind of generic suggestions. So to make LLMs more specific, what we found to be the most effective, currently at least, is that multi-agent architecture you mentioned: getting multiple LLMs to each focus on a very specific aspect of a paper, or a very specific aspect of the reviewing process. So one will focus only on the experiment section, one will focus only on clarity, one will focus only on the aspect of novelty compared to previous work. And then you get them to orchestrate, right? You can think of the orchestration as an idealized meta-reviewer. A meta-reviewer, unfortunately, at least in our area of AI, hasn't had the bandwidth and time to coordinate between reviewers and to have them focus on specific aspects. You sometimes see that in journals — high-quality journals not flooded by so many thousands of submissions every month — where the editor will reach out to expert reviewers, each one focusing on a specific aspect related to their expertise, and then coordinate between them. So this kind of orchestrator LLM can take that role. Ross: Are there any specific aspects of that orchestration in terms of how you guide the LLM to do that? Tom: Our focus was to break the task down into multiple LLMs, each focusing on specific aspects. And then the orchestrator wasn't far from what you'd imagine in a basic implementation: it would take messages from each one of the reviewers, consolidate them, and pass messages back to the other reviewers so they could consider context from the other LLMs. Part of the reason we did this was also because, at least when we were conducting our experiments, using one large language model to take in a full scientific paper was outside of its reasonable ability in terms of the context window.
When I say reasonable ability, I mean that the ability to take in 128k tokens was only added toward the end of our experiment cycle. But even with that, there's a lot of work on what are called "lost in the middle" effects: the ability of the LLM to reason over complex, long documents diminishes quite rapidly, even with fairly easy questions. And this is a very complex task, requiring back-and-forth reasoning, comparing different parts of a paper, seeing if one claim is supported by another, etc. So that's why we needed to break it down into multiple agents; the orchestration was fairly standard. In that sense, the main component here is how to break the reviewing down into different aspects. Ross: So we've talked about a few different structures — the multi-agent one, analogical reasoning, other ways to find structurally similar problems. Are there any other high-level architectures you point to in your work that enable LLMs to accelerate scientific discovery? Tom: In terms of analogical reasoning, you can zoom out and see it as a specific design choice falling under the more general building blocks of creative thinking, such as associative thinking or divergent thinking, with analogies as a fundamental and wide-reaching function for achieving those. So a different way to think about this would be to, let's say, forget about the aspect of analogies and just diversify the inspirations you're looking for — not necessarily in terms of analogies, but just diversifying your retrieved nuggets of knowledge. Another thing we've been exploring is recombination — tightly related to analogies, but not exactly the same — where we're trying to recombine concepts. And when we try to recombine concepts, the question is how you select the right ones: you want to recombine things that are not too close to each other, that have not been recombined in the past, or whose nearest neighbors have not been recombined in the past. But you also want some notion of feasibility, right? You want to be able to predict the outcome of what's going to happen when these two concepts are merged together. Is it going to have some sort of impact you can anticipate? Based on historical combinations, is this combination likely? So you have this very challenging balancing act of novelty, likelihood, feasibility, and impact, and we've started scratching the surface on some of these. To give just one example — and I can elaborate if you want — in the scientific inspiration machines paper we also have fine-tuning experiments. Fine-tuning allows you to learn from past combinations of ideas, and when you're fine-tuning, you're essentially optimizing for likelihood: the likelihood function of the LLM is what you're optimizing, the likelihood of seeing a sequence of tokens given the input. In our case, that translates to the likelihood of proposing some idea given a problem. So if you learn from past examples, you're optimizing for likelihood, which corresponds to a different notion of what you want the LLM to do when it's coming up with ideas. But of course, if you're optimizing only for likelihood, you're kind of converging into the mainstream, and you want to balance it with novelty. That's what we've started to do.
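The balancing act described here — likelihood versus novelty — can be pictured as a simple re-ranking step. The sketch below is a toy illustration under stated assumptions, not the fine-tuning objective from the paper: log_likelihood and embed are hypothetical stand-ins for a language-model scorer and an embedding model, the two scores would need normalizing to a common scale in practice, and the alpha trade-off weight is an assumed value.

# Toy illustration of balancing likelihood (plausibility) against novelty when
# ranking candidate idea recombinations. Not the method from the paper.
import numpy as np

def log_likelihood(problem: str, idea: str) -> float:
    """Hypothetical: mean token log-probability of `idea` given `problem`
    under some language model (higher = more mainstream / plausible)."""
    raise NotImplementedError

def embed(text: str) -> np.ndarray:
    """Hypothetical embedding model returning a vector."""
    raise NotImplementedError

def novelty(idea: str, prior_ideas: list[str]) -> float:
    """1 minus the cosine similarity to the nearest existing idea."""
    v = embed(idea)
    best = 0.0
    for p in prior_ideas:
        w = embed(p)
        best = max(best, float(np.dot(v, w) / (np.linalg.norm(v) * np.linalg.norm(w))))
    return 1.0 - best

def rank_candidates(problem: str, candidates: list[str], prior_ideas: list[str],
                    alpha: float = 0.5) -> list[tuple[float, str]]:
    """Blend the two signals: alpha toward 1 favors mainstream, plausible ideas;
    alpha toward 0 favors novel but possibly infeasible ones."""
    scored = [(alpha * log_likelihood(problem, c) + (1 - alpha) * novelty(c, prior_ideas), c)
              for c in candidates]
    return sorted(scored, reverse=True)

Optimizing only the first term converges on the mainstream, as noted above; optimizing only the second rewards implausible combinations, which is why the two are traded off.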
Ross: So to round out, I mean, you’re on the edge of this idea of how we can use AI to accelerate scientific discovery. So what is now the frontier? What are the research directions? Where do we need to push against to take the ability for AI to potentially vastly accelerate our scientific discovery process? Tom: So it’s important to note and obviously, we’re not gonna have time to discuss those. It was important to note that scientifically, discovery is not only about, let’s say, hypothesizing creative directions, right? I mean, alpha fold, as a kind of leading recent example for protein structure, and then leading to protein generation is a great example of an AI that can help boost scientific discovery without necessarily being creative in the sense that we think about it at least.  So there’s a lot of tasks that fall under the process of making scientific discoveries – designing experiments, conducting the experiments with some sort of agent that can actually issue commands to a robot, let’s say in a lab in a wet lab, for example, and then a feedback loop that can kind of help the agent decide what are the more in kind of promising areas in the space of ideas, and then some of the some research groups working on that, as we speak. Another process is finding the information you need, not necessarily inspiration, the information you need to solve a problem. You’re currently having some problem in your experiments on optimizing some part of your device or process, etc. How can an AI agent help you understand your current context, your current problem, and then find information that’s needed to solve it without necessarily being created? So there’s a lot of different aspects that go into science. And all of them need solutions. I think of it as kind of zooming out and thinking of one kind of big answer. The big question is to break down the scientific process into these major building blocks, components, modules, having agents, whether LLM or some other future architecture that may magically emerge and focus on these different modules and components. And do all of that work, while somehow understanding our human contexts or human objectives? Right, what we’re trying to achieve is our preferences, very kind of ill defined, but you know, innate, fundamental human concept and human experience. That’s very hard to convey to LLM by just you know, seeing let’s say, your code or your Google Docs is not necessarily capturing what you want to achieve, what are your preferences? What are your subjective utility functions? What’s your career goal? For example, right, or why a certain combination of ideas is something that appeals to you more than some other combination of ideas, because maybe it aligns more with your values or your ethics, right? So all these different considerations then how to translate those into kind of specific commands, the specific MLMs that can perform actions in each one of those modules that form the scientific process. So that’s the kind of the biggest, let’s say, frontier, how to build systems and models that can do that. Ross: Yep. And clearly we are whilst, your work of you and your colleagues has taken us quite a long way, there’s a massive amount still to go. And this is, but it’s still such an important domain. I mean, the application of this could be transformative, and everything from healthcare to saving advancements space travel to who we are, and all this understanding. So it’s an incredibly exciting field.  
So Tom, where can people go to find out more about your work? Tom: Please check out my Semantic Scholar page, and of course my Google Scholar page. And check out more broadly the fascinating work being done at AI2 on scientific discovery, and of course by colleagues from other institutes. I also have my website online; you can quickly find me on Google. And please feel free to reach out if you found any of this interesting. Ross: Fantastic! We'll provide links to all of your significant papers and also the related areas of interest. Thank you so much, not just for your time and insight today, Tom, but also for your very important work. Tom: Thank you very much for having me, and have a good evening. The post Tom Hope on AI to augment scientific discovery, useful inspirations, analogical reasoning, and structural problem similarity (AC Ep41) appeared first on amplifyingcognition.
undefined
Apr 17, 2024 • 33min

Céline Schillinger on network activation, curious conversations, podcasting for connection, and creative freedom (AC Ep40)

“Criticizing and blaming people, organizational culture, or the company for problems doesn’t lead you to a better place. What may lead you to a better place is to actually roll up your sleeves, connect with each other, and do something about it.” – Céline Schillinger About Céline Schillinger Céline Schillinger is Founder and CEO of We Need Social, which works with organizations globally on engagement leadership. She is the author of Dare to Un-Lead, which was Porchlight Leadership & Strategy Book of the Year and on the Thinkers50 Best Management Booklist. Previously she worked in senior roles in the pharmaceutical industry across many countries and continents. Her extensive awards include Knight of the French National Order of Merit. Website: www.weneedsocial.com LinkedIn: Céline Schillinger What you will learn Exploring the journey from entrepreneurial beginnings to corporate transformation The shock of transitioning to a large pharmaceutical company’s culture The power of forming an employee network to instigate positive change Challenging traditional hierarchies with network activation Leveraging digital tools and volunteer networks for organizational innovation Embracing agency, networking, and community for future-ready organizations Personal practices for amplifying individual capabilities and fostering connections Episode Resources Sanofi Network Activation Employee resource groups Community Studio   Book Dare to Un-Lead: The Art of Relational Leadership in a Fragmented World by Céline Schillinger Transcript   Ross Dawson: Celine, it’s a delight to have you on the show. Céline Schillinger: Thank you so much, Ross. Thanks for having me. Ross: So you work a lot with organizations and amplify their capabilities. And I think the really interesting starting point was, how is it that you think of what organizations are and how they function? What are the underlying principles that guide you? Céline: Yeah, you know, this question came to me quite late in life. And actually, I started my career in small organizations in a very entrepreneurial kind of setting. I was working in Asia at the time. I moved to Asia, quite young, on my own to look for a job, look for adventure. And I started to build my career there, and I spent years in Vietnam, and then in China, and then I joined a large pharmaceutical company returning to Europe after about 10 years. And that was a shock for me to discover this whole new world of large enterprise. It had a different language that I did not understand. I thought I was already sort of a seasoned professional with 10 years experience behind me, but I did not understand this new language. It was talking about frameworks and metrics processes, and I wondered. I did not even understand the job description, I was off the job I was responding to the job offer is so funny, I asked someone to help me decipher this, I think, but that’s part of organizational culture, to have this their own language and references and acronyms and all those things and ways of doing of course, so I discovered the large enterprise. And for a while, I did not question or did not even wonder how it worked. Because I was all in on the pleasure of discovery. It was all about experimenting and meeting new people, and it was great. 
And then progressively I started to realize that there are — how can I say — principles and ways of working which are kind of a religion, in a way. They do not emerge from the field or from common sense; the ways of working are prescribed and determined by habits and beliefs, and not necessarily by what would be needed by customers, by efficiency, and so on. And I had maybe this kind of ethnological view, coming from outside, coming from a very different world. I started to question this, and to question my role in perpetuating role models and behaviors that made no real sense. What was my role in maintaining that? Could I contribute to changing them a little bit instead? But what could I do on my own? Probably nothing. But then, about 15 years ago, I joined forces with other colleagues and we formed a network of people wanting to bring about positive change — not wanting to protest, so I didn't join any union, for example. I joined a network; I co-created a network. And I remember the surprise, the puzzled look on the face of HR. HR did not understand what this thing was about: 'An employee network. Well, what is it?' It was before employee resource groups became popular, and it was really weird for some of them. I remember somebody asking me, who's the boss of your network? I would say, we have no boss, it's a network. They felt it was impossible to imagine another way of organizing than the one they were accustomed to in the organization: a pyramid, with a boss, with a senior leader at the top, people reporting to him or her — often it's a him. And we created, a bit by chance originally — it came as a bit of a surprise for me — something new: a new way of delivering value, delivering value by connecting people around something they want to achieve together. There was no hierarchy, no one giving orders to anyone else. There was a common desire; it was fueled by this willingness, this desire to create change, create an impact.  Ross: This was around 15 years ago within the organization? Céline: 2010, in a big pharma company that I was working with at that time, called Sanofi. We created a new space for freedom, a space for creativity, where we sort of empowered ourselves. And we realized that criticizing and blaming people, or the organizational culture, or the company for problems doesn't lead you to a better place; what may lead you to a better place is to actually roll up your sleeves, connect with each other, and do something about it. Right?  Ross: Absolutely.  Céline: We had no idea until we started this and did it, and it was amazing to realize that we had more power than we thought. We didn't need a roadmap created by somebody else, we didn't need an order or a job description or other forms of prescription to create and to innovate. And so we did that. And to me, it was a whole new world opening up — a whole new world of agency, connection, and community building. Ross: Originally, I mean, I think organizations are networks intrinsically, they always have been.  Céline: Yes, you're right. Ross: So that's been harder to imagine, given the traditional hierarchical structures. And the first thing that started to shift us more towards the realities of networks was actually email. So anybody could send an email to anyone else in the organization.
And so that flattened the organization’s ability to connect. The networks have always been there — that’s the reality; all organizations are networks, and they function as networks. But that recognition, and giving it a name, and giving it, as you say, a framing in the way you’ve done, gives enormous power to the ability to create value. I think this idea of being able to connect whoever is in the organization with where they can create the most value — to solve a challenge or seize an opportunity — if you can have that fluid network enablement, that creates extraordinary value in the organization. Céline: Definitely. I remember a time when I was very naive, and I remember drawing the org chart for newcomers who wanted to understand what this company was about and who we were. I said, ‘Look, let me draw you the organization chart. This is how we work’ – how naive was that, right? Now, in hindsight, I think: oh, this is just a symbolic representation. Ross: Yes, it is not the reality. Céline: Exactly. It is not the reality; it is far from it. It does serve some purposes, including ego-boosting purposes, which are not the most useful thing for business. But yes, we definitely need to expand people’s views to other forms of representation. One of them is something I’ve been working on lately that I call “network activation”, using visualization tools. There are plenty of them on the market, and some can be extremely useful — using those tools to make people look at themselves as a network and realize visually that they are a network, that they are connected by so many more things than they even imagine. It’s very often an aha moment for them: to see that what matters is not so much who is in which position, or how long they are still going to be the boss of this or that, but the density and the quality of our connections. Ross: So are you using digital trails, or surveys — how are you discovering what the networks are? Céline: Survey-based approaches are quite simple and very powerful, because you involve people in the process, right? You explain it to them — Ross: They’re also thinking about it. Céline: Exactly. Ross: I always love — there are a number of wonderful questions in network surveys, and one of them is: who helped you the most in doing your work? Often it’s not the boss, or the person reporting to you; it’s someone quite different in the organization. With these kinds of questions, people start thinking, ‘Oh, well, actually, who is it that I draw on when I need help?’ And that’s already a way of awakening that awareness. Céline: Yes, you’re right. These questions are not always easy to answer, but other questions are easier — for example: what do you know? Which countries have you worked in? Questions about personal and professional experience, history, skills, aspirations. Then, on a map, you realize that these things are actually common with other people that you didn’t even know existed. Now you have a reason to go and talk with them, or to create something; and sometimes you realize that there are potential nodes that can become communities of practice, for example, which are a fantastic way to further an organization — to connect the system to more of itself. Ross: So thinking about this idea of amplifying cognition, or amplifying organizational capabilities —
this is, as you described, a wonderful network activation tool. What are some of the other approaches you use with organizations to amplify the capabilities of individuals or the organization as a whole? Céline: Well, here we are on a podcast. Podcasting, I think, is a super interesting tool to bring to the world of organizations. I’ve been working with a partner, Lila North, on the Community Studio — an offering we’ve been implementing successfully in several organizations, in which a group of volunteers creates an internal podcast together, with a series of episodes, and the volunteer group gets renewed after each podcast season. So you amplify the internal podcast community — it’s really important to have community engagement there, so that the guests become part of this community. This community grows progressively and becomes, not a platform in the technical sense, but an opportunity: a group that enables cross-entity, cross-level conversations, and that creates a habit of, or an openness to, curious conversations. It’s really hard in organizations today to have curious conversations about each other. We’re so focused on our roles, on milestones and deliverables, and we’re still very much enclosed in this hierarchical structure, and this shapes our communication habits. With these kinds of things — internal podcasts, a studio run by a community of volunteers, from which we collect insights in order to create meta-conversations — we’re able to open up, I wouldn’t say dramatically change, the culture; I don’t think that is possible anyway. But at least we open up new possibilities, and whether people seize them or not depends on them. So we always remain very aware of the freedom we need to leave people to act; if they act on an order from anybody else, or on our suggestion, it’s not their own. You need this ownership piece that makes it sustainable. Ross: Most of the best ideas come from conversations — the best thinking, ideas, and perspectives. So what you’re doing is basically having these conversations in public so they can be heard across the organization, and I love this idea of being able to distill that into meta-conversations. But I’m interested in some of the practicalities. You’ve got networked people who are interested in it, but how do you disseminate it so that people listen to it? What are some of the ways in which you help propagate this through the organization? Céline: You have to make it interesting. So we equip volunteers with good questions and interviewing skills, and it’s fascinating to see that they become better and better at interviewing people. In the first episodes they follow the script — the questions we’ve written together; it’s very scripted. Progressively, you see them evolving and actually paying attention to the responses of their guests, and asking follow-up questions. It’s really fascinating to see them develop skills, and it also creates better episodes, more interesting sessions. And promotion and engagement are also part of the work that volunteers get involved in, so we equip them with those kinds of skills too — we help them become engaging leaders, rather than just makers of something.
You know, it’s about engaging colleagues, creating connections, and then conversations on top of those connections. It’s really interesting to see it grow and expand throughout the organization, though some departments engage more than others. For example, it’s very easy to get salespeople interviewed — those people are used to talking and bringing across their points. For others it’s more difficult: people in maintenance jobs and technical jobs on the front line. But that’s the challenge, and it’s precisely what we try to get volunteers to do more of: reach out to the people who do not have a voice, build rapport, and create the conditions for them to come and talk and express their views, because we don’t hear those people enough. Ross: I can imagine — you get a far richer flavor of the organization than you do by speaking only to the people you deal with in your current projects and work. To be exposed to, as you say, technicians or maintenance staff or other far-flung parts of the organization would really build a sense of belonging. So this is internal only — just available on the intranet? Céline: Yes, this is internal only, because of freedom of speech. It’s already not easy for people to speak openly on an internal podcast; if it were external, it would be way too challenging. But I’m thinking of another example of amplification, a piece of work I was involved in back in 2014 to 2018, when I was an internal change agent — I was not yet working on my own at that time. It was a really interesting piece of work involving volunteers in quality improvement. Pharmaceutical factories face enormous industrial challenges, and for many years the company had addressed those challenges through the quality department — a small group of experts, professional people, highly dedicated to their mission. But this was not enough, and the outcomes were not great. Things only changed when we amplified this work by involving volunteers — people from all over the company, not just quality professionals, but anyone who wanted to take part: you could be a legal expert, you could be an admin, you could be a technician; anyone was welcome to participate in this movement. There are ways of creating a movement, and it doesn’t work every time, but in this particular case it worked beautifully. We engaged, I think, around 5,000 people instead of the original, I don’t know, maybe 250 who were involved in improving quality — and with many more people came many more viewpoints, a greater diversity of perspectives. And also by making this not just intellectual work — how do we solve problems? — but emotional work too: how do we connect around solving problems? How do we make it engaging and interesting? How do we create enthusiasm? How do we create desire? So this is what made it work. Ross: So extending your ideas — I’m not sure to what degree you think of yourself as a futurist, but I’d like you to cast forward with the ideas you have around the sorts of organizations that are truly effective. Let’s say 2030, or whenever in the years to come; we have many unfolding forces.
What are some of the things you would point to in this very successful organization of the future, and how can they be enabled? Céline: You know, I’m not a futurist at all; I’m a presentist, because these things already work now — you don’t have to wait until tomorrow to put them into practice. I would recommend three key practices, or lines of thought, that I have found immensely useful for myself, my colleagues, and now my clients. The first line of thought is around agency: creating more space for creative freedom. Instead of trying to enclose people further and further into narrow job descriptions or scripted courses of action — trying, in a way, to turn them into robots, and we will never be great robots — it’s about removing those constraints and expanding the territories in which people can first get back the thinking capacity that is so often lost under process in organizations. So it’s about recreating space and time to think, and it starts with ourselves: what do I maintain from this system that deserves to be changed? How can I be authentic and really walk the talk in what I do and what I think? So agency is about creating more space for people to act for impact, to decide, to negotiate, to create sense-making opportunities, and so on. The second line of thought is around networks: removing this pyramidal, hierarchical thought pattern we talked about. I think hierarchy will not go away — it is still useful in many ways — but removing the patterns of domination and submission that it entails will be immensely useful, so that information can flow faster and be more readily accessible throughout a network. Ross: What are some of the enablers of that? We want to create more networked organizations — I often say that the successful organizations of the future will be very effective networks. What are some of the things that enable that? Céline: Think volunteer networks, digital networks, enterprise social platforms, communities of practice. Think of the network activation with network visualization I mentioned, or the Community Studio amplifying stories in a peer-to-peer mode. The possibilities are infinite. As soon as we move away from this pyramidal thought pattern and look at networks and what could enable them, we realize the possibilities are limitless. Then we need to find the most practical and simple solutions to put that forward, and creating a volunteer network around an opportunity — something that really matters to the organization — is a good way to start. If you create a volunteer group around something that is not really valued, that is not that important for the organization, it will not produce much impact. But it works if the leaders, if the company, get really serious about it: let’s involve more people, let’s change the type and the nature of our conversations. I remember a client I worked with a few years ago who had decided on a new technology they wanted to roll out. I suggested that instead of just rolling it out, they create conversations with the people who would be affected by this new technology, and create a volunteer group of people who wanted to address the technology and, more broadly, the digital future of the company. It was a very simple move.
But by doing that, we transformed people from victims of a change into co-creators of a change. And we formed networks among these people, and between these people and their leaders. Ross: So is that the third point, too? Céline: Yes — community building, exactly. The third line of thought is creating community: bringing the network together and making it stick together, so that it doesn’t go off in all directions — sticking together around a big opportunity, the vision of a better future that is co-created by those people, not just by the executive team in a boardroom, but involving at least a representative sample of the organization, so that a diversity of perspectives is present from the very start of an initiative. And then there’s a lot of effort to be made to reinforce the value of this community, so that people do not default back to a purely functional vision of their role. Ross: So, to round out, I’m interested in you personally — you have expansive ways of thinking and experience; in what ways do you apply that? I’d love to hear anything you do personally to amplify your own cognition, thinking, and capabilities. Céline: I’ve been using digital networks a lot myself, and that has been a huge enhancer, amplifier, enabler. I’ve been able to connect with people and learn about new ideas and new thoughts — I was an avid Twitter user, that’s the first thing. So, connecting with people — and I’m very sad about what Twitter has become, because I don’t like it anymore, but what it was in the past was a really amazing blessing, and I’m very grateful for that. The other practice I’ve used personally is to write. It took some effort for me: at first I felt absolutely unable, and not even legitimate. I thought, what could I write about? Why would people even read anything by me, or about my ideas? But I was pushed gently by friends of mine who said, ‘Yes, you should.’ And actually, writing is a fantastic way to organize your thoughts, to expose them to others, to be challenged by others, to grow. Now, as I look back at some of my older posts, I think I wouldn’t think this way anymore — but it was a necessary step in this process of sense-making, really. So I think writing is a great thing, and making it public so that you can share and connect and learn from others. Ross: It creates a feedback loop. There are networks of thought as well as networks of people — networks of ideas. When you put things out there, that starts to catalyze different connections, different ideas, different possibilities. So, absolutely. Céline: Yes. And participating in podcasts like yours is another way — that’s why I’m extremely grateful for the opportunity. Ross: Where can people go to find out more about your work? Céline: They can find me on LinkedIn, and also on my website, weneedsocial.com. Ross: Fantastic. Thank you so much for your time and your insights, Céline. It’s wonderful work you’re doing. Céline: Thank you so much, Ross. Very grateful. The post Céline Schillinger on network activation, curious conversations, podcasting for connection, and creative freedom (AC Ep40) appeared first on amplifyingcognition.
12 snips
Apr 10, 2024 • 40min

Sangeet Paul Choudary and Ross Dawson debate AI in the future of work (AC Ep39)

Guest Sangeet Paul Choudary discusses the impact of AI on skill premiums, rise of platforms in the gig economy, dynamics between labor and technology, role of AI in human roles, adaptability in tech change, and strategies for success in an AI-driven future.
8 snips
Apr 4, 2024 • 32min

Charles Hampden-Turner on Mobius leadership, reconciling paradoxes, dilemma strategies, and conscious capitalism (AC Ep38)

British management philosopher Charles Hampden-Turner discusses the power of paradoxes in understanding the human mind and the role of conscious capitalism in today's business world. He explores the use of the Mobius strip as a metaphor for solving complex problems and addresses societal polarizations through integrated thinking. Visualizing paradoxes and the application of imagery in comprehending complex ideas are also highlighted.
24 snips
Mar 27, 2024 • 32min

Charlene Li on generative AI strategy, AI book editors, prompt libraries, and wisdom hacking (AC Ep37)

Charlene Li, author and advisor, discusses generative AI strategy, AI book editors, prompt libraries, and wisdom hacking. Topics include AI's role in research and writing, customizing AI for personalized guidance, strategic business proposals with AI insights, and harnessing AI for customer experience and operational efficiency.
Mar 20, 2024 • 35min

Philipp Schoenegger on AI-augmented predictions, improving human decisions, LLM wisdom of crowds, and how to be a superforecaster (AC Ep36)

AI researcher Philipp Schoenegger discusses the intersection of AI and human decision-making, the impact of ChatGPT on research, AI-augmented forecasting, wisdom of AI crowds, becoming a superforecaster, blending intuition with AI, and insights into the future of AI-enhanced judgment.
16 snips
Mar 13, 2024 • 34min

Bryan Cassady on AI innovation, Humans + AI idea evaluation, increasing diversity with AI, and evidence-based innovation (AC Ep35)

AI's impact on innovation, human-AI collaboration for idea evaluation, leveraging diversity for better outcomes, and evidence-based innovation. Bryan Cassady discusses bridging knowledge in AI and innovation, enhancing team alignment with AI, and exploring interactive AI's role in idea generation.
