
Ground Truths

Latest episodes

Aug 4, 2023 • 39min

Melanie Mitchell: Straight Talk on A.I. Large Language Models

Transcript with Links

Eric Topol (00:00): This is Eric Topol, and I'm so excited to have the chance to speak to Melanie Mitchell. Melanie is the Davis Professor of Complexity at the Santa Fe Institute in New Mexico. And I look to her as one of the real, not just leaders, but one with balance and thoughtfulness in the high-velocity AI world of large language models that we live in. And just by way of introduction, the way I got to first meet Professor Mitchell was through her book, Artificial Intelligence: A Guide for Thinking Humans. And it sure got me thinking back about four years ago. So welcome, Melanie.

Melanie Mitchell (00:41): Thanks, Eric. It's great to be here.

The Lead Up to ChatGPT via Transformer Models

Eric Topol (00:43): Yeah. There's so much to talk about, and you've been right in the middle of many of these things, so that's what makes it especially fun. I thought we'd start off with a little bit of history, because when we both were writing books about AI back in 2019, the world kind of changed since publishing. And in November, when ChatGPT got out there, it signaled there was this big thing called a transformer model. And I don't think many people really know the difference between a transformer model, which had been around for a while but maybe hadn't come to the surface, versus the deep neural networks that ushered in deep learning, which you had so systematically addressed in your book.

Melanie Mitchell (01:29): Right. Yeah. Transformers were kind of a new thing. I can't remember exactly when they came out, maybe 2018, something like that, from Google. They were an architecture that showed that you didn't really need to have a recurrent neural network in order to deal with language. Before that, in Google Translate and other language processing systems, people were using recurrent neural networks, networks that sort of had feedback from one time step to the next.
But now we have the transformers, which instead use what they call an attention mechanism, where the entire text that the system is dealing with is available all at once. The name of the paper, in fact, was "Attention Is All You Need," and by attention is all you need they meant this particular attention mechanism in the neural network. That was really a revolution, and it enabled this new era of large language models.

Eric Topol (02:34): Yeah. And as you aptly pointed out, that was five years ago. And then it took, oh, five years for it to come into the public domain with ChatGPT. So what was going on in the background?

Melanie Mitchell (02:49): Well, you know, the idea of language models (LLMs), that is, neural network language models that learn by trying to predict the next word in a text, had been around for a long time. You know, we now have GPT-4, which is what's underlying at least some of ChatGPT, but there was GPT-1 and GPT-2, you probably remember that. And all of this was going on over those many years. And I think that those of us in the field have seen more of a progression, with the increase in abilities of these increasingly large language models. It has really been an evolution. But I think the general public didn't have access to them, and ChatGPT was the first one that was generally available, and that's why it sort of seemed to appear out of nothing.

Sparks of Artificial General Intelligence

Sentience vs Intelligence

Eric Topol (03:50): Alright. So the inside world of computer science kind of saw a more natural progression, but people were not aware that LLMs were on the move. They were kind of stunned that, oh, look at these conversations I can have, and how humanoid it seemed. And you'll recall there was a fairly well-publicized event where a Google employee, back I think last fall, was put on suspension and ultimately left Google because he felt that the AI was sentient.
Maybe you'd want to comment on that, because that's kind of a precursor to some of the other things we're going to discuss.

Melanie Mitchell (04:35): Right. So one of the engineers who was working with their version of ChatGPT, which I think at the time was called LaMDA, was having conversations with it and came to the conclusion that it was sentient, whatever that means. You know, that it was aware, that it had feelings, that it experienced emotions and all of that. He was so worried about this, and I think he made it public by releasing some transcripts of his conversations with it. And I don't think he was allowed to do that under his Google contract, and that was the issue. That made a lot of news, and Google pushed back and said, no, no, of course it's not sentient. And then there was a lot of debate in the philosophy sphere of what sentient actually means and how you would know if something is sentient. And it's kind of gone from there.

Eric Topol (05:43): Yeah. And then what was interesting is that in March, based upon GPT-4, the Microsoft research group published this sparks paper, where they said it seems like it has some artificial general intelligence, AGI, qualities, kind of making the same claim to some extent. Right?

Melanie Mitchell (06:05): Well, that's a good question. I mean, you know, intelligence is one thing, sentience is another. There's a question of how they're related, or if they're related at all, and what they all actually mean. These terms, this is one of the problems, these terms are not well-defined. But I think most people in AI would say that intelligence and sentience are different. Something can be intelligent or act intelligently without having any sort of awareness or sense of self or feelings or whatever sentience might mean.
So I think the Sparks of AGI paper from Microsoft was more about saying that they thought GPT-4, the system they were experimenting with, showed some kind of generality in its ability to deal with different kinds of tasks. And this contrasts with older fashioned AI, which typically was narrow and could only do one task: it could play chess, could play Go, could do speech recognition, or could generate translations, but it couldn't do all of those things. And now we have these language models, which seem to have some degree of generality.

The Persistent Gap Between Humans and LLMs

Eric Topol (07:33): Now that gets us perfectly to an important Nature feature last week, which was called the "Easy Intelligence Test that AI chatbots fail." And it made reference to an important study you did. First, I guess the term ARC, the Abstraction and Reasoning Corpus, was introduced a few years back by Francois Chollet. And then you did a ConceptARC test. So maybe you can tell us about this, because there seemed to be a pretty substantial gap between humans and GPT-4.

Melanie Mitchell (08:16): Right. So Francois Chollet is a researcher at Google who put together this set of sort of intelligence-test-like puzzles, visual reasoning puzzles, that tested for abstraction abilities or analogy abilities. And he put it out there as a challenge. A whole bunch of people participated in a competition to get AI programs to solve the problems, and none of them were very successful. What our group did: we thought that the original challenge was fantastic, but one of the problems was that it was too hard; it was even hard for people. And also it didn't really systematically explore concepts, whether a system understood a particular concept. So, as an example, think about the concept of two things being the same, or two things being different.
Okay?

(09:25): So I can show you two things and say, are these the same or are they different? Well, it turns out that's actually a very subtle question, because when we say "the same" we can mean the same size, the same shape, the same color; there's all kinds of attributes in which things can be the same. So what we did was take concepts like same versus different and try to create lots of different challenges, puzzles, that required understanding of that concept. These are very basic spatial and semantic concepts, similar to the ones that Chollet had proposed, but much more systematic. Because this is one of the big issues in evaluating AI systems: people evaluate them on particular problems.

(10:24): For example, I think a lot of people know that ChatGPT was able to answer many questions from the bar exam. But if you take a single question from the bar exam and think about what concept it's testing, it may be that ChatGPT could answer that particular question but can't answer variations that have the same concept. So we tried to take, inside of this ARC, abstraction and reasoning corpus, domain, particular concepts and ask, systematically, can the system understand different variations of the same concept? And then we tested these problems on humans. We tested them on the programs that were designed to solve the ARC challenges, and we tested them on GPT-4, and we found that humans way outperformed all the machines. But there's a caveat, though: these are visual puzzles, and we're giving them to GPT-4, which is a language model, a text-based system. Now, GPT-4 has been trained on images, but we're not using the version that can deal with images, because that hasn't been released yet.
So we're giving the system our problems in a text-based format, rather than giving them to humans who actually can see the pictures. So this can make a difference. I would say our results are preliminary.

Eric Topol (11:57): Well, what do you think will happen when you can use inputs with images? Do you think that it will equilibrate, that there'll be parity, or will there still be a gap in that particular measure of intelligence?

Melanie Mitchell (12:11): I would predict there will still be a big gap. But, you know, I guess we'll see.

The Biggest Question: Stochastic Parrot or LLM Real Advance in Machine Intelligence?

Eric Topol (12:17): Well, that's what we want to get into more. We want to drill down on the biggest question of large language models, and that is, what is their level of intelligence? Is it something that is beyond the so-called stochastic parrot, or the statistical ability to adjudicate language and words? So there was a paper this week in Nature Human Behaviour, not a journal that normally publishes these kinds of papers, and as you know, it was by Taylor Webb and colleagues at UCLA. And it was basically saying that for analogical reasoning, making analogies, which would be more of a language task, I guess, but also some image capabilities, it could do as well as or better than humans. And these were college students. So just to qualify, they're not fully representative of the species, but they're at least some learned folks. So what did you think of that study?

Melanie Mitchell (13:20): Yeah, I found it really fascinating, and kind of provocative. And it kind of goes along with many studies that have been applying tests that were designed for humans, psychological tests, to large language models.
And this one was applying analogy tests that psychologists have done on humans to large language models. But there's always kind of an issue of interpreting the results, because we know these large language models most likely do not think like we do. And so one question is, how are they performing these analogies? How are they making these analogies? This brings up some issues with evaluation, when we try to evaluate large language models using tests that were designed for humans. One question is, were these tests actually in the training data of the large language model? These language models are trained on enormous amounts of text that humans have produced, and some of the tests that that paper was using were things that had been published in the psychology literature.

(14:41): So one question is, to what extent were those in the training data? It's hard to tell, because we don't know what the training data exactly is. So that's one question. Another question is, are these systems actually using analogical reasoning the way that we humans use it, or are they using some other way of solving the problems? And that's also hard to tell, because these systems are black boxes. But it might actually matter, because it might affect how well they're able to generalize. If I can make an analogy, usually you would assume that I could actually use that analogy to understand some new situation by an analogy to some old situation. But it's not totally clear that these systems are able to do that in any general way.
And so, you know, I do think these results, these analogy results, are really provocative and interesting.

(15:48): But they will require a lot of further study to really make sense of what they mean. When ChatGPT passes a bar exam, and let's say it does better than most humans, you might ask, can it now be a lawyer? Can it go out and replace human lawyers? I mean, a human who passed the bar exam can do that. But I don't know if you can make the same assumption for a language model, because the way that it's answering the questions, its reasoning, might be quite different and not imply the same kinds of more general abilities.

Eric Topol (16:32): Yeah. That's really vital. And something else that you just brought up, in multiple dimensions, is the problem of transparency. So we don't even know the specs, the actual training, so many of the components that led to the model. And by not knowing this, we're kind of stuck trying to interpret it. So if you could comment: transparency seems to be a really big issue. And then, how are we ever going to understand when there are certain aspects or components of intelligence where there does appear to be something that's surprising, something that you wouldn't have anticipated, and how could that be? Or, on the other hand, why is it failing? So is transparency the key to this, or is there something more to be unraveled?

Melanie Mitchell (17:29): I think transparency is a big part of it. Transparency meaning knowing what data the system was trained on, what the architecture of the system is, and the other aspects that go into designing the system. Those are important for us to understand how these systems actually work and to assess them. There are some methods that people are using to try to tease out the extent to which these systems have actually developed the kind of intelligence that people have. There was a paper that came out also last week, I think from a group at MIT, where they looked at several tasks that GPT-4 did very well on, things like computer programming, code generation, mathematics, and some other tasks.
There are some methods that people are using to try and kind of tease out the extent to which these systems have actually developed sort of the kind of intelligence that people have. So, so one, there was a paper that came out also last week, I think from a group at MIT where they looked at several tasks that were given that GPT-4 did very well on that seemed like certain computer programming, code generation, mathematics some other tasks.(18:42):And they said, well, if a human was able to generate these kinds of things to do these kinds of tasks, some small change in the task probably shouldn't matter. The human would still be able to do it. So as an example in programming, you know, generating code, so there's this notion that like an array is indexed from zero. The first number is, is indexed as zero, the second number is indexed as one, and so on. So but some programming languages start at one instead of zero. So what if you just said, now change to starting at one? Probably a human programmer could adapt to that very quickly, but they found that GPT-4 was not able to adapt very well.Melanie Mitchell (19:33):So the question was, is it using, being able to write the program by sort of picking things that it has already seen in its training data much more? Or is it able to, or is it actually developing some kind of human-like, understanding of the program? And they were finding that to some extent it was more the former than the latter.Eric Topol (19:57):So when you process all this you lean more towards because of the pre-training and the stochastic parrot side, or do you think there is this enhanced human understanding that we're seeing a level of machine intelligence, not broad intelligence, but at least some parts of what we would consider intelligence that we've never seen before? Where do you find yourself?Melanie Mitchell (20:23):Yeah, I think I'm, I'm, I'm sort of in the center ,Eric Topol (20:27):Okay. 
That's good.

Melanie Mitchell (20:28): Everybody has to describe themselves as a centrist, right? I don't think these systems are stochastic parrots. They're not just parroting the data that they've been trained on, although they do that sometimes. But I do think there is some reasoning ability there. There is some of what you might call intelligence. But the question is, how do you characterize it? And for me the most important thing is, how do you decide that these systems have a general enough understanding to trust them?

Eric Topol (21:15): Right. Right.

Melanie Mitchell (21:18): You know, in your field, in medicine, I think that's a super important question. Maybe they can outperform radiologists on some kind of diagnostic task, but the question is, is that because they understand the data like radiologists do, or even better, and will therefore in the future be much more trustworthy? Or are they doing something completely different, which means that they're going to make some very unhuman-like mistakes? And I think we just don't know.

End of the Turing Test

Eric Topol (21:50): Well, that's an important admission, if you will. That is, we don't know. And you're really zooming in on something for medical applications. Some of them, of course, are not so critical for accuracy, because, for example, if you have a conversation in a clinic that's made into a note and all the other downstream tasks, you still can go right to the transcript and see exactly if there was a potential miscue. But if you're talking about making a diagnosis in a complex patient, if we see hallucination, confabulation, or whatever your favorite word is to characterize the false outputs, that's a big issue.
But I actually really love your Professor of Complexity title, because if there's anything complex, this would fulfill it. And also, would you say it's time to stop talking about the Turing test, to retire it? Is it over with the Turing test, because it's so much more complex than that?

Melanie Mitchell (22:55): Yeah. I mean, one problem with the Turing test is there never was a Turing test. Turing never really gave the details of how this test should work, right? And so we've had Turing tests with chatbots since the two thousands where people have been fooled. It's not that hard to fool people into thinking that they're talking to a human. So I do think that the Turing test is not adequate for the question of, are these things thinking? Are they robustly intelligent?

Eric Topol (23:33): Yeah. One of my favorite stories you told in your book was about Clever Hans, and, you know, basically fooling people into thinking that there was intelligence there. And I think a term that is used a lot, and that a lot of people don't fully understand, is zero shot or one shot. Can you just explain that to the non-computer-science community?

Melanie Mitchell (24:01): Yeah. So, in the context of large language models, zero shot means I just ask you a question and expect you to answer it. One shot means I give you an example of a question and an answer, and now I ask you a new question that you should answer, but you already had an example. Two shot is you give two examples. So it's just a matter of how many examples am I going to give you in order for you to get the idea of what I'm asking.

Eric Topol (24:41): Well, and in a sense, if you were pre-trained unknowingly, it might not be zero shot.
That is, if the model was pre-trained with all the stuff that was really loaded into that first question or prompt, it might not really qualify as zero shot in a way. Right?

Melanie Mitchell (24:59): Yeah. Right. If it's already seen that in its training data.

The Great LLM (Doomsday?) Debate: An Existential Threat

Eric Topol (25:06): Right. Exactly. Now, another topic that is related to all this is that you participated in what I would say is a historic debate, you and Yann LeCun, who I would not have necessarily put together. I don't know that Yann is a centrist; I would say he's more on one end of the spectrum, versus Max Tegmark and Yoshua Bengio.

Eric Topol (25:37): Yoshua Bengio, who was one of the three notables for a Turing Award with Geoffrey Hinton. So you were in this debate, I think called a Musk debate.

Melanie Mitchell (25:52): Munk debate. Munk.

Eric Topol (25:54): Munk, I was gonna say, not right. Munk debate. Yeah, the Munk Debates, which is a classic debate series out of, I think, the University of Toronto.

Melanie Mitchell (26:03): That's right.

Eric Topol (26:03): And it was debating, you know, is it all over? And obviously there's been a lot of this in recent weeks and months since ChatGPT surfaced. So can you give us, I tried to access that debate, but since I'm not a member or subscriber, I couldn't watch it, and I'd love to. But can you give us the skinny of what was discussed and your position there?

Melanie Mitchell (26:29): Yeah. So actually you can, you can access it on YouTube.

Eric Topol (26:32): Oh, good. Okay, good. I'll put the link in for this. Okay, great.

Melanie Mitchell (26:37): Yeah. So the resolution was, is AI an existential threat? By existential, meaning human extinction. So pretty dramatic, right?
And this debate actually has been going on for a long time, you know, since the beginning of the talk about the "singularity," right? And there's many people in the sort of AI world who fear that AI, once it becomes quote-unquote smarter than people, we'll lose control of it.

(27:33): We'll give it some task, like, solve the problem of carbon emissions, and it will then misinterpret it, or sort of not care about the consequences. It will just maniacally try to achieve that goal, and in the process of that, accidentally kill us all. So that's one of the scenarios. There are many different scenarios for this. And, you know, a debate is kind of an artificial, weird, structured discussion where you have rebuttals and so on. But I think the debate really was about whether we should right now be focusing our attention on what's called existential risk, that is, that some future AI is going to become smarter than humans and then somehow destroy us, or whether we should be more focused on more immediate risks, the ones that we have right now, like AI creating disinformation, fooling people into thinking it's a human, magnifying biases in society, all the risks that people are experiencing immediately, or will be very soon. The debate was more about what should be the focus.

Eric Topol (29:12): Hmm.

Melanie Mitchell (29:13): And whether we can focus on short-term, immediate risks and also focus on very long-term speculative risks, and what is the likelihood of those speculative risks, and how would we even estimate that. So that was kind of the topic of the debate.

Eric Topol (29:35): Did you all wind up agreeing, then?

Melanie Mitchell (29:38): No.

Eric Topol (29:38): Were you scared? Or where did it land?

Melanie Mitchell (29:41): Well, I don't know.
Interestingly, what they do is they take a vote of the audience at the beginning. They ask how many people agree with the resolution, and 67 percent of people agreed that AI was an existential threat. So it was two-thirds. And then at the end, they also take a vote and ask what percent of minds were changed, and that's the side that wins. But ironically, the voting mechanism broke at the end. So, technology, you know, for the win.

Eric Topol (30:18): Because it wasn't a post-debate vote?

Melanie Mitchell (30:21): But they did do an email survey.

Eric Topol (30:26): Oh, which is, I think, not very good. No, you can't compare that. No.

Melanie Mitchell (30:28): Yeah. So technically our side won. But I don't take it as a win, actually.

Are You Afraid? Are You Scared?

Eric Topol (30:38): Well, I guess another way to put it: are you afraid? Are you scared?

Melanie Mitchell (30:44): So I'm not scared of superintelligent AI getting out of control and destroying humanity. I think there's a lot of reasons why that's extremely unlikely.

Eric Topol (31:00): Right.

Melanie Mitchell (31:01): But I do fear a lot of things about AI. Some of the things I mentioned, yes, I think are real threats, you know, real dire threats to democracy.

Eric Topol (31:15): Absolutely.

Melanie Mitchell (31:15): Threats to our information ecosystem, how much we can trust the information that we have. And also just, you know, people losing jobs to AI. I've already seen that happening, and the sort of disruption to our whole economic system. So I am worried about those things.

What About Open-Source LLMs, Like Meta's Llama 2?

Eric Topol (31:37): Yeah. No, I think the inability to determine whether something's true or fake in so many different spheres is putting us in a lot of jeopardy, highly vulnerable, but perhaps not the broad existential threat to the species. Yeah.
But serious stuff, for sure. Now, another thing that's been of interest of late is the willingness of at least one of these companies, Meta, to put out their model, Llama 2, as open, to make it open for everyone so that they can do whatever specialized fine-tuning and whatnot. Is that a good thing? Is that a game changer for the field? Because obviously the computer resources, for example the GPUs [graphics processing units] used, over 25,000 for GPT-4, not many groups or entities have that many GPUs on hand to do the base models. But is having an open model like Meta's available good, or is that potentially going to be a problem?

Melanie Mitchell (32:55): Yeah, I think probably I would say yes to both.

Eric Topol (32:59): Okay. Okay.

Melanie Mitchell (33:01): Because it is a mixed bag. I think, ultimately, you know, we talked about transparency, and open-source models are transparent. I mean, I don't think they actually have released information on the data they used to train it, right? So it lacks that transparency. But at least, if you are doing research and trying to understand how this model works, you have access to a lot of the model. It would be nice to know more about the data it was trained on, but there's a lot of big positives there. And it also means that the data that you then use to continue training it or fine-tuning it is not then being given to a big company; you're not doing it through some closed API, like you do for OpenAI.

(33:58): On the other hand, as we just talked about, these models can be used for a lot of negative things, like spreading disinformation and so on. And making them generally available and tunable by anyone presents that risk. Yeah.
So I think there's an analogy, you know, with genetics, for example, or disease research, where scientists had sequenced the genome of the smallpox virus, and there was a big debate over whether they should publish that, because it could be used to create a new smallpox. But on the other hand, it also could be used to develop better vaccines and better treatments and so on. And so with any technology like that, there's always a balance between transparency, making it open, and keeping it closed. And then the question is, who gets to control it?

The Next Phase of LLMs and the Plateau of Human-Derived Input Content

Eric Topol (35:11): Yeah, who gets to control it, and to understand the potential for nefarious use cases, the worst-case scenario. Sure. Well, you know, I look to you, Melanie, as a leading light, because you are so balanced, and the thing about you is that I have the highest level of respect, and that's why I like to read anything you write or where you're making comments about other people's work. Are you going to write another book?

Melanie Mitchell (35:44): Yeah, I'm thinking about it now. I mean, I think kind of a follow-up to my book, which, as you mentioned, like your book, was before large language models came on the scene, and before transformers and all of that stuff. And I think that there really is a need for some non-technical explanation of all of this. But of course, every time you write a book about AI, it becomes obsolete by the time it's published.

Eric Topol (36:11): That's what I worry about, you know. And that was actually going to be my last question to you, which is, where are we headed? Like, whatever, GPT-5 and on, and the velocity's so high. Where can you get a steady state to write about and try to pull it all together?
Or are we just going to be in some crazed zone here for some time, where things are moving too fast to be able to get your arms around them?

Melanie Mitchell (36:43): Yeah, I mean, I don't know. I think there's a question of, can AI keep moving so fast? Obviously it's moved extremely fast in the last few years, but the way that it's moved fast is by having huge amounts of training data and scaling up these models. The problem now is it's almost like the field has run out of training data generated by people. And if people start using language models all the time for generating text, the internet is going to be full of generated text, right?

Eric Topol (37:24): Right. Human-written text.

Melanie Mitchell (37:24): And it's been shown that if these models keep being trained on the text that they generate themselves, they start behaving very poorly. So that's a question: where's the new data going to come from?

Eric Topol (37:39): And there's lots of upset among people whose data are being used.

Melanie Mitchell (37:44): Oh, sure.

Eric Topol (37:45): Understandably. And as you get to, is there a limit? There's only so many Wikipedias and internets and hundreds of thousands of books and whatnot to put in that are of human-source content. So do we reach a plateau of human-derived inputs? That's a really fascinating question. Perhaps things will not continue at such a crazed pace. I mean, the way you put together A Guide for Thinking Humans was so prototypic, because it was so thoughtful, and it brought along those of us who were not trained in computer science to really understand where the state of the field was and where deep neural networks were. We need another one of those, and I nominate you to help give us the right perspective. So, Melanie, Professor Mitchell, I'm so grateful to you. All of us who follow your work remain indebted for keeping it straight.
You know, you don't ever get carried away, and we learn from that, all of us. It's really important, 'cause, you know, there are so many people on one end of the spectrum here, whether it's doomsday or whether this is just a stochastic parrot, or open source and whatnot. It's really good to have you as a reference anchor to help us along.Melanie Mitchell (39:13):Well, thanks so much, Eric. That's really kind of you. Get full access to Ground Truths at erictopol.substack.com/subscribe
Jun 21, 2023 • 34min

Al Gore: The Intersection of A.I. and Climate Change

Transcript with some hyperlinksEric Topol (00:00):Hello, Eric Topol here. And what a privilege to have as my guest Al Gore, as we discuss things that are considered existential threats. And that includes not just climate change but also, recently, the concern about A.I. No one has done more on the planet to bring to the fore the concerns about climate change. And many people think that the 2006 film, An Inconvenient Truth, was the beginning, but it goes way back into the 1980s. So, Al, it's really great to have you put things in perspective. Here we are with what's going on in Canada, with more than 12 million acres of forest fires that are obviously affecting us greatly, no less the surface temperature of the oceans. And so many other signs of this climate change that you had warned us about decades ago are now accelerating. So maybe we could start off: where are we with climate change and the climate reality?The Good News on Climate ChangeAl Gore (01:00):Oh, well, first of all, thank you so much for inviting me to be on your podcast again, Eric. It's always a pleasure, especially because you're the host, and we have very interesting conversations that aren't on the podcast. So I'm looking forward to this one. So, to start with climate: you know, the old cliche, there's good news and bad news. Unfortunately, there's an abundance of bad news, but there's also an awful lot of good news. Let me start with that first and then turn to the more worrying trends. We have seen the passage in the US last August of the largest, most effective, and best funded climate legislation passed by any nation in all of history. The so-called Inflation Reduction Act is an extraordinary piece of legislation.(01:55):It's billed as allocating $369 billion to climate solutions. But actually, the heavy lifting in that legislation is done by tax credits, most of which are open-ended and uncapped, a few without any time limits, and most of a 10-year duration.
And the enthusiastic response to the legislation after President Biden signed it has now made it clear that that early estimate of 369 billion is a low-ball estimate, because Goldman Sachs, for example, is predicting that it will end up allocating 1.2 trillion to climate solutions. A lot of other investors and others using economic models are estimating more than a trillion. So it's really a fantastic piece of legislation, and other nations are beginning to react and respond and copy it. One month after that law was passed, the voters of Australia threw out their climate-denying government and replaced it with a climate-friendly government, which immediately then set about passing legislation that adopts the same goals as the US IRA in the Australian context.(03:19):And they stopped the biggest new coal mine there. And anyway, one month after that, in October, the voters of Brazil threw out their former president, often called the “Trump of the Tropics,” and replaced him with a former president who is now the new president, who has pledged to protect the Amazon. And the European Union, responding to the evil and cruel invasion of Ukraine by Russia, and the attempted blackmail of nations in Europe dependent on Russian gas and oil, responded not by bending the knee to Vladimir Putin, but by saying, wait a minute, this makes renewable energy freedom energy. And so they accelerated their transition. And so these are all excellent signs, and it qualifies as good news. The other good news is not all that new, but it's still continuing to improve.(04:28):And that is the astonishing reductions in cost for electricity produced by solar and wind, and the reductions in cost for energy storage, principally in batteries and electric vehicles, and a hundred other less well known technologies that are extremely important.
We're in the midst of the early stages of a sustainability revolution that has the magnitude of the industrial revolution, coupled with the speed of the digital revolution. And we're seeing it all over the place. It's really quite heartening. One quick example: the biggest single source of global warming pollution is the generation of electricity with gas and coal. Well, last year, if you look at all the new electricity generation capacity installed worldwide, 90% of it was renewable. In India, 93% was solar and wind. And India has pledged not to give permits for any new coal-burning plants for at least five years, which probably means never, because this cost reduction curve, as I mentioned, is still continuing downward. Electric vehicles: we're now seeing that purchases have reached 15% of the market globally.(05:56):Norway's already at 50%. They've actually outlawed the sale of any new internal combustion engines. And indeed, many national and even municipal and state jurisdictions have prospectively served notice that you won't be able to buy them after a certain date, 2030 in many cases. And the auto companies and truck and bus companies have long since diverted their research money; all their R & D is going into EVs now. And that's the second largest source of global warming pollution. I could go through the others, but I'll just tell you that there is a lot of good news.And the Bad NewsNow, the bad news is we're still seeing the crisis get worse faster than we're deploying all of these solutions. And the inertia in our political and economic systems is partly a direct result of huge amounts of lobbying and campaign contributions and the century-old net of political and economic influence built up by the fossil fuel industry.(07:18):And they're opposing every single solution at the state level, the local level, the national level, the international level.
Now, this COP 28 [the 2023 United Nations Climate Change Conference] coming up at the end of the year in the United Arab Emirates is actually chaired by an oil and gas company CEO-- It's preposterous. And in the last two COPs, they already had more lobbyists registered as participants than the five or six largest national delegations combined. And we're seeing them really oppose this change. And meanwhile, the manifestations of the crisis are steadily worsening. You mentioned the fires in Canada that are predicted to burn all summer long. And I was in New York City last week, and you know from the news stories it was horrific. I got there the day after the worst day, oh my God.(08:21):But I saw and heard from people just the tremendous problems that people have. It's also going on in Siberia, by the way, and these places are typically beyond the reach of TV crews and networks; they don't capture our attention unless something happens to blow the smoke to where we live. And that's what's happened here. But there are many other extremely worrying manifestations that aren't getting much attention. I do think we're going to solve this, Eric. I'm very optimistic, but the question is whether we will solve it in time. We are, what's the right way to say this? We're tiptoeing through a minefield with tripwires, toward the edge of a cliff. I don't want to torture the metaphor, but actually there are several extremely dangerous threats to ecological systems that are in a state of balance now, and are being pushed out of their equilibrium state into a different format.(09:35):The ocean currents--we're already seeing it with the jet stream in the northern hemisphere. You may have seen on the weather maps, they're now using these a lot, where it's getting loopier and more disorganized. That's why, in the last few winters, these big loops have pulled arctic air down into areas far south in the US, and into other regions, by the way.
And it's making a lot of the extreme events worse. Now we're entering an El Nino phase in the Pacific Ocean; it comes around every so often, and this one is predicted to be a strong one, and that's going to accentuate the temperature increase. You know, it was [recently] 110 degrees in Puerto Rico, 111 degrees in several countries in Southeast Asia.(10:31):Last summer, China had a heat wave about which the historians say there is nothing even minimally comparable in all prior known history, in the length, the extent, the duration, the intensity. And we saw monsoons leave much of Pakistan underwater for an extended period of time. I could go on, but to net and balance out the good news and the bad news: we are gaining momentum. And soon we are going to be gaining on the crisis itself and start deploying solutions faster than it's getting worse. So I remain optimistic, and I always remind people, if you doubt we have the political will to see this through, remember that political will is itself a renewable resource.The Intersection of A.I. and Climate ChangeEric Topol (11:27):Yeah, that's a great optimistic point, and we sure appreciate that, because it's pretty scary to see these trends that you reviewed. Now, as you know, recently there was a statement from a large group of AI scientists, this one led by Sam Altman of OpenAI. They put out a one-sentence statement, and it said, "Mitigating the risk of extinction from AI," which you and I are enthusiastic about, "should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war." Well, obviously, also climate change. So how do you see the intersection of AI and climate change? Because as you well know, with GPT-4 having been pre-trained with some 30,000 graphics processing units [GPUs], and the issues about energy consumption, carbon emissions, the need for water cooling: is AI going to make this situation worse, or will it make it better?Al Gore (12:33):Well, yeah.
You know, I understand. Well, both would be my answer. And we don't have enough data yet to really know for sure which way it will tip. Maybe we'll talk about the existential risks from generative AI as this conversation continues. There are many who have spoken up and said, well, wait a minute, before we focus on that, we need to look at the risks that are staring us right in the face. I mean, the use of these AI-driven algorithms, not necessarily generative AI, but the AI-driven algorithms in social media, is causing tremendous harm right now. You've heard about the rabbit holes that people get drawn down into on the internet. That's because of the AI-driven algorithms and the tracking of confidential information about what people are looking at and what they're interested in.(13:40):And these rabbit holes are, not to shift metaphors, a little bit like pitcher plants, in that they have slippery slides. And, you know, what's at the bottom of the rabbit hole? That's where the echo chamber is. And when you spend long enough in the echo chamber, then those who are feeding the information to you weaponize a new form of AI: not artificial intelligence, but artificial insanity. And we see it all over the place, where people are utterly convinced of completely ridiculous and provably false conclusions, and motivated to go out and act in the real world on that basis. We see the fakes, and the concerns about video and audio deepfakes and how that's going to have an impact on us, and all manner of other concerns that need to be addressed.(14:43):But the existential threat is one that I do want to come back to. But turning to your specific focus on whether it is going to help or hurt or both where climate is concerned: I have co-founded a coalition called Climate TRACE that uses AI in an extremely effective, beneficial way.
TRACE stands for Tracking Real-time Atmospheric Carbon Emissions, and we have a coalition of AI firms, NGOs, and university groups, and the whole coalition works together to identify with AI the point source of every single significant stream of emissions of global warming pollution everywhere on the planet. We released it at the last United Nations conference, the one that was held in Egypt last year: the top 72,000 emission point sources around the world. This fall, we will release the top 70 million emission sources.(15:54):We also have every agricultural field in the world down to a 10 meter by 10 meter resolution. We have every single power plant, all the steel mills, every large ship, every large plane, most every well; we have all of the significant greenhouse gas emission sources. That would not have been possible without AI. Now, this is not generative AI. We have used generative AI--not ChatGPT; we tried that, but there are others that are actually more proficient, in the views of our team members, at writing code. It has saved us time and enhanced our productivity in writing code. So that's one example where AI has been a big help. And we see it in modeling, and we see it in the preparation for adaptation, and in other ways. Now, the downside: as you said in your introductory phrasing, the energy requirements and the emissions are just enormous, because it is an extremely energy-intensive exercise.(17:09):And you have to have the GPUs as well as the energy. So you could call it "oligopogenic" (that may not be a word; it may be a hallucination, like GPT is famous for), but what I mean is that it does tend to favor a very small number of very wealthy, very powerful, very large companies. Basically, Google and Microsoft are driving the rest of the world to try desperately to catch up. You know, the CEO of Microsoft.
They stole a march on Google with the release of ChatGPT, and that fascinated people; the pickup and use of GPT is just unbelievable. There's been nothing like it in previous technological history.(18:19):The CEO said that he wanted to call Google out and make them dance. Well, you know, Peggy Noonan said in one of her columns, that's not a responsible way for the CEO of such a company to talk. I like him, and I'm not really taking a poke at him, except insofar as I'm making the point that there are really two companies, and the internal dynamic between the two is driving this frenzy of investment and activity. And the underlying platform, the large language models, they're almost a commodity now. They're all over the place and have been for a while. But the need for the GPUs and the energy consumption is limiting the cutting-edge developments to these two companies, for now. China doesn't trust it, because they don't trust the enhanced political influence(19:22):it might give those using it, or the enhanced insight. And there are others that will try to find a way to use it, of course. But the emissions themselves are extremely harmful. And as for the use of generative AI in the hands of irresponsible actors: unfortunately, we're human beings, and we have a lot of irresponsible actors around this country and around the world. And they could use it to really put climate disinformation into high gear. They can use it in a variety of ways to further enhance the disruptive tactics they've used in the past.Eric Topol (20:15):Yeah. Well, that's what I wanted to get into more on this.
We have, I think, you know, if you want to rank existential risks at the highest level, maybe you would assign a 10 to climate change, and you've brought up the fact that large language models and generative AI will make worse the things we've already seen: the hacking of democracy and all the fake stuff, the conspiracy theories that it will reinforce. And the question is, where do you place the whole generative AI era that we've now entered if you were to weigh it against the existential threat? Just one other thing: you undoubtedly, because you read more than anyone I know (you're a true scholar), have read these doomsayer essays about hacking democracy and(21:11):the end of the world, and some of the notable leaders in AI, like Geoffrey Hinton, leaving Google. And so we have, on the one hand, some people saying this is a real threat to the world. And then we have Marc Andreessen, who wrote "Why AI Will Save the World" last week, a long read on this. So where do you see the existential threat, now that AI has gone into high gear, as you noted, with more than a billion unique users of ChatGPT within 90 days, which is unprecedented. I mean, with...Al Gore (21:45):All caps, nothing else is even close in history. Yeah,Where are we with Artificial General Intelligence?Eric Topol (21:48):Yeah. So, do you see that this has been exaggerated, the risk of generative AI? Or how do you compare it to the climate change crisis?Al Gore (22:01):Well, it's a great question, Eric. And of course lots of people we know are breaking their brains trying to answer that question. I think we need a little more experience with it, because our understanding is going to develop as we have more experience. But at the same time, we're trying to catch up in our basic understanding of what the heck's going on with these things. And it's important to note they don't actually know how it's doing what it's doing.
And I'll circle back to that. But while we're trying to figure it out, it's continuing to advance at warp speed. GPT-4, in the cleverly titled, the provocatively titled research paper "Sparks of Artificial General Intelligence" that Microsoft put out, is already demonstrating capacities that are shockingly comparable to human capacities, is the way they put it.(23:13):This is less than a year after Google fired a young researcher named Blake Lemoine, who said that he thought theirs had become sentient. And they fired him right away. These multiple co-authors of this paper from Microsoft weren't fired. They're in charge of the thing, and they're basically saying something close to what the guy at Google said, who got fired. I think that if you listen to Geoffrey Hinton, the so-called godfather of generative AI (and there are so many parents of generative AI), what caused him to change his mind, in his words, was when he realized that it is very likely to become much smarter than we are, than the smartest human beings ever are. And coupling that level of superintelligence, the phrase some have used, with access to all of the knowledge that humanity has ever compiled means there is an unpredictable, unquantifiable risk that we might no longer be the apex lifeform on this planet.(24:47):And that generative AI might be used in ways that would be threatening to us. I think we need more experience with it before we decide, okay, that's it. We're not going to unplug all these dang things and bust them up with sledgehammers; that's not going to happen, 'cause there are so many different entities pursuing it. But, you know, I place this in the context of one of the themes that runs through the history of science, Eric. And that is, as we have seen in the past, new discoveries have challenged our human understanding of our place in creation.
For example, when Galileo said the Earth's not the center of the universe, it's not the center of the solar system, the church said, ah, off to prison with you; they put him on trial,(25:58):because that challenged our prime place in what we had thought was God's design. Then Darwin, of course, placed us solidly in the animal kingdom, descended from primates and apes and monkeys. And of course, that struggle is still with us. I used to represent Dayton, Tennessee, in the United States House of Representatives, where the Scopes Trial took place, the so-called monkey trial. And there has been a succession of other similar blows to the collective ego of humanity. We used to assume confidently that the Earth was probably the only place in the whole universe where life emerged. And now the common assumption is that it's ubiquitous throughout the universe, maybe in advanced forms, in lots and lots of places. And by the way, the universe isn't the only universe, they tell us.(26:55):Now the emerging, better view is that we're in a multiverse, and that's all above my pay grade. But within that continuum of successive blows to the collective ego of humanity, here comes an assertion that something other than a human being may be conscious. And our immediate reaction, as our predecessors' reactions were with Galileo and Darwin, et cetera: nah, that can't be, we're special. No, it can't be, we're the only ones. Well, maybe not. They are edging closer and closer to a point where scientists and engineers are likely to say, yep, it is conscious. Maybe it won't happen. I kind of think it is already beginning to happen. I think there's an explanation for it, but we're going to have to catch up to that explanation. And we're going to have to build this airplane of regulation and safeguards while we taxi it out to the runway.Can AI Help Solve the Climate Crisis?Eric Topol (28:06):Well, you know, I share that view.
You know, I don't think that continuing to say this is just a stochastic parrot is where we're at right now. It's a form of intelligence from machines that we haven't seen previously. And as you've really zoomed in on, this is the big debate about the level of understanding, the so-called "world model." And, you know, this is something that is only going to get more capable over time. And that gets me to kind of close the loop on our discussion. Do you foresee that we could get to a point where our machine help would come up with new solutions? I mean, as you've summarized, you have phenomenal AI tracking of climate change, but could you foresee that there are potential solutions that we haven't thought of, that generative AI could help us as humans to solve the climate crisis?Al Gore (29:05):Yeah, I think that's very likely. You know, one of the new professions that's just emerged is the prompt engineer; we'll have to have people trained in prompting these large language models in a way that gets us to the kinds of exchanges you're talking about. But even before generative AI arrived, we had multiple examples of artificial intelligence solving problems that we humans have not been able to solve. One example that I wrote about several years ago was the long-term effort to try to decode the genetics of a little thing called the planarian worm. It's been of extreme interest because it can regenerate every part of its body, and in such an efficient way; they've been trying to understand it.(30:07):So a group of scientists took all of the raw data collected during all of the failed experiments to try to solve that problem, and fed it into an AI. And the AI said, okay, here's the answer. And the AI agent was credited as one of the co-authors of the resulting study. We've had problems in fluid dynamics solved by artificial intelligence that were impenetrable to us.
So there's no question in my mind that some of the solutions that we're looking for, for the climate crisis, will be found with the assistance of generative AI. I'm certain of that.Eric Topol (30:53):Well, that adds to the optimism that we want to close with, because we need that in the face of what we're seeing that's palpable every day regarding climate change. And, you know, I think this discussion, Al, I could spend the whole day with you, because it's so stimulating, and your ability to cite history, as well as current and future perspective, is, for me, unparalleled. So I really enjoyed this discussion with you, and I hope we'll have another one real soon, because this generative AI era is zooming like I've never seen: ChatGPT in November, GPT-4 in March, and you know what's next here.Al Gore (31:35):So GPT-5 is coming in December, as you said. And before you conclude, Eric, let me just give back to you my admiration for the work that you've been doing on the applications of generative AI in healthcare and the development of even better healthcare technologies. You're the leading exponent of this whole field of knowledge now. And you know, you helped us get through our effort to understand the pandemic and all the twists and turns of that. And now you're taking the lead on the application of AI in healthcare, and thank you very much. I speak for a lot of people in saying that.Eric Topol (32:19):Well, that's really kind of you. That's where my interest was before the pandemic. And now the good part is to be able to get back to it full force. But I do think, unlike the overall existential concerns regarding AI and the large language models of AI, the net benefit for healthcare is just much more obvious. Yes, there are concerns, of course, regarding patient prompts and getting inaccurate responses. However, what it can do for the medical community and for patient autonomy is really quite extraordinary.
So, in that regard, that's another good way to sum up our discussion here, because I'm very sanguine that, as we get better about implementing AI in healthcare, it'll make a big difference, particularly now with this multimodal AI that brings in the images, the records, all the data, the voice, you know, the ambient voice of office visits, as well as even bedside rounds. It's really quite exciting. And I know we're going to be talking about that some more in the months ahead. So thank you so much. You've brightened up this day, because all I keep seeing are these apocalyptic photos of New York and what's going on out there, graphs of the ocean's sea surface temperature. And I'm thinking, oh my, how we keep losing ground on what you told us about for decades. And I like hearing that you think these solutions are increasingly able to catch up to that. So thank you.Al Gore (33:59):Thank you, Eric. Get full access to Ground Truths at erictopol.substack.com/subscribe
Jun 6, 2023 • 42min

Hannah Davis: A 360° on Long Covid

TRANSCRIPTEric Topol (00:00):Hello, this is Eric Topol, and it's really a delight for me to welcome Hannah Davis, who was the primary author of our recent review on Long Covid and is a co-founder of the Patient-Led Research Collaborative. And we're going to get into some really important topics about citizen science, Long Covid, and related matters. So, Hannah, welcome.Hannah Davis (00:27):Thank you so much for having me.Eric Topol (00:29):Well, Hannah, before we get into it, I thought, because you had a very interesting background before you got into the Patient-Led Research Collaborative organization, with graphics and AI and data science, maybe you could tell us a bit about that.Hannah Davis (00:45):Sure. Yeah. Before I got sick, I was working in machine learning with a particular focus on generative models for art and music. So I did some projects like translating datasets of landscapes into emotional landscapes. I did a project called The Laughing Room, where there was a room, and you went in, and the room would listen to you and laugh if it thought you said something funny. And then I did a lot of generative music based on sentiment. So I did a big project where I was generating music from the sentiment of novels, and a lot of kind of critical projects looking at biases in datasets, and also curating datasets to create desired outcomes in these generative models.Eric Topol (01:30):So, in a way again, you were ahead of your time, because that was before ChatGPT in November last year, and you were ahead of the generative AI curve. And here again, you're way ahead in the citizen science era as it particularly relates to the pandemic. So I wonder if you could just tell us a bit. I think it was back, we go back to March 2020. Is that when you were hit with Covid?Hannah Davis (01:59):Yes.Eric Topol (02:00):And when did you realize that it wasn't just an acute phase illness?Hannah Davis (02:06):For me, honestly, I was not worried at all.
My first symptom was that I couldn't parse a text message. I just couldn't read it; I thought I was tired. An hour later, I took my temperature and realized I had a fever, so that's when I kind of knew I was sick. But I really just truly believed the narrative that I was going to get better. I was 32 at the time. I had no pre-existing conditions. I just was, you know, laying around doing music stuff, not concerned at all. And I put a calendar note to donate plasma two weeks out, and I was like, you know, I'm going to hit that mark, I'm going to donate plasma, contribute, it'll be fine. And that day came and went. I was still, you know, pretty sick, with a mild case. You know, I didn't have to be hospitalized.(02:49):I didn't have severe respiratory symptoms, but my neurological symptoms were substantial and did increase kind of over time. And so I was getting concerned. Three weeks went by, and I still wasn't better. And then I read Fiona Lowenstein’s op-ed in the New York Times. They were also very young (they were 26 at the time); they had been hospitalized, and they had this prolonged recovery, which we now know as Long Covid. And they started the Body Politic support group. I joined that and saw thousands of people with the same kind of debilitating brain fog, the same complete executive functioning loss, inability to drive, forgetting your family members' names, who were all extremely young, who all had mild cases. And that's kind of when I got concerned, because I realized, you know, this was not just happening to me. This was happening to so many people, and no one understood what was happening.Eric Topol (03:49):Right. Extraordinary. And it was a precursor, a foreshadowing of what was to come. Now, here it is, well over three years later, and you're still affected by all this, right?Hannah Davis (04:02):Yes. Pretty severely.Eric Topol (04:04):Yeah. And I learned about that when I had the chance to work with you on the review.
You were the main driver of this review, and I remember asking you, because I didn't know anyone in the world that was tracking Long Covid like you, to be the primary author. And then you sent this outline, and in all my years in academic medicine, I had never seen an outline like this for a review. I said, oh my God, this is incredible. So I know that during that time when we worked on the review together, along with Lisa McCorkell and Julia Moore Vogel, there were times when you couldn't work on it; you would have some good days or bad days. Is that kind of how it goes in any given unit of time?Hannah Davis (04:55):I think generally I've communicated it as: 40% of my function is gone. So, like, I used to be able to have very, very full days, 12-hour days; would work, would socialize, would do music, whatever. You know, I have solidly four functional hours a day. On a good day, maybe that will be six. On a bad day, that's zero. And when I push myself by accident, I can get into a crash that can be three to seven days easily. And then I'm just not, you know, able to be present. I don't feel here, I don't feel cognitively able, I can't drive. And then I'm just completely out of the world for a bit of time.Eric Topol (05:35):Yeah. Wow. So back in the early days, when you first got sick and realized that this was not going to just go away, you worked with others to form this Patient-Led Research Collaborative organization, and here you are; you didn't have a medical background. You certainly had data science and computing backgrounds. But what were your thoughts? I mean, citizen science has taken on more of a life in recent years, certainly in the last decade. And here there's a group of you that has kind of been leading the charge. We'll get to, you know, working with RECOVER and NIH in just a moment.
But what were your thoughts as to whether this could have an impact, working with these, the other co-founders?Hannah Davis (06:27):I think at first we really didn't realize how much of an impact we were going to have. The reason we started collecting data in the first place really was to get answers for ourselves as patients. You know, we saw all these kind of anecdotes happening in the support group. We wanted to get a sense of which were happening the most, at what frequency, et cetera. And it really wasn't until after that, when like the CDC and WHO started reaching out asking for that data, which was gray literature at the time, that we kind of realized we needed to formalize this and put out an official paper, which was what ended up being the second paper. But the group that we formed really is magical, I think, because the primary motivator to join the group was being sick and wanting to understand what was happening. And because everyone in the group only has the kind of shared experience of living with Long Covid, we ended up with a very, very diverse group, with many, many different backgrounds. And I think that really contributed to our success in both creating this data, but also communicating and doing actionable policy and advocacy work with it.Eric Topol (07:42):Did you know the folks before? Or did you all come together because of digital synapses?Hannah Davis (07:47):Digital synapses? I love that. Absolutely. No, we didn't know each other at all. They're now all, you know, they're my best friends by far. You know, we've been through this, this huge thing together. But no, we didn't meet in person until just last September, actually. And many of them we still haven't even met in person, which makes it even more magical to me.Eric Topol (08:13):Well, that's actually pretty extraordinary. So together you've built a formidable force to stand up for the millions and millions of people.
As you wrote in the review, 65 million people around the world who are suffering in one way or another from Long Covid. So just to comment about the review. You know, I've been writing papers for a long time, 35 years. I've never, in my entire career, over 1,300 peer-reviewed papers on varied topics, ever had one that's already had 900,000 downloads, and that is the fourth most cited paper in Altmetric, since it published in January, of all 500,000 peer-reviewed papers in the same timeframe. Did you ever think that the work that you did, you know, along with Lisa and Julia and me, would ever have this level of interest?Hannah Davis (09:16):No, and honestly, it's so encouraging. Our second paper, too, did very well, and, you know, was widely viewed and widely cited, and this one just surpassed that by miles. And I think that it's encouraging because it communicates that people are interested, right? Even if they don't understand what Long Covid is, there is a huge desire to know. And I think that putting this out in this form, focusing on the biomedical side of things, really gives people a tool to start to understand it. And from the patient side of things, more than any other paper, we get so many comments that are like, oh, I brought this to my doctor and, you know, the course of my care changed. Like, he believed me and he started X treatment. And that's the kind of stuff that just makes this so, so meaningful. And I'm so, so grateful that we were able to do this.Eric Topol (10:16):Yeah. And as you aptly put it, you know, a work of love. And it was not easy, because the reviewers were not all supportive about the real impact, the profound impact of Long Covid. So now every day you're keeping track of what's going on in this field, and there's something every single day.
One of the things, of course, is that we haven't really seen a validated treatment all this time, and you've put together a list of candidates. Of course, it was in the review, and it constantly gets revised. What are some of the things that you think are alluring, from preliminary data or mechanisms, that might address the greatest unmet need right now of getting some relief, some remedy for this? What's your sense about that?Hannah Davis (11:13):I think the ones I'm most excited about right now are JAK/STAT inhibitors. And this is because one of the leading researchers in viral-onset illness, Ron Davis, and that group basically have a shunt hypothesis, and that means they basically think there's a switch that happens in the body after you've had a viral illness like this, and that that switch can actually be unswitched. And to me, as a patient, that's very exciting, because, you know, that's what I imagine a cure kind of looks like. And they did some computational modeling and identified JAK/STAT inhibitors as one of the promising candidates. So that's from the hypothetical side that needs to be tested. And then from the patient community, from some things we're seeing, I think really easily accessed ones include ketotifen and cromolyn sodium.(12:14):So these are prescription antihistamines; they're both systemic. So ketotifen has been seeming to work for patients with brain fog and sleep disorders, and cromolyn sodium particularly works in patients with gastrointestinal mast cell issues. People are also going on treatments to kind of address the microclots, which for me personally has been one of the biggest game changers for my brain fog and kind of cognitive impairment type things. But there's so many others. I mean, I think we really wanna see trials of anticoagulants. I'm personally really excited to start on ivabradine, which is next up in my queue.
And it seems to have been a game changer for a lot of patients too. IVIG has worked for patients who have been able to get it. I think for both IVIG and ivabradine, those are medications that are challenging to get covered by insurance, and so we're seeing a lot of those difficulties in access with a couple of these meds. But yeah, just part of the battle, I guess.Eric Topol (13:32):You know, one of the leading of many mechanisms in this mosaic of Long Covid is the persistence of virus or virus components. And there have been at least some attempts to get some Paxlovid trials going. Do you see any hope for dealing with that, trying to inactivate the virus as a way forward?Hannah Davis (13:54):Absolutely. I definitely believe in the viral persistence theory. I think not only Paxlovid, but other Covid antivirals. I know that Steven Deeks and Michael Peluso at UCSF are starting a couple Long Covid trials with other Covid antivirals. So yeah, for sure, I think they all obviously need to be trialed ASAP. And then, also on the viral persistence lens, almost everyone I know has viral reactivation of some sort, like EBV, CMV, VZV. You know, we obviously see a lot of chickenpox or shingles reactivations, and antivirals targeting those as well I think are really important.Eric Topol (14:41):Yeah. Well, and also, just the way you're coming out with a lot of this, you know, terminology and, you know, science stuff, like IVIG for intravenous immunoglobulin. For those who don't know, just remember, this is a non-life-science expert who now has become one. And that goes back again to the review, which was this hybrid of people who had Long Covid with me, who didn't, to try to come up with the right kind of balance as to, you know, synthesizing what we know.
And I think this is something the medical profession has never truly understood: getting people who are actually affected and who become, you know, the real experts. I mean, I look to you as one of the world's leading authorities, and I learn from you all the time.(15:35):So that goes to RECOVER. So there was a long delay in the US to recognize the importance of Long Covid. Even the UK was talking to patients well before they ever had a meeting here in the US, but eventually, somehow or other, they allocated a billion dollars towards Long Covid research at the NIH. And originally, you know, fortunately Francis Collins, when he was director, saw the importance, and he, I learned, bequeathed that to two of the NIH institutes. One of the directors, Gary Gibbons, visited me recently because of a negative comment I made about RECOVER. But before I go over my comment, you've been, as he said, you and Lisa McCorkell, among others from the Patient-Led Collaborative, have had a seat at the table. That's a quote from Gary. Can you tell us your impression about RECOVER, you know, in terms of, at least they are including patient-led research folks with Long Covid? Are they taking your input seriously? And what about the billion dollars?Hannah Davis (16:46):Oh, boy. Tricky question. I don't even know where to start. Well, I mean, I think RECOVER really messed up by not putting experts in the field in charge, right? Like, we have from the beginning needed to do medical provider education at the same time that all these studies started getting underway. And that was just a massive amount of work, to try to include the right tests, to convince medical professionals why they were necessary. All of that could have been avoided by putting the right people in charge. And unfortunately, that didn't happen. Unfortunately, RECOVER is our best hope still, or at least the best-funded hope, so I really want to see it succeed.
I think that they have a long way to go in terms of really understanding why patient representation matters and patient engagement matters.(17:51):You know, it's been a couple years. It's still very hard to do engagement with them. It's kind of a gamble when you get placed on a committee whether they are going to respect you or not. And that's kind of hard, as people who are experts now. You know, I've been in the field of Long Covid research longer than anyone really that I'm working with there. I really hope that they improve the research process, improve the publication process. A lot of the engagement right now is just tokenization. You know, they have patient reps, and just a couple of the patient reps are kind of yes men; they get put in higher kind of positions and things like that. But I think there's 57 patient reps in total spread across committees. We don't have a good organizing structure. We don't know who each other are. We don't really talk to each other. There's room for a lot of improvement, I would say.Eric Topol (18:59):Well, the way I would put it is, you know, it's kind of like when you have gatherings where there's an adult table, and then there's the kiddie table. Absolutely, folks are at the kiddie table. I mean, and it's really unfortunate. So they had their first kind of major publication last week, and it's led to all sorts of confusion. You wrote about it. What did we glean from that paper, which was reported as 10% of people with Covid going on to Long Covid, and that there was clearly a risk with reinfections? Can you kind of review that? And also, what have we seen with respect to the different strains, as we go on from the Wuhan ancestral strain all the way through to the various lineages of Omicron?
Has that led to differences in what we've seen with Long Covid?Hannah Davis (19:56):Yeah, that's a great question, and one that I think a lot of people ask, just because it, you know, speaks to the impact of Long Covid on our future. I think not just this paper, but many other papers at this point, also the ONS data, have shown that Long Covid after Omicron is very common. I think the last ONS data that came out showed that, of everyone living with Long Covid in the UK, the group infected after Omicron was the highest group of all of them. We certainly saw that in the support groups also, just so many people. But people are still getting it. I think it's because most cases of Long Covid happen after a mild infection, 75 to 90%. And when you get Covid now, it is a mild infection, but whatever the pathophysiology is, it doesn't require severe infection.(20:50):And you know, where I think we hopefully have seen decreases in, like, the pulmonary and the cardiovascular organ damage types, we're not seeing real improvements at all in kind of the long-term and the neurological types, the ones that end up lasting, you know, for years. And that's really disappointing. In terms of the paper, you know, I think there were two parts of the paper. There were those items you mentioned, which I think are really meaningful, right? The fact that reinfections carry a higher rate of Long Covid needs to have a substantial impact on how we treat Covid going forward. That one in 10 people get it after Omicron is something we've been, you know, shouting for over a year now, and I think this is the first time that will be taken seriously.(21:42):But at the same time, the way RECOVER communicated about this paper, and the way that they talked to the press about this paper, shows how little they understand the post-viral history, right, of, like, thinking about a definition. Why wouldn't they know that would upset patients?
You know, that, and the fact that they, in my opinion, you know, let patients take the brunt of that anger and upset, you know, where they should have been at the forefront, they should have been engaging with the patient community on Twitter, is really upsetting as well.Eric Topol (22:20):Yeah. And you know, when I did sit down with Gary Gibbons recently, he was in a way wanting to listen about how RECOVER could fulfill its goals. And I said, well, firstly, you've got to communicate, and you've got to take the people very seriously, not just, as I say, put 'em at the kiddie table. And then, really importantly, why isn't there a clinical trial testing any treatment? Still today, not even a single trial has been mounted. There's been some that have been, you know, kind of in the design phase, but still not, for the billion dollars. All that's been done is basically following people with symptoms, as already had been done for years previously. So it's just so vexing to see this waste, and basically confusion, that's been the main product of RECOVER to date, and exemplified by this paper, which is apparently going to go through some correction phases and stuff. I mean, I don't know. But it's the two institutes, NHLBI, the National Heart, Lung, and Blood Institute, and the neurological institute, NINDS, that are the two now in charge of making sure that RECOVER recovers from where it's at right now. And yeah, so, lack of treatments. And then the first intervention study that was launched, incredibly, was exercise. Can you comment about that?Hannah Davis (23:56):It's unreal. You know, it just speaks to the lack of understanding of the existing research in this space. Exercise is not a treatment for people with ME. It has made people bedbound for life. The risks are substantial. That there was no discussion about it, that there was no understanding about it.
That, you know, even patients who don't have PEM, who wouldn't necessarily be harmed by this trial, deserve better, right? They still deserve a trial of anticoagulants, or literally anything else other than exercise. And it's extremely frustrating to see. It would have been so much better if it was led by people who already knew this space, who didn't have to be educated in post-exertional malaise and the underpinnings of it, and who just had a sense of how to continue forward. And, you know, patients deserve better.(24:55):And I think we're really struggling, because yeah, there's going to be five trials, as I understand it, and that's not enough. And none of them should be behavioral or lifestyle interventions at all. You know, I think it also communicates just not understanding how severe this is. And I get that it's hard. I get that when you see patients on the screen, you think that they're fine and that's just how they must look all the time. But RECOVER doesn't understand that for every hour they're asking patients to engage in something, that's an hour they're in bed. You know, they take so much time away from patients without really understanding. Like, the minimum they should be able to do is understand the scope and the severity of the condition, and that we need to be trialing substantially more serious treatments than exercise.Eric Topol (25:54):Right, right. And also the recognition, of course, as you know very well, of the subtypes of Long Covid. So, you know, for example, the postural orthostatic tachycardia syndrome, POTS, and how, you know, there's a device, so you don't have to always think about drugs, where you put it in the back of your ear and it's a neuromodulator to turn down your vagus nerve and not have the dizziness and rapid heart rate when you stand, and all the other symptoms. And, you know, it costs like a dollar to make this thing.
And why don't you do a trial with that? I mean, that was one of the things. It doesn't have to always be drugs, and it certainly shouldn't be exercise. But, you know, maybe at some point this will get on track. Although I'm worried that so much of the billion dollars has already been spent, and no less the loss of time here. People are suffering. Now, that gets me to this lack of respect. Every single day we are confronted with people who don't even believe there's such a thing as Long Covid, after all this time, after all these people who've had their lives profoundly disrupted.(27:04):What can you say about this?Hannah Davis (27:07):It's just a staggering, staggering lack of empathy. And I think it's also fear and a defense mechanism, right? People want to believe that they have more control over their lives than they do, and they want to believe that it's not possible for them personally to get a virus and then never recover and have their life changed so substantially. I really genuinely believe the people who don't believe Long Covid is real at this point, you know, have their own things going on. And just, yeah.Eric Topol (27:38):It's kind of like how Covid was a hoax, and now this. I mean, you just...Hannah Davis (27:44):Of course, but it's true. Like, it happened with ME/CFS, it happened with HIV/AIDS. Someone just showed me a brochure of a 10-week lifestyle exercise intervention for AIDS, you know, saying that you could positively think your way out of it. All that is, is a defense mechanism. Just, yeah. You know, it's repeating the same history over and over.Eric Topol (28:07):Well, I think you nailed it. And of course, you know, it was perhaps easier to be in denial with myalgic encephalomyelitis, when there weren't as many people affected as the tens of millions here. The other thing is the young people, perfectly healthy, who are those most commonly affected.
A lot of the people who I know who have been hit are, like you, you know, very young. Like Julia in my group, who, you know, was a big runner and now can't even go blocks at times without being breathless. And this is typical. I mean, I saw in clinic just yesterday an older fellow who had been in the hospital for a few weeks and has terrible Long Covid. And yes, the severity of Covid can correlate with the sequelae, but just because of numbers, most people are more your phenotype. Right, Hannah?Hannah Davis (29:08):Right, exactly. It's a weird, like, math thing for people to wrap their head around. Like, yes, if you're hospitalized, the chance of getting Long Covid is much, much higher than if you were not hospitalized. But because the vast number of cases were not hospitalized, the vast number of Long Covid cases were not hospitalized. But I think, like, all of these things are interesting clues into the pathophysiology. You know, we also see people who were hospitalized who recover faster than some of these neurocognitive, myalgic encephalomyelitis subtypes, for sure. I think all of that is really interesting and can point to clues about kind of what is happening at the core.Eric Topol (29:54):Yeah. And that I wanted to get into before I wrap up: some of the things that are new or added since our review published in January. So I just recently reviewed the brain in Long Covid, with these two German studies, one of which showed the spike protein was lighting up in the kind of initial reservoir: the brain, the skull, and the meninges, basically the layers covering the brain, particularly the skull bone marrow. And that's where all these immune cells are in high density that are patrolling the brain. And so it really implicated spike protein per se, in people who've had Covid.
And then the other German study, which was so striking: in mild Covid, the majority of people, 10 months later, had all this signature by MRI, quantitative MRI, of major inflammation, with free water and this so-called mean diffusivity, which is basically the leaking and, you know, the inflammation of the brain.(31:01):And that's as long as they followed the people. You know, if they followed them three years, they'd probably still see this. And so there's a lot of brain inflammation that is linked to the symptoms, as you've described: you know, the brain fog, the memory, executive function. But we have no remedy. We have no way. How can we stop the process? How can we turn it around, like, as you mentioned, with a JAK/STAT inhibitor, or in other ways that we desperately need to get into testing? So that was one thing I wonder about. I mean, I think people who have had the symptoms of cognitive effects know there's something going wrong in their brain, but here is, you know, kind of living proof that what they're sensing, now you can see it. Thoughts about that?Hannah Davis (31:52):I mean, I think the research is just staggering. It's so, so validating, as someone, you know, who was living this and living the severity of it, you know, without research, for years. It's wonderful to finally see so many things come out. But it's overwhelming research, and I don't understand kind of the lack of urgency. Those are two huge, huge studies with huge implications. You know, that the spike would still be in the skull like that, in the bone marrow like that. And the neuroinflammation, I think, you know, feels very obvious in terms of how the symptoms end up presenting. Why aren't we trialing things? This is just destroying people's lives. Even if you don't care about people's lives, it will destroy the economy. Like, people are still getting this. This is not decreasing.
These are really, really substantial, tangible injuries that are happening.Eric Topol (32:52):Yeah, I know. And there's not enough respect for preventing this. The only way we know to prevent it for sure is just not to get Covid, of course. Right. And then, you know, things like vaccines help to some extent (the magnitude, we don't know for sure), and, you know, maybe metformin helps. But, you know, prevention, and everyone's guard (not everyone, but, you know, the vast majority) has really been let down at this point, when there's not as much circulating virus as there has been. Now, another area that has really lit up since our review is autoimmune diseases. So we know there's this common link in some people with Long Covid: there's lots of autoantibodies and self-destruction that's ongoing. The immune system has gone haywire. But now we've learned, you know, of this much higher incidence of rheumatoid arthritis and lupus, and across, you know, every one of the autoimmune diseases.(33:44):So the impact, besides the brain: autoimmune diseases. And then the one that just blows me away. At the beginning of the pandemic, even in the first year, we were starting to see more people showing up with type 2 diabetes, and we'd say, ah, well, it must be a coincidence. And now there are 12 large studies, and every single one shows a significant increase in type 2 diabetes, and possibly even autoimmune diabetes, which makes sense. So this is the thing I wanted to clarify, because a lot of people get mixed up about this, Hannah. There's the symptoms of Long Covid, some of which we reviewed, many on the long list we haven't. But then there's also the sequelae, the organ hits, like the diabetes and immune system and the brain, and, you know, also obviously kidney and heart, and on and on. Can you help differentiate?
Because a lot of people get mixed up by all this stuff.Hannah Davis (34:46):Yeah, I mean, I think, you know, we started out with symptoms because that's what we knew, that's what we were talking about. But I do think it would be helpful to do a big review on conditions, and that does include ME/CFS and dysautonomia, but also includes diabetes, includes heart attacks and strokes, includes dementia risks. And yeah, I think the difficulty with kind of figuring out what percent of Long Covid is each of these conditions is really biased by the fact that doctors can't recognize ME/CFS and dysautonomia, so it doesn't end up in the EHR data. And so we can't really do these large-scale analyses figuring out the percentage of what is what. But I saw someone describe Long Covid recently as like a large-scale neurocognitive impairment emergency, a large-scale cardiovascular event emergency. I think those are extremely accurate. The immune system dysfunction is really severe. I really would like to see the conversation start moving more toward the conditions and the pathophysiologies, based on what we're finding, more than just the symptoms.Eric Topol (36:15):Right. And then, you know, there's this other aspect of the known unknowns, as with two other viruses. So, for example, back in 1918 with influenza, it took 15 years or more to see that it would lead to a significantly increased risk of Parkinson's disease. And then with polio, the post-polio syndrome showed up up to 30 years later, with profound progressive muscular atrophy and, you know, falls and all sorts of major neurologic hits that were due to the original poliovirus. And so, yeah, some of the things that we're learning here with Long Covid hopefully will spill over to all these other post-infectious processes.
But I think what's been emphasized in our discussion is how much more we really do need to learn, how we desperately need some treatments, how we desperately need to have the respect for this syndrome that it deserves, which still isn't there. It's just unfathomable to me that we still have people dissing it on a daily basis, and not, you know, a small minority, but actually a pretty strident group that's not so small.(37:35):Now, before we wrap up, what have I missed here, Hannah? Because this is a rarefied opportunity to have a sit-down with you about what's going on in Long Covid, and also to emphasize citizen science here, because if there's anything I've ever seen in my career that shows the importance of citizen science, it's been the Long Covid story, with you as one of the leaders of it. So have I missed something?Hannah Davis (38:05):I feel like we actually covered a pretty good bit. I would say maybe just, for people listening, emphasizing that Long Covid is still happening. I think, you know, so many people that we see recently got Long Covid after getting vaccinated or having a prior infection and just kind of relaxing all their precautions, and they're angry. You know, the newer group of Long Covid folks are angry because they were lied to about being safe, and that's completely reasonable. You know, that it's still happening in one in 10 vaccinated Omicron infections is a huge deal. And I think, yeah, just re-emphasizing that. But overall, yeah, you know, this is very serious. Despite all the accusations of fearmongering, my MO for Twitter, really, honestly, is that I don't put extreme stuff online. But I really do believe that this is currently leading to, you know, higher rates of heart attacks.(39:08):I do believe that we will see a wave of early-onset dementia. That honestly is happening already, you know, happening in my friend group already.
And like you said, there's a lot of unknowns that can be speculated about. The fact that we see EBV reactivation in so many people: are we going to see a lot of new-onset multiple sclerosis, you know, lymphomas, other EBV sequelae? Like, the danger's not over. There's pretty solid evidence for some pretty serious things to come, and, you know, I keep saying we've got to get on top of it now, but...Eric Topol (39:55):Well, I am, unfortunately some people don't realize it, the eternal optimist that we will get there. It's taking too long, but we've got to ratchet up the heat, get projects like RECOVER and elsewhere in the world to go into high gear and, you know, really get to testing the promising candidates you have so aptly outlined here and in your writings. You know, I think this has been an incredible relationship that I've been able to develop with you and your colleagues, and I've learned so much from you, and I will continue to be following you. I hope everyone listening, if they don't already, will follow you and others who are trying to keep us up to speed. Just this week again, there was a Swiss study, a two-year follow-up, showing that the number of people that were still affected significantly with Long Covid symptoms at two years was 18%.(40:58):That's a lot of folks, and they were unvaccinated, but still. I mean, in order to have two-year follow-up, you're going to see a lot of people from before the advent of vaccines. So, if you look at the data, the research carefully, it gets better quality as time goes on, because we have control groups, we have matched controls, we have, you know, hopefully the beginning of randomized trials of treatment. We'll hopefully get some light.
And part of the reason we're going to get there is because of you and others, getting us fully aware, keeping track of things, getting the research community to be accountable and not just pass off the same old stuff, which reflects not really understanding the condition. I mean, how can you start to really improve it if you don't even understand it? And who are you going to turn to to understand it? You don't just look at, you know, MRI brain studies or immune lab studies. You've got to talk to the folks who know it, and know it so well. All right, well, this has been hopefully one of many more conversations we'll have in the future, and at some point to celebrate some progress, which is what we so desperately need. Thank you so much, Hannah.Hannah Davis (42:19):Thank you so much. Absolute pleasure.LinksOur Long Covid review with Lisa McCorkell and Julia Moore-Vogelhttps://www.nature.com/articles/s41579-022-00846-2The Brain and Long Covidhttps://erictopol.substack.com/p/the-brain-and-long-covidHeightened Risk of Autoimmune Diseaseshttps://erictopol.substack.com/p/the-heightened-risk-of-autoimmuneCovid and the Risk of Type 2 Diabeteshttps://erictopol.substack.com/p/new-diabetes-post-acute-covid-pascThanks for listening and reading Ground Truths.Please share if you found this informative.Your free subscription denotes your support of this work. Should you decide to become a paid subscriber, you should know that all proceeds go to support Scripps Research. That has already helped to bring on several of our summer high school and college interns. Get full access to Ground Truths at erictopol.substack.com/subscribe
May 22, 2023 • 43min

Peter Lee and the Impact of GPT-4 + Large Language AI Models in Medicine

Link to the book: The AI Revolution in MedicineLink to my review of the bookLink to the Sparks of Artificial General Intelligence preprint we discussedLink to Peter’s paper on GPT-4 in NEJMTranscript (with a few highlights in bold of many parts that could be bolded!)Eric Topol (00:00):Hello, I'm Eric Topol, and I'm really delighted to have with me Peter Lee, who's the director of Microsoft Research and who is the author, along with a couple of colleagues, of an incredible book called The AI Revolution in Medicine: GPT-4 and Beyond. Welcome, Peter.Peter Lee (00:20):Hello Eric. And thanks so much for having me on. This is a real honor to be here.Eric Topol (00:24):Well, I think you are in the enviable position of having spent now more than seven months looking at GPT-4’s capability, particularly in the health and medicine space. And it was great that you recorded that in a book for everyone else to learn from, because you had such a nice head start. I guess what I wanted to start with is, I mean, it's a phenomenal book. [holding the book up] I can't resist this prop.Peter Lee (00:52):Eric Topol (00:53):When I got it, I stayed up most of the night because I couldn't put it down. It is so engrossing. But when you first got your hands on this and started testing it, what were your initial thoughts?Peter Lee (01:09):Yeah. Let me first start by saying thank you for the nice words about the book, but really, so much of the credit goes to the co-authors, Carey Goldberg and Zak Kohane. Carey in particular took my overly academic writing, and I suspect you have the same kind of writing style, as well as Zak's pretty academic writing, and helped turn it into something that would be approachable to non-computer scientists and, as she put it, as much as possible a page turner. So I'm glad that her work helped make the book an easy read.
Eric Topol (01:54):I just want to say you're very humble, because the first three chapters that you wrote yourself were clearly the best ones for me. Anyway, I don't mean to interrupt, but it is an exceptional book, really.Peter Lee (02:06):Oh, thank you very much. It means a lot hearing that from you. You know, my own view is that the best writing and the best analyses and the best ideas for applications, or not, of this type of technology in medicine are yet to come. But you're right that I did benefit from this seven-month head start. And so, you know, I think the timing is very good, but I'm hoping that much better books and much better writings and ideas will come. You know, when you start with something like this, I suspect, Eric, you had the same thing: you start off with a lot of skepticism. In fact, I've now made light of this; I talk about the nine stages of grief that you have to go through.(02:55):I was extremely skeptical. Of course, I was very aware of GPT-2, GPT-3 and GPT-3.5. I understand, you know, what goes into those models really deeply, and so some of the claims, when I was exposed to the early development of GPT-4, just seemed outlandish and impossible. So I was, you know, skeptical, somewhat quietly skeptical. We've all been around the block before and, you know, we've heard lots of AI claims. And I was in that state for maybe more than two weeks. And then in those two weeks I started to become annoyed, because I saw some of my colleagues falling into what I felt was the trap of getting fooled by this technology. And then that turned into frustration and fear. I actually got angry.
And there's one colleague, who I won't name, to whom I've since had to apologize, because then I moved into the phase of amazement. You start to encounter things that you can't explain that this thing seems to be doing, and that turns into joy.(04:04):I remember the exhilaration of thinking, wow, I did not think I would live long enough to see a technology like this. And then intensity: there was a period of about three days when I didn't sleep; I was just experimenting. Then you run into some limits and some areas of puzzlement, and that's a phase of chagrin. And then real dangerous missteps and mistakes that this system can make, that you realize might end up really hurting people. And then, you know, ChatGPT gets released, and to our surprise it catches fire with people. And we learn directly through communications that some clinicians are using it in clinical settings, and that heightens the concern. And I can't say I'm in the ninth stage of enlightenment yet, but you do become very committed to wanting to help the medical community get up to speed and to be in a position to take ownership of the question of whether, when, and how a technology like this should be used. And that was really the motivating force behind the book. It was really that journey. And that journey also has given me patience with everyone else in the world, because I realize everyone else in the world has to go through those same nine stages.Eric Topol (05:35):Well, those stages that you went through are actually a great way to articulate this pluripotent technology. I mean, I think you touched on that. ChatGPT was released November 30th and within 90 days had a billion distinct users, which is beyond anything in history. And then of course, this transcended that quite a bit, as you showed in the book, coming out, you know, just a very short time later in March. And I think a lot of people want access to GPT-4 because they know that there is this jump in its capabilities.
But the book starts off, after Sam Altman's foreword, which was also nice, because he said, you know, this is just early; as you pointed out, there's a lot more to come in the large language model space.(06:30):But the grabber to me was this futuristic scene: a second-year medical resident who's using an app on the phone to get to the latest GPT to help manage her patient, and then all the other things that it's doing to check on her patients and do all the tasks that clinicians don't really want to do, that they need help with. And that just grabs you as to the futuristic potential, which may not be so far away. And I think then you get into the nuts and bolts. But one of the things that I think is a misnomer, that you really nailed, is how you say it isn't just that it generates, but it really is great at editing and analyzing. And here it's called generative AI. Can you expound on that? And on its unbelievable conversationalist capability.Peter Lee (07:23):Yeah. You know, the term generative AI, I tried for a while to push back on this, but I think it's just caught on and I've given up on that. And I get it. You know, I think especially with ChatGPT it's of course reasonable for the public to be, you know, infatuated with a thing that can write love letters, write poetry, and that generative capability. And of course, you know, school children writing their essays and so on this way. But as you say, one thing we have discovered through a lot of experimentation is it's actually somewhat of a marginal generator of text. That is, it is not as good a poet as good human poets. You know, people have programmed GPT-4 to try to write whole novels, and it can do that,(08:24):but they aren't great. And it's a challenge; you know, within Microsoft, our Nuance division has been integrating GPT-4 to help write clinical encounter notes.
And you can tell it's just hitting at the very limits of the capabilities and intelligence of GPT-4 to be able to do that well. But one area where it really excels is in evaluating or judging or reviewing things, and we've seen that over and over again. In chapter three, you know, I have this example of its analysis of some contemporary poetry, which is just stunning in its kind of insights and its use of metaphor and allegory. But then in other situations, in interactions with the New England Journal of Medicine, in experimentations with the use of GPT-4 as an adjunct to the review process for papers, it is just incredibly insightful in spotting inconsistencies, missing citations to precursor studies, to understanding lack of inclusivity and diversity, you know, in approach or in terminology.(09:49):And these sorts of review things end up being especially intriguing for me when we think about the whole problem of medical errors and the possibility of using GPT-4 to look over the work of doctors, of nurses, of insurance adjudicators and others, just as a second set of eyes to check for errors, to check for kind of missing possibilities. If there's a differential diagnosis, is there a possibility that's been missed? If there's a calculation for an IV medication administration, well, is the calculation done correctly or not? And it's in those types of applications of GPT-4 as a reviewer, as a second set of eyes, that I think I've been especially impressed. And we try to highlight that in the book.Eric Topol (10:43):Yeah. That's one of the very illuminating things about going well beyond what are the assumed utilities. In a little bit, we'll talk about the liabilities, but certainly these are functions, part of that pluripotent spectrum, that I think a lot of people are not aware of.
One of particular interest in the medical space is something I had not anticipated. You know, when I wrote the Deep Medicine chapter “Deep Empathy,” I said, well, we've got to rely totally on humans for that. But here you had examples that were quite stunning, of coaching physicians by going through their communication, their note, and saying, you know, you could have been more sensitive with this, you could have done that, you could be more empathic. And as you know, since the book was published, there was an interesting study that compared a couple hundred questions directed to physicians and then to ChatGPT, which of course we wouldn't necessarily say is state of the art at this point, right? But what was seen was that the chatbot exhibited more empathy and more sensitive, higher-quality responses. So do you think, ultimately, that this will be a way we can actually use technology to foster better communication between clinicians and patients?Peter Lee (12:10):Well, I'll try to answer that, but then I want to turn the question to you, because I'm just dying to understand how others, especially leading thinkers like you, think about this. Because as a human being and as a patient, there's something about this that doesn't quite sit right. You know, I want the empathy to come from my doctor, my human doctor; that's in my heart the way that I feel. And yet there's just no getting around the fact that GPT-4, and even weaker versions like GPT-3.5 and ChatGPT, can be remarkably empathetic. And as you say, there was that study that came out of UC San Diego Medicine and Johns Hopkins Medicine that, you know, was just another fairly significant piece of evidence to that point.Here's another example. You know, my colleague Greg Moore was assisting a patient who had late-stage pancreatic cancer.(13:10):And there was a real struggle for both the specialists and for Greg to know what to say to this desperate patient, how to support this patient.
And the remarkable thing was that Greg decided to use GPT-4 to get advice. They had a conversation, and there was very detailed advice to Greg on what to say and how to support this patient. And at the end, when Greg said thank you, GPT-4 said, you're welcome, Greg, but what about you? You know, do you have all the support that you need? This must be very difficult for you. So the empathy just goes remarkably deep. And, you know, if you just look at how busy good doctors and especially nurses are, you can start to realize that people don't necessarily have the time to think about that.(14:02):And also, what GPT-4 is suggesting ends up being a prompt to the human doctor or the human nurse to actually take the time to reflect on what the patient might need to hear, right? What might be going through their minds. And so there is some empathy aid going on here. At the same time, I think as a society, we have to understand how comfortable we are with this concept of empathetic care being assisted by a machine. And this is something that I'm very keen and curious about, just in the medical community. And that's why I wanted to turn the question back around to you: how do you see this?Eric Topol (14:46):Yeah, I didn't foresee this, but I also recognize that we're talking about a machine vector of it. I mean, it's a pseudo-empathy of sorts. But the fact that it can process where things can be improved, and it can help foster that, essentially, are features that I think are extraordinary. I wouldn't have predicted that. And I've seen now, you know, many good examples in the book and even beyond. So it's a welcome thing, and it adds another capability. It isn't that physicians and nurses are lacking empathy; their biggest issue, I think, is lacking time. Yes.
And the fact is that someday there's a rescue in the works, hopefully: that a lot of that time spent on tasks that are, you know, the data clerk functions and other burdens will be alleviated, and the keyboard liberation that has been a fantasy of mine for some years maybe ultimately will be achieved.(15:52):And the other thing I think that's really special in the book that I wanted to comment on: there is a chapter by, I think, Carey Goldberg, and that was about the patient side, right? All the talk is about, you know, doctors and clinicians, but it's the patients who could derive the most. And out of those first billion people that used ChatGPT, many were of course health and medical question conversations. But these are patients; we're all patients. And the idea that you could have a personal health advisor, a concept which was developed in that chapter, and the whole idea that, as opposed to a search today, you could get citations and it would be at the literacy level of the person making the prompts. Yeah. Could you comment about that? Because this democratization of a high-level capability for getting, you know, very useful information and conversation seems to be very much underemphasized.Peter Lee (16:56):Yeah. And I think this is also where some of the most difficult societal and regulatory questions might come, because while the medical community knows how to abide by regulations, and there is a regulatory framework, the same is much less true for a doctor in your pocket, which is what GPT-4 and, you know, other large language models that are emerging can become. And, you know, I think for me personally, I have come to depend on GPT-4. I use it through the Bing search engine. Sometimes it's simple things that previously were mysterious. Like, I received an explanation of benefits notice from my insurance company, and this notice has some dollar figures in it.
It has some CPT codes, and I have no idea what they mean. And sometimes it's things that my son or my wife got treated for.(17:55):It's just mysterious. It's great to have an AI that can decode these things and can answer questions. Similarly, when I go for a medical checkup and I get my blood test results, just decoding those CBC lab test numbers is, again, an incredible convenience. But then even more, you know, my father recently passed away. He was 90 years old, but he was very ill for the last year or so of his life, seeing various specialists. My two sisters and I all lived far away from him, and so we were struggling to take care of him and to understand his medical care. And it's a situation that I found all too common in our world right now. And it actually creates stress and frays relationships amongst siblings and so on.(18:56):And so just having an AI that can take all of the data from the three different specialists and, you know, have it all summed up, and be able to answer questions, be able to summarize and communicate efficiently from one specialist to the next, to really provide kind of some sound advice, ends up being a godsend. Not so much for my father's health, because he was on a trajectory that was really not going to be changed, but just for the peace of mind and the relationships between me and my two sisters and my mother-in-law. And so it's that kind of empowerment. You know, in corporate-speak at Microsoft, we would say that's empowerment of a consumer, but it is truly empowerment. I mean, it's for real. And, you know, that kind of use of these technologies, I think, is spreading very, very rapidly, and I think is incredibly empowering.(19:57):Now, the big question is, can the medical community really harness that kind of empowered patient? I think there's a desire to do that. That's always been one of the big dreams, I think, in medicine today. And then the other question is, the assistants are fallible.
They make mistakes. And so, you know, what is the regulatory or legal or, you know, ethical disposition of that? These are still big questions I think we have to answer. But the, you know, overall big picture is that there's an incredible potential to empower patients with a new tool and also to kind of democratize access to really expert medical information. And I just think you're absolutely right: it doesn't get enough attention. Even in our book, we only devoted one chapter to this, right?Eric Topol (21:00):Right. But at least it was in there; that's good. At least you had it, because I think it's so critical to figure that out. And as you say, the ability to discriminate bad information, confabulation, hallucination, among people without medical training is much more challenging. Yes. But I also liked in the book how you could go to another conversation to audit the first one, or a third one, so that if you ever are suspicious that you might not be getting the best information, you could do, like, double data entry or triple data entry, you know. I thought that was really interesting. Now, Microsoft made a humongous investment in OpenAI. Yesterday Sam Altman was getting grilled, well, not really grilled, it was in a much more friendly sense, I'm sure, about what should we do. We have this two-edged sword the likes of which we've never seen.(21:59):Of course, you get into the book about does it really matter if it's AGI or some advanced intelligence, if it's working well? It's kind of like the explainability and black box story. But of course, it can get off the tracks. We know that. And there isn't that much difference perhaps between ChatGPT and GPT-4 established so far. So in that discussion, he said, well, we've got to have regulatory oversight and licensing. And it's very complex.
I mean, what are your thoughts as to how to deal with the potential limitations that are still there, that may be difficult to eradicate, that are the worries?Peter Lee (22:43):Right. You know, at least when it comes to medicine and healthcare, I personally can't imagine that this should not be regulated. And it just seems also more approachable to think about regulation, because the whole practice of medicine has grown up in this regulated space. If there's any part of life and of our society that knows how to deal with regulation and can actually make regulations work, it is medicine. Now, having said that, I do understand, coming from Microsoft, and even more so for Sam Altman coming from OpenAI, it can sometimes be interpreted as being self-serving, wanting to set up regulatory barriers against others. I would say in Sam Altman's defense that back in 2019, just prior to the release of GPT-2, Sam Altman made public calls for thinking about regulation, for the need for external audit and, you know, for the world to prepare for the possibility of AI technologies that would be approaching AGI.(24:05):And in fact, just a month before the release of GPT-4, he made a very public call, at even greater length, asking for the world to do the same things. And so I think one thing that's misunderstood about Sam is that he's been saying the same thing for years. It isn't new. And so I think that should give pause to people who are suspicious of Sam's motives in calling for regulation, because he basically has not changed his tune, at least going back to 2019.
But if we just put that aside, you know, what I hope for most of all is that the medical community, and I really look to leading thinkers like you, particularly in our best medical research institutions, would quickly move to take assertive ownership of the fundamental questions of whether, when, and how a technology like this should be used, and would engage in the research to create the foundations, you know, for sensible regulations, with an understanding that this isn't about GPT-4; this is about the next three or four or five even more powerful models.(25:31):And so, you know, ideally, I think it's going to take some real research, some real inventiveness. What we explain in chapter nine of the book is that I don't believe we have a workable regulatory framework right now; we need to develop it. But the foundations for that, I think, have to be a product of research, and ideally research from our best thinkers in the medical research field. The race that we have in front of us is that regulators will rightfully feel very bad if large numbers of people start to get injured, or worse, because of the lack of regulation. And so, you know, you can't blame them for wanting to intervene if that starts to happen. So we do have kind of an urgency here, whereas normally our medical research on, say, methods for clinical validation of large language models might take, you know, several years to really come to fruition. So there's a problem there. But I think the medical field can very quickly come up with codes of conduct, guidelines and expectations, and the education, so that people can start to understand the technology as well as possible.Eric Topol (26:58):Yeah.
And I think the tricky part here is that, as you know, there are a lot of doomsayers and existential threats that have been laid out by people who I respect, and I know you do as well, like Geoffrey Hinton, who is concerned. But, you know, let's say you have a multimodal AI like GPT-4, and you want to put in your skin rash or skin lesion to it. I mean, how can you regulate everything? And, you know, if you just go to Bing and you go to creative mode, you're going to get all kinds of responses. So this is a new animal, this is a new alien. The question is, as you say, we don't have a framework, and we should move to get one. To me, the biggest question is the one that you really got to in the book, and I know you continue to, because it was within two days of your book’s publishing that the famous preprint came out, the Sparks preprint, from all your team at Microsoft Research, which is incredible.(27:54):A 169-page preprint, downloaded I don't know how many millions of times already, but that is a rich preprint; we'll put in the link, of course. But there, the question is, what are we seeing here? Is this really just a stochastic parrot, a JPEG with, you know, lossy compression and juxtaposition of word linguistics? Or is this a form of intelligence that we haven't seen from machines ever before? And you get at that in so many ways, and you point out, does it matter? I wonder if you could just expound on this, because to me, this really is the fundamental question.Peter Lee (28:42):Yeah. I think I get into that in the book in chapter three. And I think chapter three is my expression of frustration on this, because it's just a machine, right? And in that sense, yes, it is just a stochastic parrot; you know, it's a big probabilistic machine that's making guesses on the next word that it should spit out, or that you will spit out. And it's making a projection for a whole conversation.
And you know, in that vein, the first example I use in chapter three is the analysis of this poem. And the poem talks about being splashed with cold water and feeling fever. And the machine hasn't felt any of those things. And so when it's opining about those lines in the poem, it can't possibly be authentic. And so, you know, we can't say it understands these things.(29:39):It hasn't experienced these things. But the frustration I have as a scientist, and here's where I have to be very disciplined to be a scientist, is the inability to prove that. Now, there has been some very, very good research by researchers who I really respect and admire. I mean, there was Josh Tenenbaum's whole team and his colleagues at MIT, or at Harvard, the University of Washington, and the Allen Institute, and many, many others who have done some really remarkable research, research that's directly relevant to this question of does the large language model, quote unquote, understand what it's hearing and what it's saying? Oftentimes they're providing tests that are grounded in the foundational theories about why these things can't possibly be understanding what they're saying, and therefore these tests are designed to expose these shortcomings in large language models. But what's been frustrating, but also kind of amazing, is that GPT-4 tends to pass most, if not all, of these tests!(31:01):And so it leaves you, if we're really honest as scientists, even if we know this thing, you know, is not sentient, it leaves us in this place where we're without definitive proof of that. And the arguments from some of the naysayers, who I also deeply respect, and I've really read so much of their work, don't strike me as convincing proof either. You know, because if you say, well, here's a problem that I can use to cause GPT-4 to get tripped up, I have no shortage of problems; I think I could get you tripped up, Eric.
And yet that does not prove that you are not intelligent. And so I think we're left with this set of two mysteries. One is, we see GPT-4 doing things that we can't explain given our current understanding of how a neural transformer operates.(32:09):And then secondly, we're lacking a test, derived from theory and reason, that consistently shows a limitation of GPT-4’s understanding abilities. And so in my heart, of course, I understand these things as machines, and I actively resist anthropomorphizing these machines. But, and maybe I'm fooling myself, as a disciplined scientist I'm trying to stay grounded in proof and evidence. And right at the moment, I don't believe the world has that. We'll get there. We're understanding more and more every day, but at the moment we don't have it.Eric Topol (32:55):I think hopefully everyone who's listening is getting some experience now in these large language models and realizing how much fun it is, and how we're in a new era in our lives. This is a turning point.Peter Lee (33:13):Yeah. That's stage four, amazement and joy.Eric Topol (33:16):Yeah. No, there's no question. And you know, I think about you, Peter, because, you know, at one point you were in a high-level academic post at Carnegie Mellon, one of our leading computer science institutions in the country, in the world. And now you're at this enviable spot of having helped Microsoft to get engaged with a risk, I mean a big, big bet, and one that's fascinating, and that is obviously just an iteration for many things to come. So I wonder if you could just give us your sense about where you think we'll be headed over the next few years, because of the velocity that this is moving at. Not only is it this new technology that is so different than anything previously, but to go, you know, from a few months to get to where things are now, and to know that this road is still a long way in front of us.
What's your sense of, you know, are we going to get hallucinations under control? Are we going to start to see this pluripotency roll out, particularly in the health and medicine arena?Peter Lee (34:35):Yeah. You know, I think first off, I can't say enough good things about the team at OpenAI. I think their dedication and their focus, and, I think it'll come out eventually, also the care that they've taken in understanding the potential risks and really trying to create a model for how to cope with those things. I think as those stories come out, it'll be quite impressive. At the same time, it's also incredibly disruptive. Even for us as researchers, it just disrupts everything, right? You know, I was having this reaction after I read Siddhartha Mukherjee’s new book, The Song of the Cell, because in that book on cellular biology, one of the prime characters historically is Rudolf Virchow, who confirmed cell mitosis. And, you know, the thing that was disruptive about Virchow is that, well, first off, the whole prior theory of cell generation was debunked.(35:44):That didn't invalidate the scientists who were working on it, but it certainly debunked many of their scientific legacies. And the other is, after Virchow, to call yourself a biology researcher, you had to have a microscope, and you had to know how to use it. And in a way, there's a similar scientific disruption here, where there are now new tools and new computing infrastructure that you need if you want to call yourself a computer science researcher. And that's really incredibly disruptive. So I see a kind of bifurcation that's likely to happen. I think the team at OpenAI, with Microsoft's support and collaboration, will continue to push the boundaries and the frontiers with the idea of seeing how close to AGI can truly be achieved, largely through scale.
And, you know, there will be tremendous focus of attention on improving its abilities in mathematics and in planning and being able to use tools, and so on. And in that, there's a strong suspicion and belief that as greater and greater levels of general cognitive intelligence are achieved, issues around things like hallucination will become much more manageable, or at least manageable to the same extent that they're manageable in human beings.(37:25):But then I think there's going to be an explosion of activity in much smaller, more specialized models as well. I think there's going to be a gigantic explosion in, say, open-source smaller models, and those models probably will not be as steerable and alignable, so they might have more uncontrollable hallucination, might go off the rails more easily. But for the right applications, integrated into the right settings, that might not matter. And so exactly how these models will get used, and also what dangers they might pose, what negative consequences they might bring, is hard to predict. But I do think we're going to see those two different flavors of these large AI systems coming really very, very quickly, much of it in the next year.Eric Topol (38:23):Well, that's an interesting perspective, and an important one. In the book you wrote this sentence that I thought was particularly notable: “the neural network here is so large that only a handful of organizations have enough computing power to train it.” We're talking about 20 or 30,000 GPUs, something like that; we're lucky to have two here, or four. This is something that, again, if you were sitting at Carnegie Mellon right now versus sitting at Microsoft or some of the tech titan companies that have these capabilities... can you comment about this? Because this sets up a very, you know, distinct situation we've not seen before.Peter Lee (39:08):Right.
First off, you know, I can't really comment on the size of the compute infrastructure for training these things, but it is, as we wrote in the book, at a size that very, very few organizations can manage at this point. This has got to change at some point in the future. And even on the inference side, forgetting about training, you know, GPT-4 is much more power hungry than the human brain. The human brain is an existence proof that there must be much more efficient architectures for accomplishing the same tasks. So I think there's really a lot yet to discover and a lot of headroom for improvement. But you know, what I think is ultimately the kind of challenge that I see here is that a technology like this could become as essential an infrastructure of life as the mobile phone in your pocket.Peter Lee (40:18):And so then the question is, if this technology should also become as necessary to modern life as the technology in your pocket, how quickly can its cost get to a point where that can be reasonably accomplished? If we don't accomplish that, then we risk creating new digital divides that would be extremely destructive to society. And what we want to do here is to really empower everybody, if it does turn out that this technology becomes as empowering as we think it could be.Eric Topol (41:04):Right. I think your point about the efficiency, the drain on electricity, and no less water for cooling. I mean, these are big-ticket things, and, you know, hopefully simulating the human brain, in its less power-hungry state, will become part of the future as well.Peter Lee (41:24):Well, and hopefully these technologies will solve problems like, you know, clean energy, right? Fusion containment, lower-energy production of fertilizers, better nanoparticles for more efficient lubricants.
There are new catalysts for carbon capture. If you think about it in terms of making a bet to kind of invent our way out of climate disaster, this is one of the tools that you would consider betting on.Eric Topol (42:01):Oh, absolutely. You know, I'm going to be talking soon with Al Gore about that, and I know he's quite enthusiastic about the potential. This is engrossing, having this conversation, and I would like to talk to you for many hours, but I know you have to go. But I just want to say, as I wrote in my review of the book, talking with you is very different than talking with, you know, somebody with bravado. You have great humility, and you're so balanced that when I hear something from you or read something that you've written, it's a very different perspective, because I don't know anybody who's more balanced, who is more trying to say it like it is. And so, you know, not everybody knows you, though a lot of people who might be listening do. I just want to add that and just say thank you for taking the effort: not just that you obviously wanted to experiment with GPT-4, but you also, I think, put this together in a great package so others can learn from it, and of course, expand from that as we move ahead in this new era.(43:06):So, Peter, thank you. It's really a privilege to have this conversation.Peter Lee (43:11):Oh, thank you, Eric. You're really too kind. But it means a lot to me to hear that from you. So thank you.Thanks for listening and/or reading Ground Truths. If you found it as interesting a conversation as I did, please share it.Much appreciation to paid subscribers—you’ve already helped fund many high school and college students in our summer intern program at Scripps Research, and all proceeds from Ground Truths go to Scripps Research. Get full access to Ground Truths at erictopol.substack.com/subscribe
May 5, 2023 • 35min

Straight talk with Michael Osterholm

Transcript Eric (00:00):Okay. Hello, this is Eric Topol, and this is a rare privilege for me to interview my favorite epidemiologist, Dr. Michael Osterholm. He is Regents Professor at the University of Minnesota. He's director of CIDRAP, which is certainly one of the leading entities around the world for public health. And we've been friends for the last few years, which we'll talk about. So, welcome, Michael. Such a great privilege to have you today.Michael (00:31):Well, thank you, the honor really is mine. As I have shared with you, and others know very well, you have been a real mentor to me and many others during this pandemic. And I could never repay you adequately for all that you've helped teach me throughout these last three years. It's been immeasurable.Eric (00:49):No, you're too kind. I think it's much the opposite way: I've learned so much from you, because this isn't my area, as you well know. I thought we'd start with, of course, right now things are relatively good for the pandemic in the United States and mostly around the world, with relatively fewer cases, hospitalizations, and deaths. But obviously people are still getting infected. And maybe you can tell us about the recent case that you went through, which would be enlightening.End of the Pandemic?Michael (01:28):Yeah, I think we're all trying to understand when the pandemic ends. And as we've discussed many times before, we'll probably know that about a year after it ends; then we'll say, yep, that was the end of it. Don’t for a moment think that the end means there won't be cases. You know, every infectious agent that we think of as causing a pandemic still comes back, whether it be influenza or potentially coronaviruses. They will continue to circulate. It's a matter of how many cases occur, how many people die. And I think that's an important point. There isn't really a definition for when a pandemic ends.
I guess it's just when you feel like it's over. And clearly the world has come to that conclusion already. You don't need an epidemiologist or a politician to tell 'em that the pandemic's over; that's how they feel. Yet we're still seeing about 165 deaths a day in this country from Covid.(02:24):So it's hardly gone away completely. But we do have to acknowledge that most of those deaths are older individuals, people who have not been vaccinated recently with bivalent boosters. And in that regard, we could surely reduce the illnesses even further. I don't have any faith right now in the surveillance systems that have been set up to look at cases around the world. We've pretty much dismantled that. We are not testing people in a way that results in reports being made to public health agencies, whether in this country or anywhere else in the world. So I really look at two other things. One is deaths. And even there, we realize that there still is a challenge in terms of how complete death reporting due to Covid is. But then the other thing we're looking at has been, you might say, a public health revolution during the pandemic, and I say revolution because it's really changed things.(03:19):And that is the issue of wastewater surveillance. Using wastewater surveillance, we've been able to ascertain in many areas of the world, in fact, a much better sense of how much virus is in the community. And so, just in keeping with your very thoughtful comment about case numbers dropping, that's exactly what we're seeing in most locations in this country too. We, for example, here in the Minneapolis-St. Paul area, have seen a dramatic decrease in wastewater activity in the last two months. So I think we're in a place right now where I can hope it'll only get better. On the other hand, you know, I have a lot of respect for this virus, and frankly, we all ought to have a lot of humility.
We don't know if another variant will emerge that, given how much immunity we have in our population, will somehow break through and cause a surge in cases, or whether this will become kind of the norm and we'll see less and less.On Getting Covid(04:16):Now, you asked me about my case. Yeah. I have to say that I speak about this with really some trepidation, in the sense that I was not gonna get this. I had been very faithful throughout the course of the pandemic, wearing my N95 respirator when I went out and about; I had been fit tested. In addition, when we finally did socialize in our home, we had what became affectionately known as the Osterholm Home Rule. You could not have had known contact with someone with Covid in the five previous days. You could have no symptoms yourself on the day of, and you had to test negative by lateral flow test within three to four hours of coming. And we would entertain small four- to six-person parties, and it was going wonderfully.(05:07):And then on the night of March 10th, a colleague from work came over with Fern and myself. Three of us had dinner. We went down our elevator in our building here, where we're 31 stories up; no one else was in the elevator. And then we proceeded to go to a very small music venue where we wore N95s. We were some distance from any other people, and we were there for an hour and 45 minutes. And literally two days later, almost 48 hours later, all three of us developed symptoms. None of us converted for another 24 hours. And then at that point, all three of us tested lateral flow positive. We all three took Paxlovid. I took it and was starting to feel better after that fifth day.(05:59):And then I kind of crashed, and at that point I got a second five-day course of Paxlovid and started to feel better. And, you know, I was very happy to have this behind me. However, over the course of the last 10 days, I have really had significant fatigue.
You know, I'm not one that sleeps a lot. But I can tell you there are multiple times in a day when I'm doing something, even doing what I'm doing right now, where I just feel like I need to fall asleep. It's been really a challenge. The other thing that happened, which was in retrospect a little bit more concerning than I realized at the time: there was a period at about day 10 to 14 into my illness when I started losing my memory on many, many things of, you know, importance.(06:53):I couldn't, for example, tell you what that drink is that's a champagne and orange juice combination. I couldn't find the word mimosa if my life depended on it. If somebody asked me who was in Sleepless in Seattle, I had to think about the movie and who was in it; I couldn't remember. And I mean, in retrospect, I wasn't that concerned, thinking, ah, it's not that bad. And it was actually quite remarkable. This lasted about two and a half, three weeks. And now I think, at least according to those around me, I have gained most of my memory back. But now I have the fatigue picture. So, as much as I don't know where I picked up the virus, all three of us picked it up. And I feel like I have survivor's guilt right now in the sense that, you know, I'm not that concerned about getting infected in a public exposure, given I probably have some pretty good protection, at least for a few more weeks. But nonetheless, I think this potential fatigue issue is really a challenge.Eric (07:52):Yeah. The things that you're bringing up with this: for example, I know you had had the initial series and three boosters, including the bivalent. Was that sometime in September last year?Michael (08:04):Yeah, it was seven and a half months before.Eric (08:07):Yeah. So,Michael (08:07):So that was, and I tried to get a second one at six months. But in Minnesota we actually have a registry, and so it's not just your white card that, you know, you could do it with.
And it wasn't that I was trying to do something illegal, but, you know, this vaccine's just sitting there. So I tried to get another bivalent at six months post my first one, and of course I was turned down. And then, five weeks after that, I got Covid.Eric (08:33):Yeah. And then of course, just recently the FDA and CDC finally came to the conclusion that people of our age group and the immunocompromised certainly have the option that you've advocated for, and that unfortunately you weren't able to get at that time. Although I suspect the protection (you might comment on that, Mike), that there is some protection against infection for the first few months after a booster.Michael (09:00):Yeah. Yeah, absolutely. I mean, I think the studies that we've seen so far, at least, and particularly those from other countries where they have remarkable follow-up on databases, show some initial evidence of protection in those first weeks against getting infected and even potentially against transmission. But that wanes, unfortunately quickly, and it's likely B-cell-related immunity. And then I think, as we all at least believe, the T-cell immunity, which we're still all trying to understand and characterize, probably kicks in and gives us protection against serious illness, hospitalizations, and deaths. But as you and I have looked at, even then, at six months out, you start to see some potential waning of that. And I think that's why we have a real challenge right now. I've said many, many times, we can't boost our way out of this pandemic. And I meant that not because some of us wouldn't be willing to get a vaccine every six months, but because the vast majority of the population would not. And we've even seen here with the first bivalent booster dose, which we know has provided good protection against serious illness, hospitalizations, and deaths: look at the very small proportion of the [age 65+] population that has taken it, less than 40%.
So it's a challenge: how do we get people to keep getting vaccinated? A lot of people say, I'm done. I'm done with it.Eric (10:22):Right, right. Unfortunately, especially those who are at high risk. It's really unfortunate. Now, one of the things you've done recently, among many things, is cover the status of the pandemic today and some liabilities for the future. And you've been working on the future with the blueprint that you put together with experts from around the world to try to map out optimally managing this pandemic’s future and preparing for the next pandemic. Could you give us the skinny on that?Michael (10:52):Well, actually this was a report, now titled Lessons from the Covid War, put out by the Covid Crisis Group, which was a loose affiliation of 34 individuals who had agreed to help develop basic materials with the hope that that would lead to a post-pandemic commission, much like the commission we saw after 9/11. And the person who headed that up actually was the person who did head up the 9/11 Commission also. And there was support from several foundations for this. It became clear, after almost a year of trying to pull together lessons learned and challenges to what we know and don't know, that the US government was not gonna support another commission, either on the legislative side of the government or in the executive branch. Both of them basically said, well, we're not really interested.(11:48):I think that's been a major mistake. But this report, which is now out, does address a number of the shortcomings that we have experienced with this pandemic. And again, you know, in a world where it's so partisan and everyone wants to blame someone for something, this was not meant to blame. This was meant to be what we classically call a hotwash, where we go back over an experience to learn from it. What could we have done differently? How could we have done it? What did we do right?
How do we have to make sure that that's in place in the future? And so this plan is about that very thing. Now, at the same time, I'm writing another book, much like the one I did in 2017, Deadliest Enemy: Our War Against Killer Germs, where I laid out what a pandemic might look like.(12:38):And this one is really to address what we need to learn from this pandemic for the next one. And I go into certain topic areas much more in depth than our report did, as it relates to vaccines, public health actions, lockdowns, all of those things. And so I hope that in, you know, a few months that'll be available, so that it not only lays out what the challenges were but also, given my public health experience of 48 years and having been through these, what I think the lessons learned should be.A Major Prediction and Being Called IrresponsibleEric (13:17):I can't wait to read it. I mean, the roadmap that you've pulled together was really extraordinary. And of course, it addressed things like a pan-coronavirus vaccine and so many others that we can hopefully pursue, and that can also be templates for the future. Now, I want to go back, since we're covering kind of the current and future status. Back in March 2020, you wrote (this was March 2020) that there would be 800,000 deaths in the next 18 months from Covid. Talk about an oracle. I mean, obviously no one would ever have wanted to see that actualized, but how did you know that, Mike? How did you know we were in store for such a dreaded outcome in such an imminent period of time?Michael (14:13):Well, you know, let me take a step back to December of 2019. You know, our center has a very active news team that basically covers infectious disease news from around the world. Even though it's inside of CIDRAP, there's a thick wall between it and me from an editorial standpoint, so I don't have any control over it.
But they notified me that they were picking up information that last week in December, out of Wuhan, about this emerging outbreak of unexplained pneumonia. And, you know, at that point we stayed on top of it. And of course, my first thought was, could this be a flu situation with an emerging flu pandemic, or was it just another coronavirus? You know, right after 9/11, I spent three years as a special advisor to then Secretary of Health and Human Services Tommy Thompson.(15:06):I split my time between the University of Minnesota and the government. And it was during that time that I actually participated actively in the first SARS outbreak, with regard to the US involvement. And then in 2012, I had been serving as an advisor to the royal family of the United Arab Emirates, and when MERS first emerged on the Arabian Peninsula, I went over and worked on that issue. And then in 2015, when MERS exploded, literally in Samsung Medical Center in South Korea, I was asked to come, and I went over to Seoul and helped with that outbreak. So I had a pretty good feeling, I thought, for coronaviruses. And of course, influenza is something that I had been working on for 40 years. And so initially I was saying, boy, I hope this is a coronavirus, because we know how to control that.(15:55):It's not that infectious, even though the case fatality rates may be between 15 and 35%. Well, as you know, by the end of that first week in January, we had the data saying, yep, this was a coronavirus. But it was at that time that we had contacts in Wuhan and in Hong Kong, and we were basically getting information out. And then of course, following up with our colleagues in Singapore, the old flu network, which was suggesting that this was a very different kind of coronavirus: there appeared to be substantial transmission among those who were asymptomatic as well as those who were symptomatic.
And as we saw more and more transmission outside of Wuhan, it reminded me a great deal of what we saw in 2009 with H1N1, where in the month after it was first discovered in Mexico, it was subsequently found in 128 different countries, in just one month.(16:52):And it looked like this is what this coronavirus was doing. And so on January 20th, our center actually put out a statement saying, get with it, world. This is the next pandemic. It is a coronavirus acting differently than MERS and SARS; my worst fear was that the case fatality rate might be as high as theirs. Well, over the course of the next few weeks, we got more and better information about what was going on. And there was just such denial at the time. In fact, I went to JAMA, to the editors, and said, can I do a Perspectives piece on why the world has to wake up quickly? This is going to cause a pandemic. They not only turned me down, but the following week they ran a one-page cartoon in JAMA, with one column looking at the coronavirus and the other column looking at influenza.(17:43):And they came to the conclusion: don't get distracted by this coronavirus thing, it's about flu. Wow. And so I think at that time, there was such denial going on. So when I first made this statement, I actually did it by kind of a seat-of-the-pants estimate. You know, I'm not a black box guy. In fact, I find black boxes often sort of impress the hell outta you with their sophistication, and what they don't tell you is they have no clue what they're talking about. So I just basically did a back-of-the-envelope calculation, not even realizing a vaccine might or might not come into place.
So, you know, I have to be honest and say it was in some ways luck.Eric (18:29):Yeah, I don’t know, I think it was a lot of wisdom mixed with that.Michael (18:33):You know, I want to add one other thing though, Eric, because the thing that I will most remember from this pandemic is not all the hate mail that I received from so many as the days went on, or even the death threats. It was the feedback I got in that month of March from colleagues who thought that I was over the top, that I had finally, you know, scared the hell out of people one too many times, kind of thing. And it was amazing to me: as much as we're critical of the politicians and what happened, and we surely should be, there were many of our colleagues who were equally in a state of denial, not wanting to believe that this was really happening.Eric (19:15):Oh, absolutely.Michael (19:16):Yes. So I think that's what I'll remember. It's one thing to have some anonymous person tell you, you know, that you should be dead. It's another thing to have one of your colleagues say you're irresponsible.Organizing the “Party Planning Group”Eric (19:29):Yeah. You're not kidding there. And you know, especially with you, because, you know, everybody who's listening has seen you innumerable times on CNN, MSNBC, Meet the Press, and various news networks, and they know you come across with humility, unlike many other experts; you say, we just don't know. And you're also the master of metaphors, as far as I can tell, like the eye of the hurricane and so many things like that. But the other thing I wanted to get into historically is something that brought us together that, although it's been written about, a lot of people still don't know. So back in the summer of 2020, you said, I'm gonna organize a group, a group that eventually became known as the party planning group, that meets every Friday morning for an hour or so.
And we talk about, well, the pandemic and related matters. So you again had this idea to bring this group together. Could you talk about that? Because it's amazing: here it is, you know, two and a half years later, we met today, and we're continuing to meet. Tell everybody about that group, how you first saw the need for it, and perhaps, you know, what you think it's accomplished.Michael (20:43):Well, first of all, let me start out with two caveats. Number one, and thank you for your comments, but I realize the older I get, the more I have to learn. And so I want to surround myself with people who can teach me. Okay. The second thing is that humility should be considered a requirement today of trying to deal with pandemic viruses, because we have to acknowledge we don't know what the next major curve ball is going to be. You know, I can remember a light bulb moment for me early in January of 2021, when vaccines were now flowing. But recall (you and I together wrote a piece on this) the Alpha variant emerging out of Europe. And remember, up until that time, we kept being told that, well, these variants, the subvariants, are really just nothing more than rings on a tree.(21:36):They're just telling you how old the virus is. And with Alpha, we had clear and compelling evidence: oh no, it had a lot to do with functionality, how infectious it was, et cetera, and that could very well change the complexion. And I remember very well being on Meet the Press in January of 2021 and saying I thought the darkest days of the pandemic were still ahead of us, because of the number of people who were not vaccinated and the fact that this virus was going to continue to change. And of course, again, I caught a lot of heat for that; Nate Silver gutted me in public media for being irresponsible. And of course, as you know, the vast majority of deaths occurred after that time. Right, right.
But now to back up to your point, and why I think I was able to learn some of the things I did: in the summer of 2020, a colleague of mine, very near and dear, came to me and said that there is someone at a senior level of government who right now is making some major decisions but really has no one around him he knows he can trust.(22:42):Would you ever talk to him and provide what information you can, to kind of give him a sense, off the record? Well, I thought, you know, actually it would be better to have a team of people who could be more helpful. I'm one voice, and I surely don't proclaim to have the only voice. So I literally went to my, you might say, magical list: who are the people that I most respected and admired, and who did I trust? And trust was huge. And as you know, you're on that list; it's now been publicly stated. Peggy Hamburg, Peter Hotez, Bruce Gellin, Penny Heaton, and Ruth Berkelman. And you know, we meet every Friday, and our discussions are incredibly, incredibly thoughtful.(23:39):They are honest, and there's a trust in that group. You know, what we share stays there, and I so appreciate that. And so from that perspective, that will continue, and I will continue to learn from all of you. And I think if there was any one lesson that came out of this pandemic, it's the value of having that kind of collective brain trust that can come in and ask questions. Many times we didn't have the answers, but we surely got the questions out, which then gave us opportunities to learn the answers. And the fact that we could do it, and that you and I both knew our comments were gonna stay within the context of that group.Eric (24:20):Yeah. And we had to keep it anonymous with this name of party planning group, just because we didn't want people to know what this was.Michael (24:29):Yeah. At that time, it was interesting.
I have to tell you, my administrative assistant was out one day during that early time period, and someone else was sitting in, and they saw in my schedule an hour blocked off for party planning. And it was right at the holiday season. So an assumption was made in our center that I was just planning this big holiday party and that nobody knew about it yet; it just said party planning. And that rumor got spread throughout the entire center. And I had to self-correct, you might say, and explain: we can still have a party, but that wasn't what this was about.Eric (25:07):Yeah. Well, it's been an amazing ride, and it continues. But, you know, we were there from well before there were vaccines all the way through to the current time. And you can imagine all the different things that have been happening in the background and that we were discussing: exchanging ideas, communicating with the public health agencies, the White House, and all sorts of other issues along the way. So it's been a privilege for me, not just to have this conversation, but over these last two and a half years to work with you on that, and to learn from you and our colleagues. It’s been extraordinary. Well, this has been so much fun for me, Mike. I'm just struck by your ability to weave together, you know, the wisdom you've drawn from all these experiences over four decades of working in this space with the ability to be humble and know that, you know, you're not the smartest guy in the room.(26:07):No one's the smartest guy in the room. You want to have other people, wherever they come from, like for example, when you put together the roadmap and brought together, you know, people from all over the world to think, to exchange ideas about how we can do better for this and future pandemics, because undoubtedly we're gonna be facing those.
So maybe, as we wrap up, could you just give us your sense? There's obviously climate change, there's all the things that have been done to the environment, and this pandemic, which we all wish to be put aside, though the virus will be here for many years to come. But what are your expectations, since unfortunately your predictions have come too close to real, about the next pandemic? Will it be influenza? Will it be in the next few years? What are your thoughts about where we're headed?Michael (27:05):Well, you know, Eric, let me just start out and say thank you for your very kind comments. I think one of the things I learned at CIDRAP a long time ago is in the very name, the Center for Infectious Disease Research and Policy. And I knew very early in my career that well-designed, well-conducted, even very important research means nothing if you can't translate it into active policy that makes a difference. At the same time, policy that's not informed by good research can be dangerous. And so I think what you're highlighting here is how we try to bring groups of individuals together to merge research and policy. And you just talked about the coronavirus vaccine roadmap, where 54 of the world's leading experts, including you, participated. And we developed a very, very specific outline for a roadmap of what needs to be done to get us to new and better coronavirus vaccines, ones that basically will hopefully be broadly protective against any future coronavirus activity that occurs.(28:12):So I can never say enough about the ability to bring stakeholders together. Collective wisdom will win every time against any one person's wisdom. And I think that's one thing I learned in terms of where we're going. You know, I have to just think back in human history. When you think about the fact that in 1900, average life expectancy in this country was about 43 years, and today, even with the pandemic, it's about 76, 77 years.
For every three days we've lived in the last century and twenty-plus years, we've gained one day of life expectancy; nothing like that happened across the 80,000 generations that take us all the way back to the caves. And I think what we haven't fully understood is that we lived in a world where infectious diseases had a major impact on why we didn't live to a median life expectancy up into the seventies, but rather into the forties.(29:13):And I think what we're facing today is a world that's moving us back toward those numbers, not forward. For example, look at just the situation right now of world population: 8 billion people on the face of the earth. Look at, you know, what's happening with megacities around the world. You know, I remember early in the days of HIV/AIDS making a trip to Kinshasa, which of course is no longer the large rural city it once was; today it's 18 million people. When you look at the median age of Africa, it's 19 years. When you look at what we've done with human population and how we have reached out to every corner of the world seeking food, bush meat, et cetera. You know, Ebola has likely been a problem for many, many decades. But when it was in very rural, isolated villages of Africa, you know, if 25 or 30 people got infected and died, no one even knew about it.(30:19):Now, today, with the urbanization of Africa, you can see widespread transmission quickly in these areas. And this is true for all parts of the world. Think about avian influenza and the need today to feed 8 billion people. We have relied on birds, which as an animal species are the fastest converters of energy to protein on earth. And so look at the billions of birds we're raising, which now provide a new reservoir for flu viruses. I can go down the list: look at how climate change is shifting precipitation levels and temperatures, now moving mosquito populations to places they didn't exist before.
And then add transportation in. Think about all of history up to World War II: the four serotypes of Dengue virus existed in four different regions of the world. It wasn't until World War II that they spread, and now all four exist virtually everywhere any one of them exists.(31:19):Why do we have Dengue hemorrhagic fever? It's because of that. And so I think the final piece I would say is, yes, a pandemic is going to happen again. We are going to see more of what we've just experienced. And frankly, it could be a lot worse. We didn't see 15 to 35% mortality rates like you might see with SARS or MERS, but instead we saw just high transmission levels. There is nothing to stop the next coronavirus from being transmitted like SARS-CoV-2 and killing like MERS or SARS. And so I think we have to be mindful of that. And the final thing, I would just paint this as our climate change issue in infectious diseases: antimicrobial resistance. It's amorphous; people all know it's there, but not what to do about it. And we are watching ourselves literally devolve back into a pre-antibiotic era.(32:14):Meaning that, you know, before our grandparents were around, people often died from common infections, you know, cuts, bruises, et cetera, because they didn't have antibiotics. Look what's happened since that time; antibiotics have played a huge role. And now we're going to watch that be lost. And then the last one, I just have to say: misinformation and disinformation on vaccines is huge. I think we're going to continue to see increasing challenges with populations around the world no longer willing to take childhood immunizations, or even other adult immunizations, just because of the disinformation. So when you add that all up, it's job security, unfortunately, for a lot of us. And that's a sad commentary.
It's real, yeah.Eric (32:58):Well, and as you pointed out so well just before we got started, AI has the potential to amplify that misinformation and disinformation to unprecedented levels. And it's already horrific as it is.Michael (33:13):You know, it's bad enough that I can just say there are times I read articles in newspapers, and I'll get halfway through a quote and say, who the hell said that? According to Osterholm? And of course, what?Eric (33:28):Right. There you go.Michael (33:30):What are we going to do when you and I end up on these bots? There's Eric Topol saying to the world, you know, I was wrong, vaccines aren't any good. And people are going to see that, and it's not you. Right?Eric (33:44):Right.Michael (33:44):That concerns me a lot.Eric (33:46):No, it was deep fakes, and now it's going to another, ultra level of that. It's pretty scary, actually. So with all the things that we've been talking about, whether it's a potent virus or a technology like generative AI, we've always got to look at both sides and be prudent, to put it mildly. Well, this has been fun, and I can't thank you enough. I would like to talk to you all day, but we got a lot in there in a half hour, and I know we'll get a lot of interactions from the folks that are listening. Mike, thanks.Michael (34:25):You're a gift to all of us. Thank you.Eric (34:28):Oh, thank you. That's much too kind. Get full access to Ground Truths at erictopol.substack.com/subscribe