
Tyler Cowen: The Prototypic Polymath
Ground Truths
Intro
This chapter features an engaging conversation with Tyler Cowen, a prominent polymath, exploring his extensive literary contributions and the impact of technology on reading habits. The discussion covers writing trends, collaborations, and the evolving landscape of knowledge sharing in contemporary society.
Audio file, also on Apple and Spotify
Tyler Cowen, Ph.D., is the Holbert L. Harris Professor of Economics at George Mason University. He is the author of 17 books, most recently Talent: How to Identify Energizers, Creatives, and Winners Around the World. Tyler has been recognized as one of the most influential economists of the past decade. He initiated and directs the philanthropic project Emergent Ventures, writes the blog Marginal Revolution, hosts the podcast Conversations with Tyler, and also writes columns for The Free Press. He is writing a new book (and perhaps his last) on Mentors.
“Maybe AGI [Artificial General Intelligence] is like porn — I know it when I see it. And I’ve seen it.”—Tyler Cowen
Our conversation on acquiring information, A.I., A.G.I., the NIH, the assault on science, the role of doctors in the A.I. era, the meaning of life, books of the future, and much more.
Transcript with links
Eric Topol (00:06):
Well, hello. This is Eric Topol with Ground Truths, and I am really thrilled today to have the chance to have a conversation with Tyler Cowen, who is, when you look up polymath in the dictionary, you might see a picture of him. He is into everything. And recently in the Economist's 1843 magazine, John Phipps wrote a great profile, "The Man Who Wants to Know Everything." And actually, I think there's a lot to that.
Tyler Cowen (00:36):
That's why we need longevity work, right?
Eric Topol (00:39):
Right. So he's written a number of books. How many books now, Tyler?
Tyler Cowen:
17, I'm not sure.
Eric Topol:
Only 17? And he also has a blog that's been going on for over 20 years, Marginal Revolution that he does with Alex Tabarrok.
Tyler Cowen (00:57):
Correct.
Eric Topol (00:57):
And yeah, and then Conversations with Tyler, a podcast, which I think an awful lot of people are tuned into. So with that, I'm just thrilled to get a chance to talk with you because I used to think I read a lot, but then I learned about you.
“Cowen calls himself “hyperlexic”. On a good day, he claims to read four or five books. Secretly, I timed him at 30 seconds per page reading a dense tract by Martin Luther.”—John Phipps, The Economist’s 1843
Tyler Cowen:
I've been reading more from the AIs lately and less from books. So I'll get one good book and ask the AI a lot of questions.
Eric Topol (01:24):
Yeah. Well, do you use NotebookLM for that?
Tyler Cowen (01:28):
No, just o3 from OpenAI at the moment, but a lot of the models are very good. Claude, there's others.
Eric Topol (01:35):
Yeah, yeah. No, I see how that's a whole different way to interrogate a book and it's great. And in fact, that gets me to a topic I was going to get to later, but I'll do it now. You're soon to start, or you have already started, writing for the Free Press with Bari Weiss.
Tyler Cowen (01:54):
That’s right, yes. I have a piece coming out later today. It's been about two weeks. It's been great so far.
“Tyler Cowen has a mind unlike any I've ever encountered. In a single conversation, it’s not at all unusual for him to toggle between DeepSeek, GLP-1s, Haitian art, sacred Tibetan music, his favorite Thai spot in L.A., and LeBron James”—Bari Weiss
Eric Topol:
Yeah, so that's interesting. I hadn't heard of it until I saw the announcement from Bari, and I thought what was great about it is how she introduced it. She said, “Tyler Cowen has a mind unlike any I've ever encountered. In a single conversation, it’s not at all unusual for him to toggle between DeepSeek, GLP-1s, Haitian art, sacred Tibetan music, his favorite Thai spot in L.A., and LeBron James.” Now who could do that, right? So I thought, well, you know what? I need independent confirmation of that, that is, of being a polymath. And then I saw Patrick Collison, who I know at Stripe and Arc Institute, said, “You can have a specific and detailed discussion with him about 17th-century Irish economic thinkers, or trends in African music or the history of nominal GDP targeting. I don't know anyone who can engage in so many domains at the depth he does.” So you're an information acquirer, and one of the books you wrote, I love the title Infovore.
Tyler Cowen (03:09):
The Age of the Infovore, that’s right.
Eric Topol (03:11):
I mean, have people been using that term because you are emblematic of it?
“You can have a specific and detailed discussion with him about 17th-century Irish economic thinkers, or trends in African music or the history of nominal GDP targeting. I don't know anyone who can engage in so many domains at the depth he does.”—Patrick Collison
Tyler Cowen:
It was used on the internet at some obscure site, and I saw it and I fell in love with that word, and I thought I should try to popularize it. It doesn't come from me, but I think I am the popularizer of it.
Eric Topol:
Yeah, well, if anybody is ingesting more information and being able to work with it, it's you. That's what I didn't realize about you, Tyler, is restaurants and basketball and all these other fine arts, very impressive. Now, one of the topics I wanted to get into with you is, I guess, related to a topic you've written about a fair amount, which is the great stagnation, and right now we're seeing issues like an attack on science. And in the past, you've written about how you want to raise the social status of scientists. So how do you see this current, what I would even characterize as a frontal assault on science?
Tyler Cowen (04:16):
Well, I'm very worried about current Trump administration policies. They change so frequently and so unpredictably, it's a little hard to even describe what they always are. So in that sense, it's a little hard to criticize them, but I think they're scaring away talent. They might scare away funding, and especially in the biomedical sciences, the fixed costs behind a lot of lab work and clinical trials are so high that if you scare money away, it does not come back very readily or very quickly. So I think the problem is biggest perhaps for a lot of the biomedical sciences. I do think a lot of reform there has been needed, and I hope somehow the Trump policies evolve to that sort of reform. I think the NIH has become too hidebound and far too conservative, and they take too long to give grants, and I don't like how the overhead system has been done. So there's plenty of room for improvement, but I don't see so far, at least, that the efforts have been constructive. They've been mostly destructive.
Eric Topol (05:18):
Yeah, I totally agree. Rather than creative destruction, it's just destruction, and it's unfortunate because it seems haphazard and reckless, to me at least. We of course, like so many institutions, rely on NIH funding for the work, but I agree that reform is fine as long as it's done in a very thought-out, careful way, so we can eke out the most productivity for the best investment. Now along with that, you started Emergent Ventures, where you're funding young talent.
Tyler Cowen (05:57):
That's right. That's a philanthropic fund. And we now have slightly over 1000 winners. They're not all young, I'd say they're mostly young and a great number of them want to go into the biomedical sciences or have done so. And this is part of what made me realize what an incredible influx of talent we're seeing into those areas. I'm not sure this is widely appreciated by the world. I'm sure you see it. I also see how much of that talent actually is coming from Canada, from Ontario in particular, and I've just become far more optimistic about computational biology and progress in biology and medical cures, fixes, whatever you want to call it, extending lives. 10 years ago, I was like, yeah, who knows? A lot of things looked pretty stuck. Then we had a number of years where life expectancy was falling, and now I think we're on the verge of a true golden age.
Eric Topol (06:52):
I couldn't agree with you more on that. And I know some of the people that you funded, like Anne Wyllie, who developed a saliva test for Covid out of Yale. But as you say, there are so many great young and maybe not so young scientists all over, Canada being one great reservoir. And now of course I'm worried that we're seeing emigration rather than more immigration of this talent. Any thoughts about that?
Tyler Cowen (07:21):
Well, the good news is this, I'm in contact with young people almost every day, often from other countries. They still want to come to the United States. I would say I sign an O-1 letter for someone about once a week, and at least not yet has the magic been dissipated. So I'm less pessimistic than some people are, but I absolutely do see the dangers. We're just the biggest market, the freest place; we have by far the most ambitious people. I think that's actually the most significant factor. And young people sense that, and they just want to come here, and there's not really another place they can go that will fit them.
Eric Topol (08:04):
Yeah, I mean one of the things, as you've probably noted, is that there are these new forces shouldering a big part of the load. In fact, Patrick Collison with Arc Institute, Chan Zuckerberg with their institute, and others like that, along with the work you're doing with Emergent Ventures, are supporting important projects and talent. And if this whole freefall in NIH funding and other agency funding continues, it looks like we may have to rely more on that, especially if we're going to attract some talent from outside. I don't know how else we're going to make it. You're absolutely right about how we are such a great destination, with great collaborations and mentors and all that history, but I'm worried that it could be in kind of a threatened mode, if you will.
Tyler Cowen (08:59):
I hope AI lowers costs. As you probably know at Arc, they had Greg Brockman come in for some number of months and he's one of the people, well, he helped build up Stripe, but he also was highly significant in OpenAI behind the GPT-4 model. And to have Greg Brockman at your institute doing AI for what, six months, that's a massive acceleration that actually no university had the wisdom to do, and Arc did. So I think we're seeing just more entrepreneurial thinking in the area. There's still this problem of bottlenecks. So let's say AI is great for drug discovery as it may be. Well, clinical trials then become a bigger bottleneck. The FDA becomes a bigger bottleneck. So rapid improvement in only one area while great is actually not good enough.
Eric Topol (09:46):
Yeah, I'm glad you brought up that effect in Arc Institute because we both know Patrick Hsu, who's a brilliant young guy who works there and has published some incredible large language models applied to life science in recent months, and it is impressive how they used AI in almost a singular way as compared to as you said, many other leading institutions. So that is I think, a really important thing to emphasize.
Tyler Cowen (10:18):
Arc can move very quickly. I think that's not really appreciated. So if Patrick Hsu, Silvana Konermann, or Patrick Collison decide something ought to be purchased or set in motion, it can happen in less than a day. And it does happen basically immediately. And it's not only that it's quicker; I think when you have quicker decisions, they're better, and it's infectious to the people you're working with. And there's an understanding that the core environment is not a bureaucratic one. So it has a kind of multiplier effect through the institution.
Eric Topol (10:54):
Yeah, I totally agree with you. It's always been a philosophy in your mind to get stuff done, get s**t done, whatever you want to call it. They're getting it done. And that's what's so impressive. And not just that they've got some new funds available, but rather they're executing in a way that's parallel to the way the world's evolving on the AI front, which is, I think, faster than most people would ever have expected or anticipated. Now that gets me to a post you had on Marginal Revolution just last week, which, one of the things I love about Marginal Revolution is you don't have to read a whole lot of stuff. You just give the bullets, the juice, if you will. Here you wrote, “o3 and AGI, is April 16th AGI day?” And everybody's talking about whether artificial general intelligence is here, or whether it's going to be here in five years or seven years.
Eric Topol (11:50):
It certainly seems to be getting closer. And in this you wrote, “I think it is AGI, seriously. Try asking it lots of questions, and then ask yourself: just how much smarter was I expecting AGI to be? As I’ve argued in the past, AGI, however you define it, is not much of a social event per se. It still will take us a long time to use it properly. Benchmarks, benchmarks, blah blah blah. Maybe AGI is like porn — I know it when I see it. And I’ve seen it.” I thought that was really well done, Tyler. Anything you want to amplify on that?
Tyler Cowen (12:29):
Look, if I ask it economics questions, and I'm trained as an economist, it beats me. So I don't care if other people don't call it AGI, but one of the original definitions of AGI was that it would beat most experts most of the time on most matters, say 90% or above, and we're there. So people keep on shifting the goalposts. They'll say, well, sometimes it hallucinates, or it's not very good at playing tic-tac-toe, or there's always another complaint. Those are not irrelevant, but I'll just say, sit down, have someone write out a test of 20 questions, you're a PhD, you take the test, let o3 take the test, then have someone grade it, see how you've done, then form your opinion. That's my suggestion.
Eric Topol (13:16):
I think it's pretty practical. I mean, enough with the Turing test; we've had the Turing test for decades, and I think the way you described it is a little more practical and meaningful these days. But its capabilities, to me at least, are still beyond belief, what we can eke out of current models, not just the large language models but the large reasoning models. And so, it's just gotten to a point where it's accelerating, and every week there are so many others; the competition is good for taking it to the next level.
Tyler Cowen (13:50):
It can do tasks and it self-improves. So o3-pro will be out in a few weeks. It may be out by the time you're hearing this. I think that's obviously going to be better than just pure o3. And then GPT-5, people have said it will be this summer. So every few months there are major advances, and there's no sign of those stopping.
Eric Topol (14:12):
Absolutely. Now, of course, you've been likened to a GPT, “Treat Tyler like a really good GPT,” that is, because you're this information meister. What do you ask the man who you can ask anything? That's kind of what we have when we can go to any one of these sites and start our prompts, whatever. So it's kind of funny; in some ways you might've anticipated this with your quest for knowledge.
Tyler Cowen (14:44):
Well, I feel I understand the thing better than most people do for that reason, but it's not entirely encouraging to me personally, selfishly to be described that way, whether or not it's accurate. It just means I have a lot more new competition.
Eric Topol (14:59):
Well, I love this one: "I'm not very interested in the meaning of life, but I'm very interested in collecting information on what other people think is the meaning of life. And it's not entirely a joke." And that's also what you wrote about in the Free Press piece, that most of the things that are going to be written are going to be written better by AI in the media, and that we should be writing books for the AI that's going to ingest them. How do you see this human-AI interface growing or moving?
Tyler Cowen (15:30):
The AI is your smartest reader. It's your most sympathetic reader. It will remember what you tell it. So I think humans should sit down and ask, what does the AI need to know? And also, what is it that I know that's not on the historical record anywhere, that's not just repetition if I put it down, say, on the internet? So there's no point in writing repetitions anymore, because the AI already knows those things. So the value of what you'd call, broadly, memoir, biography, anecdote, you could say secrets, is now much higher. And the value of repeating basic truths, which by the way, I love as an economist, to be clear, like free trade, tariffs are usually bad, those are basic truths. But just repeating them, people will be going to the AI for that, and saying it again won't make the AI any better. So everything you write or podcast, you should have this point in mind.
Eric Topol (16:26):
So you obviously have, all throughout your life, been reading lots of books. Will your practice still be to do the primary reading of the book, or will you then go to o3 or whatever, or the other way around?
Tyler Cowen (16:42):
I've become fussier about my reading. So I'll pick up a book and start and then start asking o3 or other models questions about the book. So it's like I get a customized version of the book I want, but I'm also reading somewhat more fiction. Now, AI might in time become very good at fiction, but we're not there now. So fiction is more special. It's becoming more human, and I should read more of it, and I'm doing that.
Eric Topol (17:10):
Yeah, no, that's great. Now, over the weekend, there was a lot of hubbub about Bill Gates saying that we won't need doctors in the next 10 years because of AI. What are your thoughts about that?
Tyler Cowen (17:22):
Well, that's wrong as stated, but he may have put it in a more complex way. He's a very smart guy of course. AI already does better diagnosis on humans than medical doctors. Not by a lot, but by somewhat. And that's free and that's great, but if you need brain surgery for some while, you still need the human doctor. So human doctors will need to adjust. And if someone imagines that at some point robots do the brain surgery better, well fine. But I'm not convinced that's within the next 10 years. That would surprise me.
Eric Topol (17:55):
So to that point, recently a colleague of mine wrote an op-ed in the New York Times about six studies comparing AI alone versus doctors with AI. And in all six studies, the AI did better than the doctors who had access to AI. Now, you could interpret that as, well, they don't know how to use AI, or they have automation bias, or that it is simply true. What do you think?
Tyler Cowen (18:27):
It's probably true, but I would add as an interpretation, the value of meta-rationality has gone up. So to date, we have not selected doctors for their ability to work with AI, obviously, but some doctors have the personal quality, it's quite distinct from intelligence, of just knowing when they should defer to someone or something else, and those doctors and researchers will become much more valuable. They're sufficiently modest to defer to the AI and have some judgment as to when they should do that. That's now a super important quality. Over time, I hope our doctors have much more of that and are selected on that basis, and then that result won't be true anymore.
Eric Topol (19:07):
So obviously you would qualify. There's a spectrum here: the AI enthusiasts, you and I are both in that group, then there's the doomsayers, and there's a middle ground, of course, where people are trying to find the right balance. Are there concerns about AI, anything about how it's moving forward, that you're worried about?
Tyler Cowen (19:39):
Well, any change that big, one should have very real concerns. Maybe our biggest concern is that we're not sure what our biggest concern should be. One simple effect that I see coming soon is it will devalue the status of a lot of our intellectuals and what's called our chattering class. A lot of it's people like us; we won't seem so impressive anymore. Now, that's not the end of the world for everyone as a whole, but if you ask, what does it mean for society to have the status of its elites so punctured? At a time when we have some, I would say, very negative forces attacking those elites in other ways, that to me is very concerning.
Eric Topol (20:25):
Although we've seen what's happening with the current administration with respect to the tariffs, and we've already talked about the effects on science funding, do you see this as a short-term hit that we will eventually get past? Do you see them selectively supporting AI efforts and finding the right balance with the tech companies, to support them and the competition that exists globally with China and whatnot? How are we going to move forward in what some people consider pretty dark times, which is of course so seemingly at odds with the most extraordinary times of human progress with AI?
Tyler Cowen (21:16):
Well, the Trump people are very pro AI. I think that's one of the good things about the administration, very much pro AI, and more interested than the Biden people were. The Biden people, you could say they were interested, but they feared it would destroy the whole world, and they wanted to choke and throttle it in a variety of ways. So I think there's a great number of issues where the Trump people have gone very badly wrong, but at least so far AI's not one of them. I'd give them like an A or A+ there so far. We'll see, right?
Eric Topol (21:44):
Yeah. As you've seen, we still have some of these companies in some kind of a hot seat, like Meta and Google regarding their monopolies, and we saw how some of the tech leaders, not all of them, became very supportive, which potentially you could interpret as being in their own interests. They wanted to give money to the inauguration and also curry some political favor. But I haven't yet seen the commitment to support AI, to talk about a golden age for the United States, because so much of this is really centered here, along with some of the great minds that are helping to drive the AI and these models. But I wonder if there's more that can be done so that we continue to lead in this space.
Tyler Cowen (22:45):
There's a number of issues here. The first is Trump administration policy toward the FTC, which I think has not been wonderful. They appointed someone who seems like they would be more appropriate for a Democratic or more left-leaning administration. But if you look at the people in the Office of Science and Technology Policy in the White House, they're excellent, and there's always different forces in any administration. But again, so far so good. I don't think they should continue the antitrust suit against Google that is looking like it's going against Google, but that's not really the Trump administration, that's the judiciary, and that's been underway for quite some while. So with Trump, it's always very hard to predict. The lack of predictability, I would say, is itself a big problem. But again, if you're looking for one area where it's good, that would be my pick.
Eric Topol (23:35):
Yeah, well, I would agree with that for sure. I just want to see more evidence that we capitalize on the opportunities here and don't let down. I mean, do you think outlawing selling the Nvidia chips to China is the way to do this? It seems like that hurts Nvidia and isn't China going to get whatever they want anyway?
Tyler Cowen (24:02):
That restriction, I favored when it was put in. I'm now of the view that it has not proved useful. And if you look at how many of those chips get sold, say to Malaysia, which is not a top AI performer, one strongly suspects, they end up going to China. China is incentivized to develop its own high-quality chips and be fully independent of Western supply lines. So I think it's not worked out well.
Eric Topol (24:29):
Yeah, no, I see that. Since you've written so much about this, it's good to get your views, because I share those views and you know a lot more about this than I would. But it seems like, whether it's Malaysia or other channels, they're going to get the Blackwell chips that they want. And it seems like this is almost like during Covid, how you would close down foreign travel. It's like it doesn't really work that well. There's a big world out there, right?
Tyler Cowen (25:01):
It’s an interesting question. What kind of timing do you want for when both America and China get super powerful AI? And I don't think you actually want only America to have it. It's a bit like nuclear weapons, but you don't want China to have it first. So you want some kind of staggered sequence where we're always a bit ahead of them, but they also maybe are constraining us a bit. I hope we're on track to get that, but I really, really don't want China to have it first.
Eric Topol (25:31):
Yeah, I mean I think, as you're aptly pointing out, it's a healthy, managed competition, and if we can keep that lead, it's good for both and it's good for the world, ideally. But getting back, is there anything you're worried about in AI? I mean, because I know you're upbeat about its net effect, and we've already talked about the amazing potential for efficiency and productivity. It basically upends a lot of economic models of the past, right?
Tyler Cowen (26:04):
Yes. I think it changes or will change so many parts of life. Again, it's a bit difficult to specify worries, but how we think of ourselves as humans, how we think of our gods, our religions, I feel all that will be different. If you imagine trying to predict the effects of the printing press after Gutenberg, that would've been nearly impossible to do. I think we're all very glad we got the printing press, but you would not say all of it went well. It's not that you would blame the printing press for those subsequent wars, but it was disruptive to the earlier political equilibrium. I think we need to take great care to do it better this time. AI in different forms will be weaponized. There's great potential for destruction there and evil people will use it. So of course, we need to be very much concerned.
Eric Topol (26:54):
And obviously many of these companies have efforts to try to anticipate that. That is, alignment and various safety-type parallel efforts, like Ilya did when he moved out of OpenAI, and others. Is that an important part of each of these big efforts, whether it's OpenAI, Google, Anthropic, or the rest of them, that they put in resources to keep things from going off the tracks?
Tyler Cowen (27:34):
That's good and it's important, but I think it's also of limited value, because the more we learn how to control AI systems directly, the more the bad guys will have similar lessons, and they will possibly use alignment to make their AIs bad and worse, so that they obey them. So yeah, I'd rather the good guys make progress on what they're trying to do, but I don't think it's going to solve the problem. It creates new problems as well.
Eric Topol (28:04):
So because of AI, do you think you'll write any more books in the future?
Tyler Cowen (28:11):
I'm writing a book right now. I suspect it will be my last. That book's title is Mentors. It's about how to mentor individuals and what the social sciences know about mentoring. My view is that even if the AI could write the book better than I can, people actually want to read a book like that from a human. I could be wrong, but I think in the future we should restrict ourselves to books that are done better by a human. I will write every day for the rest of my life, but I'm not sure that books make sense at the current moment.
Eric Topol (28:41):
Yeah, that's a really important point, and I understand that completely. Now, when you write for the Free Press, which will be besides the Conversations with Tyler podcast and the Marginal Revolution, what kind of things will you be writing about in the Free Press?
Tyler Cowen (28:56):
Well, I just submitted a piece. It's a defense of elitism. So the problem with our elites is that they have not been elitist enough and have not adhered strictly enough to the scientific method. So it's a very simple point. I think to you it would be pretty obvious, but it needs to be said. It's not out there enough in the debate that yes, sometimes the elites have truly and badly let us down, but the answer is not to reject elitism per se, but to impose higher elitist standards on our sometimes supposed elites. So that's the piece I just sent in. It's coming out soon and should be out by the time anyone hears this.
Eric Topol (29:33):
Well, I look forward to reading that. So besides being a polymath, you might be my favorite polymath, Tyler, you didn't know that. Also, you're a futurist, because when you have that much information ingested, and now of course with the super performance of AI to help, it really does help to try to predict where we're headed. Have I missed anything in this short conversation that you think we should touch on?
Tyler Cowen (30:07):
Well, I'll touch on a great interest of yours. I like your new book very much. I think over the course of the next 40 years working with AI, we will beat back essentially every malady that kills people. It doesn't mean you live forever. Many, many more people will simply die of what we now call old age. There's different theories as to what that means. I don't have a lot of expertise in that, but the actual things people are dying from will be greatly postponed. And if you have a kid today to think that kid might expect to live to be 97 or even older, that to me is extremely plausible.
Tyler Cowen (30:45):
I won't be around to see it, but that's a phenomenal development for human beings.
Eric Topol (30:50):
I share that with you. I'm sad that I won't be around to see it, but exactly as you've outlined, we're going to be able to have a huge impact, particularly on the age-related diseases, but also, as you touched on, the genetic diseases with genome editing, and many other abilities that we have now, like controlling the immune system, which is a central part of how we get into trouble with diseases. So I couldn't agree with you more, and that's a really good note to finish on, because on so many of the things that we have discussed today, we share similar views, and we come at it from totally different worlds: the economist who has a very wide-angle lens, and I guess you'd say the physician who has a more narrow lens aperture. But thank you so much, Tyler, for joining me today.
Tyler Cowen (31:48):
My pleasure. Let me close by telling you some good news. I have AI friends who think you and I, I'm 63, will be around to see that. I don't agree with them, they don't convince me, but there are smart people who think the benefits from this will come quite soon.
Eric Topol (32:03):
I sure hope they're right.
Tyler Cowen (32:05):
Yes.
*******************************************
SUPER AGERS, my new book, was released on May 6th. It’s about extending our healthspan, and I introduce 2 of my patients (one below, Mrs. L.R.) as exemplars to learn from. This potential to prevent the 3 major age-related diseases would not be possible without the jumps in the science of aging and multimodal A.I.
My op-ed preview of the book was published in The NY Times last week. Here’s a gift link. I did a podcast with Mel Robbins on the book here. Here’s my publisher’s (Simon and Schuster) site for the book. If you’re interested in the audio book, I am the reader (first time I have done this, quite an experience!)
There have been many pieces written about it. Here’s a gift link to the one in the Wall Street Journal and here for the one in the New York Times.
**********************
Thanks for reading and subscribing to Ground Truths.
If you found this interesting please share it!
That makes the work involved in putting these together especially worthwhile.
All content on Ground Truths— newsletters, analyses, and podcasts—is free, open-access.
Paid subscriptions are voluntary and all proceeds from them go to support Scripps Research. They do allow for posting comments and questions, which I do my best to respond to. Please don't hesitate to post comments and give me feedback. Many thanks to those who have contributed—they have greatly helped fund our summer internship programs for the past two years.
Get full access to Ground Truths at erictopol.substack.com/subscribe