What are the problems facing academic journals today? What changes to the system could be made to address them? How could being more open about studies that aren’t successful actually be a success strategy overall?
Faye Flam is a science and medical journalist, a columnist for Bloomberg, host of the podcast Follow the Science, and the author of The Score: The Science of the Male Sex Drive.
Greg and Faye discuss the importance and challenges of science journalism. Their conversation touches on the role of science journalists in translating and evaluating scientific data, the replication crisis, the influence of fraudulent research, the dynamics of public trust in science, and the impact of the COVID-19 pandemic on public health communication. They also examine the growing proliferation of deepfakes and ‘fake news,’ and the importance of maintaining journalistic integrity in an increasingly digital age.
*unSILOed Podcast is produced by University FM.*
Science journalism and the challenge of neutrality
38:23: I think that it's harder these days to sell the kind of story that I used to think was, that I still think is, kind of the heart and soul of science journalism, which is to try to separate the science from the values, try to understand why people are disagreeing, try to understand where the science has evolved, where the science might have been wrong in the past. So even something as fraught as whether sex is binary, I think at least in the past, that's something you could tackle as a journalist without taking sides, but just adding clarity and adding context and saying, you know, these people disagree because they have different values and they want to use different language. They're interpreting things differently. But there are certain aspects of biology that everybody agrees on.
Rethinking failure in science
10:08: People have to rethink the meaning of failure. If you have a hypothesis that's kind of a long shot, and you test it, and you do a really good experiment, and you find out the hypothesis didn't hold up, well, you've tested that. Maybe that's something you can rule out. That should be an acceptable, perfectly normal part of science. It's not a failure per se. It's just that sometimes you have to rule something out that's a long shot.
On the confidence trap of AI
49:01: One of the hazards of AI is that people—it's so confident—it answers questions with so much confidence, and it sounds so smart that people just assume it's right. And it's often not right. People call them hallucinations, but it can just be, with some subtle thing in your prompt, right? I think there is going to be a period where people are seduced into believing AI because it can be so incredibly smart, and it makes these statements with so much confidence. But a lot of it—there is this kind of chaos to it. Little changes in the prompt will completely change the answer.