
How Modern Medicine Has Failed Women with Elinor Cleghorn
Factually! with Adam Conover
The Relationship Between Men and Women in the 21st Century
The idea that women are governed principally by their bodies, and that they also have a very strong emotional connection to their bodies. They're not associated with the mind in the way that men have been. And so that attitude also clouded the understanding of diseases. We see this when certain chronic diseases are named, such as multiple sclerosis, which we now know today is a disease, possibly of autoimmune origin, that affects more women than men. But when it was first being documented, when women presented with the symptoms that we now know are characteristic of this disease, they were assumed to be hysterical, because that was the precedent.
Transcript
Speaker 3
Yeah, I completely lost my sense of time. But, you know, it just kills me that you'll jump into the comment section and people are like, I bet this guy's never heard of edema, I'm going to go ahead and enlighten him on that. If you could buy that level of confidence, put me on the list; I'll pay anything. And the people who criticize it, they never measure hypertrophy.
Speaker 1
They don't know how to measure, but hey, they cracked the code and we are doing something wrong. Yes, it's the effective reps model, it's the edema, they keep vomiting it back up. Sorry, pardon my French, but it gets to me sometimes. Yeah, that's why I don't join those discussions. You're a better man for it. Yeah.
Speaker 3
No, and, you know, I'll throw Helms a bone here. In totality, the net impact is that it is good that there is vigorous, robust debate about the stuff we're doing. The most you can ask for is to be doing research that anyone cares about. Because I can tell you what, I know a lot of people who do research I do not care about. I've been to some talks on campuses where I go, I cannot believe a person spends at least 40 hours a week on this topic, but more power to you. But we really are lucky to be studying exercise and nutrition, the kind of thing where, when you walk into a party or a gathering, people want to know what you're finding. You know what I mean? That really is a privilege. And if the small downside is that sometimes people on the internet call us idiots and tell us we don't know things we do know, that's okay. We can either opt out of that conversation or take the Helms route and diplomatically explain, well, that could be the case, but it probably isn't because of this, and we're more inclined to think it's the following. So at the end of the day, it's good that people care. It's good that people are staying up to date with the research, and it's good that people like Helms respond to more comments than I do, because I might be less diplomatic. I try to do my part.
Speaker 1
I just want to echo you both. Yes, it's great to ask questions. We scientists, we are doubters. We do ask questions. We submit our work, we get questions. We review other people's work, we ask questions. That's totally fine. But, like Eric said, like Helms said, the problem is when people are asking questions because, it must be edema, because I don't like the outcome of the study, so I'm going to throw something in and attack the scientist's integrity or something like that. That makes me mad, especially when they're attacking young, talented people who, again, are just trying to learn and grow. And, yeah, but they cracked the code and you're here doing everything wrong.
Speaker 3
Yeah. And I will say that that's the one element that I, at this point in my career, cannot empathize with. So I have been called an idiot. Actually, about my undergraduate thesis, there were at least two blog posts about how stupid I am, because it was about kind of a controversial topic. But take the situation where I was mentoring a doc student or a master's student and they finally published their first lead-author paper, and then people on the internet are calling them a fraud or a dummy or whatever the case is. I feel like I would take that a lot worse than someone calling me an idiot, because frankly, I tend to agree with the people who call me an idiot. But yeah, when you have a young person, and you know how vulnerable they are because it's their first paper and they're so excited, you know, they did a great job, and people are attacking their integrity or attacking their ability level, I could see how that would really be much, much tougher to handle. Yeah.
Speaker 1
I keep telling people there are so many similarities between being a dad and being a mentor and professor. It hurts you when they talk about your students, man, like when they talk about your kids. And, yes, I can empathize with that as well. Everyone, again, human beings in any study, they all deserve grace. And some people don't care, attacking people just to prove they're right and we scientists are wrong. Yeah, I don't like that part of social media. Yeah.
Speaker 2
Well, now that we've done the scientist apology tour and martyrdom, we've martyred ourselves so well, I feel pretty special. Let's talk about some things related to your findings. So the general perception now is that more volume, if you can recover from it and it doesn't seem to be negatively impacting performance or leading to injury or burnout, is probably a good thing for hypertrophy. Now, you have conducted studies not only comparing groups that are higher versus lower volume, but also the progression of volume. And you've compared this in both women and men. And you've also been able to get the anecdotal word on the ground, the qualitative experiences from these participants. Because one thing that I sometimes think is worth commenting on is that we rarely do exit surveys in studies. I have done this in a few studies. Actually, my very first student that I mentored, whose master's thesis committee I was on, did a case series on weightlifters and powerlifters and had them go on a borderline ketogenic diet, and it was mixed methods. One of the things we asked them on the way out was, would you do this on your own? And if so, how would you modify it? And despite some people getting what would be described as pretty decent results in terms of body composition change while maintaining performance, all of them said they would do something to change it. So it wasn't necessarily worth it to them. And the difference between looking more jacked and getting a 10% improvement in muscle thickness is miles apart. So I would love to hear your perspective: despite what may be reported in the papers or in the current meta-regression, what is your take on volume and how to apply it, considering all those factors, Dr. De Souza?
Speaker 1
Oh man, that's a question. It is like, uh, everything that's linear in human physiology, correct? But, well, it's only linear until a certain point, right? It plateaus after a while, you could say. I would agree. I think, with everything you talked about, that's how I now keep approaching volume.
Speaker 2
Even blood pressure: it'll go up until something ruptures, and then it'll plateau and you will flatline.
Speaker 1
And again, when I see the studies, again, with the sampling variance, considering everything else, considering, as we're talking as a group more and more, and I know, I'm not saying you're doing better, but the way you communicate and the way we are looking at the data. For example, the absolute change with the 52 weekly sets is similar to the Aube study with way fewer sets, for example. That indicates something to me, but there are other factors, and then I'm going to answer you, just to give you some context. That tells me that, yeah, maybe until a certain point, as we increase volume, we get more benefit. But when I have studies with 12 to 24 sets with a similar effect size or absolute change compared to a progression to 52 weekly sets, that tells me something. And that is the way I look at the Josh data, because it shows that dose response. And again, we're only talking about studies in trained people; they analyzed everybody together, but it shows that you get more hypertrophy until a certain point and then it almost plateaus. It's not always linear, right? Anyway, so when it comes to trained people, I think above a certain volume there are diminishing returns for sure. And maybe the difference, the question I keep asking, and again, it might be subjective: for someone getting naked and flexing on the podium, yes, at that point whatever centimeters are important. And again, I value anecdote, I value experience, but I'm just trying to understand that dose-response relationship.
So again, it always calls my attention that when I look at our 40 studies individually, we have a huge variance: trained people, again, women versus men, more or less trained. And the effect sizes and absolute changes are not so different. It's like, okay. So should people... I think it's a feasible option. Like, if you have tried something, I think I would put some stock in, okay, I have tried so many different approaches, why not have a specialization phase for that particular muscle, whatever? That's an option. I keep telling, and I want to touch on something: in my opinion, we may come up with a design to try to understand that. But again, let's go back to the... So I think that when you look at those studies, like, for example, Brigatto and Schoenfeld, they clearly show that dose response. They show more volume is better. When you look at the absolute change, yes, Brad's is the one that's a little higher, but it's still in a feasible range, in my opinion, compared to the effect sizes and absolute changes. But the participants in those two studies were weaker. If you look at the one-rep-max to body-mass ratio, they have a way lower ratio compared to the Enes study, Aube, and Andrew Barshhan. So does baseline strength impact the data? What I'm presenting at the European conference says something about it. And then, should I start to think about that? That's a big question, right? And again, as I keep telling you, we are only scratching the surface. We are holding everything constant and just manipulating weekly sets. What about the distribution, the volume allocation, the baseline strength? There are so many things we still are not able to account for when we are studying volume. And again, when I look at the Enes study, the 52 weekly sets, the one-rep-max to body-mass ratio is 1.7. Aube, 2.05. Andrew, 1.9-something. And the magnitude for those studies is smaller compared to some other studies with weaker counterparts. And if you look at, again, the moderator analysis, with all the issues, and we know all the issues with moderator analysis, yes.
But if you want to look at it from a more... subject, not subjective, I don't know if that's the right word, but when you compare the graph for everybody combined with trained individuals, it kind of aligns with my data: the magnitude of the effect gets smaller. Yeah. So now, to answer your question: in my opinion, every study that comes out, when it's well conducted, does not prove someone is right or wrong, does not prove another study is wrong. It gives you another possibility, another opportunity, another way to apply the science. You have a toolbox. I keep using that analogy. Each study, when well conducted, is a tool for your toolbox. And with more studies, the toolbox keeps getting bigger. When you don't have room to put in a new tool, you're not going to throw the old ones away; you buy a new toolbox and keep adding. So the studies give us perspectives, general possibilities. And I think, based on the data, I would not say it's a waste of time going high volume. I think it's an option there. My biggest question is, when you talk about what is significant versus what is relevant from a practical standpoint, that is where I keep... Because that's subjective, I agree, right? Your example: that 0.3 centimeters makes a difference for a competitive athlete. That's fine. But what about the cost-benefit to get there, spending way more time, for a dad like me with a full-time job? Based on the data we have at this point, high volume, yes, it can be one more tool in your box. I think it's worth it. And if I would apply a high-volume intervention to myself, it would be a slow progression rather than going from 10 weekly sets to 52, right? And some people do that. Again, I think that's what the data tells us. And consider another big problem with our field, right? Studies are underpowered. Everybody knows that. Studies are very short, right? It's just a snapshot, a short snapshot of someone. Like, how long have I been training? Eric, two decades, right? Or one and a half, something like that. So I'm old. Two decades? Because you look way younger than me. But anyway, that's because I've been training for those two decades. Fortunately, I'm training for two... Oh, come on, don't say that.
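To make the diminishing-returns point above concrete, here is a minimal sketch in Python of fitting a saturating dose-response curve to weekly sets versus a hypertrophy effect size. The numbers are entirely made up for illustration, not data from the Enes, Aube, or Schoenfeld studies or from the meta-regression discussed in the episode, and the Emax model form is just one plausible way to express "it plateaus after a certain point."

```python
# A minimal sketch (not the authors' analysis): fitting a saturating
# dose-response curve to HYPOTHETICAL weekly-set vs. hypertrophy data
# to illustrate the "diminishing returns" idea discussed above.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical group means: weekly sets per muscle vs. effect size (made up for illustration)
weekly_sets = np.array([4, 8, 12, 16, 24, 36, 52], dtype=float)
effect_size = np.array([0.15, 0.28, 0.35, 0.40, 0.44, 0.46, 0.47])

def emax_model(dose, emax, ed50):
    """Michaelis-Menten-style saturating curve: rises steeply, then plateaus."""
    return emax * dose / (ed50 + dose)

params, _ = curve_fit(emax_model, weekly_sets, effect_size, p0=[0.5, 10.0])
emax, ed50 = params
print(f"Estimated ceiling (Emax): {emax:.2f}, half-max dose (ED50): {ed50:.1f} sets/week")

# The marginal benefit of each extra block of sets shrinks as volume climbs
for lo, hi in [(12, 24), (24, 36), (36, 52)]:
    gain = emax_model(hi, *params) - emax_model(lo, *params)
    print(f"Predicted extra effect going from {lo} to {hi} weekly sets: {gain:.3f}")
```

On this kind of curve, going from 12 to 24 weekly sets buys noticeably more than going from 36 to 52, which is the cost-benefit trade-off Dr. De Souza raises for someone with a full-time job.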
Speaker 2
Well, and I'm also maybe a
Speaker 3
little bit younger. I've actually been training for over two decades as well, because I got obsessed with lifting when I was 12. So my decade number is always going to be like, how does that work? But you don't get a lot of 12-year-olds who are pounding the gym and watching Arnold Schwarzenegger videos. But that's why we love you, you know what I mean?
Speaker 2
Now, these are fantastic points, Dr. De Souza. And I think one thing I wanted to highlight out of what you said is the length of the studies, as well as the difference between trained, quote unquote trained, and untrained. Because untrained is clear, but trained is a huge spectrum, from, yeah, last semester I started lifting weights and I barely meet the entry criteria for your study, versus, look at that, Trexler volunteered for your study because he's on campus too, just working across the hall. And those two people could both be in the same study and get randomized to two different groups, and if it's a small sample size, here we go. So one of the things I think we always have to ask ourselves when interpreting research is, what should I expect? Or what do I expect with my current understanding of the data? And I think sometimes people look a little too dichotomously, kind of one versus the other, when they think about trained versus untrained. And they think, oh, well, does that mean that trained people don't respond as well to high volume? And to channel my inner James Steele: no, it just means trained people don't respond. At least not in the eight-to-twelve-week time period we anticipate. Even when you do everything right and you stack the chips as well as you can, we're often looking at 50% of the change in terms of just the raw percentage change in many hypertrophy studies. And when you look at observational research on trained individuals, they're not getting any potential benefit of the lab effect, the change in variables. And you think about the mind state that a trained person is in when they typically participate in a study: they're ready to go. Maybe they're coming off a deload. Most of the time, they're going to be like, no, things are going well right now, I'm not going to go to your lab and train. And I think those things probably do create a systematic difference. So I often ask myself, over what timeframe do I think, doing everything right, the average quote unquote well-trained person would have a measurable change in this outcome? And then I have to think back to, okay, is this study going to capture it? And if it doesn't, that doesn't mean that there's no effect of the protocol; the effect size might be too small for us to measure given the measurement error. Which goes right back to why it's important to do these multi-site data collections, to establish not only what is a significant change, but what is a measurable change, and then what might be a meaningful change, because those may be three distinct numbers. So those are all really, really good points. And I think I'm excited for the future, because I don't think we're far off. Trex, imagine this: instead of having just these meta-analyses and meta-regressions that come out and tend to shift the perspective on things, maybe, like some other fields, we had continuously updated meta-regressions with, you know, GRADE-level criteria applied to them, and there was a working group that would update them on a regular basis. So you could just cite the 2025 or 2026 or 2027 version, and we watch those uncertainty values shrink. I think we're not far off from that. And I think that's going to become easier and easier with code and the availability of it.
And, you know, right now I do think that if three more studies were published on volumes above, say, 25 sets, it could change some of the far-right aspects of that meta-regression graph. And I'm not sure which way it would go, but that's kind of the point. Yeah.
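The distinction drawn above between a significant change, a measurable change, and a meaningful change can be sketched with a few lines of Python. The test-retest muscle-thickness values below are hypothetical, and the thresholds use standard reliability formulas (typical error, 95% minimal detectable change, and the common 0.2-SD "smallest worthwhile change" heuristic); none of these figures come from the studies discussed.

```python
# A minimal sketch, with MADE-UP numbers, of the difference between a change
# that is measurable (relative to measurement error) and one that might be
# practically meaningful, as discussed above.
import numpy as np

# Hypothetical test-retest muscle-thickness measurements (cm) on the same subjects
test1 = np.array([3.10, 2.85, 3.40, 3.05, 2.95, 3.20, 3.55, 3.00])
test2 = np.array([3.15, 2.80, 3.45, 3.00, 3.00, 3.15, 3.60, 3.05])

diff = test2 - test1
typical_error = diff.std(ddof=1) / np.sqrt(2)   # within-subject error of a single measurement
mdc95 = 1.96 * np.sqrt(2) * typical_error       # minimal detectable change for an individual
swc = 0.2 * test1.std(ddof=1)                   # smallest worthwhile change (0.2 x between-subject SD)

print(f"Typical error: {typical_error:.3f} cm")
print(f"Minimal detectable change (95%): {mdc95:.3f} cm")
print(f"Smallest worthwhile change (0.2*SD): {swc:.3f} cm")
# If an observed change (e.g., the 0.3 cm mentioned earlier) falls below the MDC95,
# it cannot be distinguished from measurement noise, even if it would matter on stage.
```

This is why "significant," "measurable," and "meaningful" can be three distinct numbers: the first depends on the sample and statistics, the second on the instrument's error, and the third on the athlete's goals.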
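The continuously updated ("living") meta-analysis idea, where uncertainty shrinks as studies accumulate, can also be illustrated. The sketch below uses simple fixed-effect inverse-variance pooling over hypothetical effect sizes and standard errors; a real living meta-regression of the kind imagined here would use random-effects models, moderators such as weekly sets, and GRADE-style evidence ratings, so treat this purely as a toy demonstration of narrowing confidence intervals.

```python
# A toy "living meta-analysis": as new (HYPOTHETICAL) studies are added,
# the pooled estimate's confidence interval shrinks. Fixed-effect
# inverse-variance pooling only; not a full meta-regression.
import numpy as np

# Hypothetical per-study effect sizes and standard errors (illustrative only)
effects = np.array([0.35, 0.20, 0.50, 0.28, 0.40, 0.32])
se      = np.array([0.20, 0.18, 0.25, 0.15, 0.22, 0.12])

weights = 1.0 / se**2
for k in range(1, len(effects) + 1):
    w, y = weights[:k], effects[:k]
    pooled = np.sum(w * y) / np.sum(w)          # inverse-variance weighted mean
    pooled_se = np.sqrt(1.0 / np.sum(w))        # standard error of the pooled estimate
    lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
    print(f"After {k} studies: pooled ES = {pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Each "annual update" in this toy version is just re-running the pooling with the new studies appended, which is why citing the 2025, 2026, or 2027 version of such an analysis becomes straightforward once the code and data are openly available.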
A deeply embedded idea in our culture is the sexist notion that men are the “default” human, and women the unknowable “other”. Nowhere is this more visible than in the history of medicine, with disastrous consequences for women's health. On the show this week to discuss her new book is Elinor Cleghorn, author of Unwell Women: Misdiagnosis and Myth in a Man-Made World. You can check out her book at factuallypod.com/books.