Lots of Hospitals Are Using AI. Few Are Testing For Bias
Feb 27, 2025
In this discussion, Paige Nong, an Assistant Professor at the University of Minnesota specializing in AI's influence on healthcare, reveals the current landscape of AI use in hospitals. She highlights the concerning lack of bias testing in predictive algorithms, particularly those affecting marginalized patients. The conversation emphasizes the urgent need for consistent governance to ensure equitable treatment. Nong also addresses challenges faced by safety net hospitals and calls for robust evaluations of AI tools to enhance patient experiences and support health equity.
While many hospitals are adopting AI technologies to enhance patient care, the lack of monitoring for bias significantly jeopardizes healthcare equity.
Research reveals alarming bias in predictive algorithms used in healthcare, including tools that favored white patients over equally sick Black patients, raising urgent questions about patient welfare.
Deep dives
The Rapid Rise of AI in Healthcare
Artificial intelligence (AI) is increasingly integrated into healthcare, with programs helping to answer patient questions and diagnose illness. Despite the hype surrounding AI's capabilities, many hospitals are only beginning to adopt these tools and to examine their implications in depth. Current data suggest that while a significant share of hospitals use these technologies, monitoring for bias and effectiveness remains inadequate. With more than 350 gigabytes of data per patient being processed, concern is growing about the risks these tools pose to patient care and how reliable they really are.
Bias in Predictive Algorithms
Research by Paige Nong highlights alarming discrepancies in how predictive algorithms are used within healthcare systems, particularly in their impact on minority patients. A notable example is an algorithm that inadvertently favored white patients because it used healthcare costs, rather than health outcomes, as its measure of need: patients who face barriers to care generate lower costs even when they are just as sick, so the algorithm underestimated their need. This systemic bias raises health risks for marginalized groups, who may require more intensive care than such algorithms recommend. Because many hospitals lack systematic checks and evaluation mechanisms, bias like this can easily persist in predictive models, raising urgent questions about healthcare equity.
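This proxy-label failure is straightforward to illustrate. The sketch below is a hypothetical simulation, not a reproduction of the study discussed in the episode: two patient groups are assumed to have identical distributions of health need, but one generates lower costs because of barriers to care, so ranking patients by past cost under-selects that group for extra care management.

```python
# Hypothetical illustration of cost-as-proxy bias; all numbers are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical distributions of true health need.
group = rng.integers(0, 2, size=n)               # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # latent health need

# Assume group B faces access barriers, so equal need produces lower observed cost.
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0.0, 0.1, size=n)

# "Algorithm": flag the top 10% of patients for extra care management.
k = n // 10

def share_from_group_b(score):
    flagged = np.argsort(score)[-k:]             # indices of the k highest scores
    return (group[flagged] == 1).mean()

print("Group B share when ranking by cost (proxy):", round(share_from_group_b(cost), 3))
print("Group B share when ranking by true need:   ", round(share_from_group_b(need), 3))
```

A bias check of the kind the episode calls for would compare exactly these group-level selection or error rates before a model is deployed and while it is in use.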
Governance and Accountability in AI Use
Nong's findings underscore a crucial lack of governance in hospitals over how AI tools are assessed, with many institutions failing to implement effective oversight mechanisms. Interviews across a range of healthcare settings revealed wide variation in practice, from robust committees that vet AI tools to individual decision-making with no consideration of equity. The findings point to a need for comprehensive policy intervention, including stronger transparency and evaluation requirements for AI applications. Closing these governance gaps will be vital to ensuring that AI in healthcare improves operational efficiency without compromising patient welfare.
New research sheds light on how many hospitals are using artificial intelligence, what they’re using AI for, and what it means for patients and policymakers.
Guest:
Paige Nong, PhD, Assistant Professor, University of Minnesota School of Public Health
Learn more and read a full transcript on our website.
Want more Tradeoffs? Sign up for our free weekly newsletter featuring the latest health policy research and news.
Support this type of journalism today, with a gift.