Jess Morley, a postdoctoral research associate at Yale's Digital Ethics Center, shares her expertise on integrating ethics into digital health. She discusses the ethical dilemmas faced by health tech leaders and the challenges of reconciling good intentions with business pressures. Morley critiques the UK's AI action plan and emphasizes the importance of inclusive data practices. She advocates for embedding ethics in procurement processes and product development, urging stakeholders to prioritize social responsibility amid a growing focus on economic growth.
Ethics in digital health must transition from a theoretical concept to an actionable framework that informs decision-making processes.
The conversation around digital health increasingly emphasizes economic growth over ethical considerations, putting equitable health outcomes at risk.
Implementing AI in healthcare requires careful consideration of cultural contexts and motivations to ensure appropriate and beneficial applications.
Deep dives
Actionable Ethics in Digital Health
The discussion emphasizes the necessity of translating ethical concepts into actionable practices within digital health. There is a concern that ethics often becomes a perfunctory exercise, where organizations may outwardly claim adherence to ethical principles without taking substantial action. The goal should be to elevate ethical considerations from a secondary priority to a core aspect of decision-making processes, especially in a climate where business pressures frequently override ethical concerns. By framing ethics as an integral part of product design rather than an additional layer, organizations can make more meaningful strides toward creating equitable healthcare solutions.
The Shift in Industry Focus
A notable trend discussed is the shift in focus from ethical considerations to economic growth in digital health. Conversations around ethics gained traction a few years ago, when ethical practice was framed as a strategic advantage. Recent developments, however, indicate a regression: a growing emphasis on the economic opportunities presented by AI without a balanced discussion of the associated risks. This repositioning could undermine ethical frameworks and the potential for equitable health outcomes, raising concerns that profits are being prioritized over patient welfare.
Defining Ethical Frameworks
Several key ethical principles within digital health are delineated to enhance understanding and application. Concepts such as beneficence (doing good), non-maleficence (avoiding harm), autonomy (respecting individual rights), justice (fairness), and explainability (understanding how AI systems reach their outputs) are foundational. These principles guide the responsible use of AI and help ensure that technologies serve the interests of diverse populations. The conversation underlines that ethical clarity can help organizations navigate complex decisions and build systems that genuinely benefit society.
Cultural Context and AI Adoption
When considering the implementation of AI in healthcare settings, particularly in low- and middle-income countries, cultural context and social acceptability are paramount. Any assertion that AI will resolve issues of access or bias must be critically examined, as cultural perceptions of health and technology vary significantly. Stakeholders are encouraged to have difficult conversations about whether AI truly represents the best solution to the identified problem or whether alternative approaches may be more effective. Scrutinizing the motivations behind adopting AI technologies can lead to more thoughtful and beneficial implementations.
Integrating Ethical Practices into Business Models
For vendors and product managers, integrating ethical considerations into everyday operations can redefine the approach to digital health solutions. Rather than viewing ethics as a burdensome requirement, organizations should embed ethical practices into their standard operational guidelines. Practical steps include ensuring diverse representation in data sets, conducting rigorous evaluations, and maintaining transparency about product development. This proactive approach not only fosters trust and credibility but also supports long-term success by producing effective and socially responsible health solutions.
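To make the "diverse representation in data sets" step a little more concrete, here is a minimal sketch (not from the episode; the `ethnicity` column, reference shares, and 0.5 threshold are illustrative assumptions) of how a product team might flag under-represented groups in a training data set before model development:

```python
# Minimal sketch: a pre-training audit that flags groups whose share in the
# data set falls well below their share in a reference population.
from collections import Counter

def representation_audit(records, group_key, reference_shares, min_ratio=0.5):
    """Flag groups whose observed share is below min_ratio * expected share."""
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    flags = {}
    for group, expected_share in reference_shares.items():
        observed_share = counts.get(group, 0) / total if total else 0.0
        if observed_share < min_ratio * expected_share:
            flags[group] = {"observed": round(observed_share, 3),
                            "expected": expected_share}
    return flags

# Hypothetical example: reference shares might come from census figures.
patients = [{"ethnicity": "A"}] * 90 + [{"ethnicity": "B"}] * 10
print(representation_audit(patients, "ethnicity", {"A": 0.6, "B": 0.4}))
# -> {'B': {'observed': 0.1, 'expected': 0.4}}  # group B is under-represented
```

A check like this is only a starting point; what counts as an acceptable threshold, and which reference population is appropriate, are themselves ethical and contextual judgments of the kind discussed above.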
What can the UK’s AI Action Plan reveal about the state of ethics in the digital health industry?
If you are a health system leader or government entity, how do you elevate ethical approaches in your ecosystem? What levers are available?
And for the builders (product leaders, founders, clinicians at health tech companies, software engineers, designers, QA folk): how do you negotiate and advocate for ethical approaches against business realities in the current climate?
We cover all of this, as well as Jess's super hot takes on the UK government's AI Action Plan, Ethics 101 for vendors, researchers and policy folk, AND what leaders in LMIC settings can take away from it all.
Jess Morley of the Digital Ethics Center at Yale previously worked with NHSX and on the Goldacre Review in 2022. She has deep expertise in ethics and policy, and some really unique insights.