How Artificial Intelligence Has Evolved and the Implications for Health Care
Jan 16, 2024
Dr. Michael Howell discusses the evolution of AI in healthcare, from symbolic AI to AI 3.0. The limitations of AI 1.0 and the impact of biases in AI 2.0 are explored. Advancements in AI 3.0, such as foundation models and generative AI, are discussed. The importance of clinicians understanding different AI models and gaining first-hand experience is emphasized.
AI 1.0 relied on rule-based decision logic, which was limited in capability and struggled with real-world complexity.
AI 2.0 excelled at predicting future outcomes and classifying unstructured data, but it could not perform tasks beyond prediction and classification and encountered biases inherent in its training data and algorithmic design choices.
Deep dives
The Three Epochs of Artificial Intelligence in Healthcare
The podcast episode discusses three distinct epochs of artificial intelligence (AI) in healthcare. AI 1.0, which began in the 1950s, centered on symbolic AI and probabilistic models. It involved encapsulating human knowledge in computer code, such as clinical pathways embedded in electronic health records. However, AI 1.0 was limited in its ability to handle complex scenarios and was susceptible to biased data. In the 2000s, AI 2.0 transformed healthcare with deep learning, enabling the prediction of future events and the classification of unstructured data. However, AI 2.0 systems could perform only one task at a time and inherited biases from the data they were trained on. The recent emergence of AI 3.0 brings foundation models, or large language models, that can perform multiple tasks and generate new content. Though promising, AI 3.0 introduces the challenges of building an evidence base, addressing semantic bias, and ensuring fairness and equity.
Capabilities and Risks of AI 1.0 and AI 2.0
AI 1.0, representing rule-based decision logic, helped clinicians adhere to best practices in healthcare. Yet it was limited in its capabilities and encountered challenges when applied to real-world complexities. AI 2.0, by contrast, excelled at predicting future outcomes and classifying unstructured data, such as images and electronic health records. However, AI 2.0 was also limited to tasks of prediction and classification. Additionally, biases inherent in the data and in algorithmic design choices posed new challenges, undermining the fairness and accuracy of these systems.
The Promise and Challenges of AI 3.0
AI 3.0, also known as foundation models or large language models, can perform diverse tasks and generate new content from a simple prompt. These models excel at understanding context, producing human-like responses, and generating relevant answers. However, their novelty requires building an evidence base and addressing the risk of semantic bias embedded in human language. To ensure fairness, equity, and safety as AI 3.0 systems enter clinical practice, new techniques such as red teaming and adversarial question sets are needed to detect and mitigate bias.
The capabilities and risks of various types of artificial intelligence (AI) are markedly different. JAMA Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, interviews author Michael Howell, MD, MPH, chief clinical officer at Google, to discuss how AI has evolved and how to understand the problems and possibilities of each iteration.