Last Week in AI

AI Fails to Diagnose COVID-19, Difficulties with AI Regulation, and more on Surveillance

May 2, 2020
The hosts dive into the significant shortcomings of AI in diagnosing COVID-19, highlighting issues with data quality and representation. They critique a recent review revealing flaws in models that overlooked lingering symptoms. Best practices for predictive models in medicine are emphasized, advocating for thorough evaluation and documentation. The discussion also touches on the troubling ethics of AI in healthcare and surveillance, including connections between Seattle's AI industry and Israeli technology, raising important questions about privacy and human rights.
AI Snips
INSIGHT

Flawed AI COVID Diagnosis

  • Many COVID-19 diagnosis AI models are flawed due to biased data and lack of proper benchmarking.
  • These models often fail to represent diverse populations and are trained only on patients whose outcomes are already known (recovered or deceased), excluding those still in care; see the sketch below.
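
A minimal, hypothetical sketch (not from the episode, with invented numbers) of the selection bias described above: if deaths tend to be recorded sooner than recoveries, an outcome rate computed only from resolved cases overstates the true risk.

# Hypothetical illustration: outcome estimates computed only on "resolved"
# patients (recovered or deceased) are biased when deaths resolve faster
# than recoveries. All numbers here are invented for this sketch.
import numpy as np

rng = np.random.default_rng(0)

n = 10_000
true_risk = 0.05                          # assumed true mortality risk
died = rng.random(n) < true_risk

# Assume most deaths are already observed when the dataset is assembled,
# while many survivors are still hospitalized (outcome not yet known).
resolved = np.where(died,
                    rng.random(n) < 0.9,  # 90% of deaths already recorded
                    rng.random(n) < 0.4)  # only 40% of survivors discharged

print(f"true mortality rate:           {died.mean():.3f}")
print(f"rate from resolved cases only: {died[resolved].mean():.3f}")  # roughly 2x too high
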
ADVICE

TRIPOD Checklist Recommendation

  • Machine learning researchers should use the TRIPOD checklist for developing diagnostic prediction models.
  • This 22-item checklist helps researchers avoid bias and improve the quality of models intended for medical use.
INSIGHT

Fairness in AI and Law

  • Automating fairness in AI is difficult due to law's contextual nature and flexibility.
  • Legal systems often prioritize contextual equality, adapting to societal changes, which makes automation challenging.