Episode 21: The True Meaning of 'Open Source' (feat. Sarah West and Andreas Liesenfeld), November 20, 2023
Nov 30, 2023
Sarah West and Andreas Liesenfeld join hosts Alex and Emily to discuss what software companies really mean when they call their work 'open source'. They explore the need for transparency in AI systems, the challenges of finding and evaluating open source alternatives, the limitations of AI capability indexes, the importance of regulating technology, and the debate over fair use of copyrighted material.
Deep dives
UnitedHealth uses flawed AI model to deny care, lawsuit alleges
UnitedHealth, the largest healthcare company in the US, is accused of using a deeply flawed AI algorithm to wrongfully deny critical health coverage to elderly patients. The algorithm, called nH Predict, estimates the post-acute care a patient on a Medicare Advantage plan will need, based on data from six million patients. The lawsuit claims the algorithm has a 90% error rate, resulting in patients being discharged from care facilities prematurely and having to pay for care out of their own pockets. The case highlights the need for transparency and accountability in AI systems that impact human lives.
Meta disbanded its responsible AI team
Meta (formerly Facebook) has disbanded its responsible AI team, raising concerns about the company's commitment to ethical and accountable AI practices. The team's members, who were tasked with addressing the impact of AI technologies on society, have been reassigned to other AI teams within the company. The move has drawn criticism, as a dedicated responsible AI team is crucial for ensuring ethical decision-making, transparency, and the identification of potential biases or harms caused by AI systems. The disbanding underscores the need for robust, independent oversight of AI development and deployment.
Resignation from Stability AI over fair use dispute
Ed Newton-Rex, former lead of the audio team at Stability AI, has announced his resignation over a disagreement with the company's position that training generative AI models on copyrighted works qualifies as fair use. Stability AI submitted a 23-page comment to the US Copyright Office arguing that such training is a transformative and socially beneficial use. Newton-Rex disagrees, arguing that generative AI models can directly compete with the copyrighted works they are trained on, and that using those materials to train AI models cannot be considered fair use. The case reflects the ongoing debate over the use of copyrighted works in AI development and the need for clear guidelines and regulation in this area.
Researchers Sarah West and Andreas Liesenfeld join Alex and Emily to examine what software companies really mean when they say their work is 'open source,' and call for greater transparency.
This episode was recorded on November 20, 2023.
Dr. Sarah West is the managing director of the AI Now Institute. Her award-winning research and writing blends social science, policy, and historical methods to address the intersection of technology, labor, antitrust, and platform accountability. And she’s the author of the forthcoming book, "Tracing Code."
Dr. Andreas Liesenfeld is assistant professor in both the Centre for Language Studies and the Department of Language and Communication at Radboud University in the Netherlands. He's a co-author of research from this summer critically examining the true "open source" nature of models like LLaMA and ChatGPT.