AI is dangerous, but not for the reasons you think | Sasha Luccioni
Oct 31, 2023
Sasha Luccioni, an AI ethics researcher, argues that AI's present-day harms — carbon emissions, copyright infringement, and biased outputs — deserve more attention than speculative future risks. She offers practical ways to regulate AI so that it is inclusive and transparent, covering the environmental costs of large models, bias in facial recognition, and methods for auditing bias in AI systems.
AI models have significant environmental impacts, emitting large amounts of carbon dioxide and consuming massive amounts of energy.
Artists and authors have faced challenges in proving that their work has been used to train AI models without their consent.
Deep dives
The Impacts of AI on Sustainability
Training and running large AI models carries real environmental costs: substantial carbon dioxide emissions and energy consumption that have grown as models have grown in size. Tools like CodeCarbon can estimate a model's energy use and carbon emissions, enabling informed choices such as deploying models in regions powered by renewable energy.
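The estimate such tools automate boils down to energy times the carbon intensity of the local grid. A minimal sketch of that arithmetic, with all numbers as illustrative placeholders rather than measurements (CodeCarbon itself samples real hardware counters):

```python
# Back-of-the-envelope estimate of training emissions.
# All values below are hypothetical, for illustration only.

def estimate_emissions_kg(gpu_power_watts: float,
                          num_gpus: int,
                          hours: float,
                          grid_intensity_kg_per_kwh: float) -> float:
    """Return estimated CO2-equivalent emissions in kilograms."""
    # Energy drawn by the GPUs over the run, in kilowatt-hours.
    energy_kwh = gpu_power_watts * num_gpus * hours / 1000.0
    # Emissions depend on how the electricity was generated.
    return energy_kwh * grid_intensity_kg_per_kwh

# Hypothetical run: 8 GPUs at 300 W for 100 hours on a grid
# emitting 0.4 kg CO2e per kWh.
emissions = estimate_emissions_kg(300, 8, 100, 0.4)
print(f"{emissions:.1f} kg CO2e")  # 96.0 kg CO2e
```

The grid-intensity factor is the lever the talk points at: the same training run emits far less on a renewable-heavy grid than on a coal-heavy one.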
Unauthorized Use of Artists' and Authors' Work
Artists and authors have struggled to prove that their work was used to train AI models without their consent. Tools like Have I Been Trained let individuals search the large public datasets used for training to find their work, and this evidence has been crucial for filing copyright infringement lawsuits against AI companies. Efforts are also underway to create opt-in and opt-out mechanisms that protect creators' intellectual property.
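At its core, this kind of lookup is a search over a dataset's caption and URL metadata for a creator's name. A toy sketch of the idea, using invented placeholder records rather than real training data:

```python
# Toy sketch: searching training-dataset metadata for a creator's name,
# the kind of lookup "Have I Been Trained" performs at scale.
# The records below are invented placeholders, not real dataset entries.

records = [
    {"caption": "oil painting of a forest", "url": "example.com/1.jpg"},
    {"caption": "portrait in the style of Jane Doe", "url": "example.com/2.jpg"},
    {"caption": "abstract geometric shapes", "url": "example.com/3.jpg"},
]

def find_matches(query: str, records: list) -> list:
    """Case-insensitive substring search over caption metadata."""
    q = query.lower()
    return [r for r in records if q in r["caption"].lower()]

matches = find_matches("Jane Doe", records)
print(len(matches))  # 1
```

Real systems index billions of entries and also support image-similarity search, but the evidentiary idea is the same: a match shows a specific work appears in the training set.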
Addressing Bias in AI Systems
Bias in AI models can lead to discriminatory outcomes, including false accusations and wrongful imprisonment: existing facial recognition systems have shown significant biases, particularly against women of color. Tools like the Stable Bias Explorer let anyone probe the biases of image-generation models. Understanding and addressing these biases is essential to ensuring fair and equitable outcomes.
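One common way such explorers surface bias is to generate many images per profession prompt, label a perceived attribute for each, and tally the distribution. A minimal sketch of that tallying step, with invented labels standing in for a real model's outputs:

```python
from collections import Counter

# Toy sketch of the auditing idea behind bias-exploration tools:
# tally perceived attributes across images generated per prompt.
# The labels below are invented placeholders, not real model outputs.

generated = {
    "CEO": ["man", "man", "man", "woman"],
    "nurse": ["woman", "woman", "woman", "man"],
}

def attribute_share(labels: list) -> dict:
    """Fraction of generated images per perceived attribute."""
    counts = Counter(labels)
    total = len(labels)
    return {attr: n / total for attr, n in counts.items()}

for profession, labels in generated.items():
    print(profession, attribute_share(labels))
```

Comparing these shares against real-world workforce statistics, or across models, makes skew visible and quantifiable rather than anecdotal.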
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.