AI is dangerous, but not for the reasons you think | Sasha Luccioni
Dec 15, 2023
Sasha Luccioni, an AI ethics researcher, discusses the current negative impacts of AI, including carbon emissions, copyright infringement, and biased information. She offers practical solutions for regulation to ensure inclusivity and transparency.
Artificial intelligence (AI) models emit significant amounts of carbon dioxide and consume vast amounts of energy during training, highlighting the need to measure and mitigate their environmental impact.
Unauthorized use of artworks to train AI models can be detected and addressed with tools like 'Have I Been Trained', helping ensure that artists and authors are properly credited and compensated for their work.
Deep dives
Measuring AI's Environmental Impact
The podcast explores the environmental impact of artificial intelligence (AI) models and the need to measure and mitigate their carbon emissions. Large language models, like BLOOM, consume vast amounts of energy and emit significant amounts of carbon dioxide during training. Tools like CodeCarbon can estimate energy consumption and emissions, enabling informed choices about more sustainable models and deployment on renewable energy.
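In practice, instrumenting a workload with CodeCarbon takes only a few lines. Below is a minimal sketch: the project name and the placeholder workload are illustrative, standing in for a real training loop rather than anything described in the talk.

```python
from codecarbon import EmissionsTracker

# "demo-training-run" is a hypothetical project name for this sketch.
tracker = EmissionsTracker(project_name="demo-training-run")
tracker.start()
try:
    # Placeholder workload standing in for a real training loop.
    total = sum(i * i for i in range(10_000_000))
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-equivalent

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Because the tracker reads the machine's hardware and regional energy mix, the same code reports different emissions on different infrastructure, which is what makes comparisons between deployment choices possible.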
Artist Consent and Copyright Infringement
The podcast discusses the challenge artists and authors face in proving that their work was used to train AI models without their consent. Organizations like Spawning AI and their tool 'Have I Been Trained' help search training datasets to detect unauthorized usage, providing essential evidence for legal action. Recent collaborations offer opt-in and opt-out mechanisms for creating datasets, ensuring that human-created artworks are not exploited without proper consent and attribution.
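Under the hood, tools like 'Have I Been Trained' index the image–caption metadata of large scraped training sets and let creators query it. The sketch below is a simplified, hypothetical stand-in, not Spawning AI's actual search service: it filters a local metadata file by artist name using pandas, with the file name and artist name both placeholders.

```python
import pandas as pd

# Hypothetical metadata file using the (url, caption) layout common to
# scraped image-text training sets; not an actual Spawning AI dataset.
metadata = pd.read_parquet("dataset_metadata.parquet")

def find_mentions(artist_name: str) -> pd.DataFrame:
    """Return rows whose caption mentions the artist (case-insensitive)."""
    mask = metadata["caption"].str.contains(artist_name, case=False, na=False)
    return metadata.loc[mask, ["url", "caption"]]

matches = find_mentions("Jane Doe")  # placeholder artist name
print(f"{len(matches)} possible uses found")
print(matches.head())
```

Even this naive substring search illustrates why such evidence matters: a creator who finds hundreds of their captioned works in a training set has a concrete basis for an opt-out request or a legal claim.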
Addressing Bias in AI
The podcast highlights the bias encoded in AI models, which yields stereotypical representations and discriminatory outcomes. Facial recognition systems, for example, fail disproportionately on women of color, and misidentifications have led to false accusations and wrongful imprisonment. Tools like the Stable Bias Explorer help explore and understand the bias present in image-generation models, and their insights can inform legislation, governance mechanisms, and more trustworthy AI models.
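The core idea behind the Stable Bias Explorer can be approximated in a few lines: prompt a text-to-image model with otherwise-identical prompts that vary only the profession, then inspect how the generated faces skew. The sketch below uses the Hugging Face diffusers library; the model ID, profession list, and prompt template are illustrative assumptions, and the real Explorer's methodology is considerably more involved.

```python
import torch
from diffusers import StableDiffusionPipeline

# Illustrative checkpoint; any text-to-image model can be probed the same way.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

professions = ["CEO", "nurse", "scientist", "janitor"]  # small illustrative set

for job in professions:
    # Identical template; only the profession varies, isolating the model's priors.
    images = pipe(f"a photo of the face of a {job}", num_images_per_prompt=8).images
    for i, img in enumerate(images):
        img.save(f"{job}_{i}.png")  # inspect or cluster these by perceived attributes
```

Comparing the resulting image sets side by side makes the encoded stereotypes visible, for instance if every "CEO" render looks alike while "nurse" renders skew the other way, which is exactly the kind of evidence regulators and auditors can act on.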
AI won't kill us all — but that doesn't make it trustworthy. Instead of getting distracted by future existential risks, AI ethics researcher Sasha Luccioni thinks we need to focus on the technology's current negative impacts, like emitting carbon, infringing copyrights and spreading biased information. She offers practical solutions to regulate our AI-filled future — so it's inclusive and transparent.