AI Tools Are Secretly Training on Real Images of Children
Jun 11, 2024
Ethical concerns arise as AI is trained on images of Brazilian children without consent. Privacy violations, deepfakes, and efforts to combat abuse material are highlighted in this discussion.
AI training datasets like LAION-5B are using children's images without consent, risking privacy violations.
Efforts are underway to address the unauthorized use of children's images in AI datasets, and legislation is being considered.
Deep dives
Unauthorized Use of Children's Images in AI Training Data
A popular AI training dataset, LAION-5B, has been accused of using over 170 images of Brazilian children without their consent. These images were taken from sources including mommy blogs and YouTube videos, violating the children's privacy rights. AI models trained on this data can generate realistic imagery of children, posing a significant risk of misuse and exploitation.
Efforts to Address the Issue of Unauthorized Data Use
Efforts are being made to address the unauthorized use of children's images in AI training datasets like LAION-5B. Organizations such as the Internet Watch Foundation and Human Rights Watch are collaborating to remove links to illegal content from datasets. However, the scale of the problem remains substantial, and similar unauthorized images of children from around the world may also be present in the dataset. Legislation and regulations, such as those proposed in Brazil and the US, are being considered to tackle the challenges posed by deepfake technology and to protect individuals' rights.
1. Training AI on Real Images of Children Without Consent
A popular AI training dataset is “stealing and weaponizing” the faces of Brazilian children without their knowledge or consent, human rights activists claim.