The podcast discusses how AWS trains AI models on user data and the challenges of opting out. It highlights the lack of transparency in AWS's practices and raises concerns about their terms of service restrictions.
AWS is training its AI services on user data generated by various services, raising concerns about transparency and user consent.
Opting out of AWS's AI training is a convoluted process that lacks simplicity and transparency, raising questions about AWS's customer-centric approach.
Deep dives
Training AI on Usage of Subset of AWS Services
AWS is training its AI services on usage data from a subset of its own cloud services, a practice that is not explicitly disclosed to users. While AWS has long treated user data as sacrosanct, it is now using data generated by services such as Amazon CodeGuru Profiler, Amazon CodeWhisperer Individual, Amazon Comprehend, Amazon Lex, Amazon Polly, Amazon Rekognition, Amazon Textract, Amazon Transcribe, and Amazon Translate to train its AI models. This hidden practice raises concerns about transparency and user consent.
Opting out of AWS's AI Training
Opting out of AWS's AI training is a convoluted process. Rather than offering a straightforward switch, AWS requires users to enable AI services opt-out policies in AWS Organizations, author a custom opt-out policy document, and attach that policy to the organization's root OU (Organizational Unit). Additionally, validating that the policy actually opts the organization out of AI training is complex. The lack of simplicity and transparency in this process raises questions about AWS's customer-centric approach and aligns more with obfuscation and self-interest.
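As a rough sketch of what the custom policy step involves, the AWS Organizations AI services opt-out policy type uses a JSON document like the following, where the `@@assign` operator sets the opt-out value for all services and all accounts under the attachment point (exact keys per AWS's documented policy syntax; verify against the current docs before relying on it):

```json
{
  "services": {
    "default": {
      "opt_out_policy": {
        "@@assign": "optOut"
      }
    }
  }
}
```

The policy type must first be enabled on the organization's root (roughly, `aws organizations enable-policy-type --policy-type AISERVICES_OPT_OUT_POLICY`), then the policy created with `aws organizations create-policy --type AISERVICES_OPT_OUT_POLICY` and attached to the root OU with `aws organizations attach-policy`, which is the multi-step process the episode criticizes.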
1. AWS's Training of AI Models on User Data and the Difficulty of Opting Out