
Microsoft CTO Kevin Scott on AI copilots, disagreeing with OpenAI, and Sydney making a comeback

Decoder with Nilay Patel


Microsoft CTO Kevin Scott on the Training-Data Feedback Loop Problem in AI

Training AI models on large volumes of data from the web and other sources can create a feedback loop, where models end up being trained on output that earlier models generated, which can produce strange results. To mitigate this, data-quality assessment techniques are used to avoid training on low-quality data. Beyond that, Scott argues for industry-wide conventions or regulatory requirements to mark AI-generated content. Microsoft has been working on a media provenance system that can identify AI-generated content, which is useful both for detecting disinformation and for controlling what content is used to train AI systems. Conversations are ongoing with other companies, including Google and Adobe, and Microsoft is open to adopting existing standards to solve the content-authenticity problem.
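As a rough illustration of the kind of filtering described above, here is a minimal sketch (not Microsoft's actual pipeline) of screening a scraped corpus before training: documents are dropped if a quality score falls below a threshold or if provenance metadata marks them as AI-generated. The `quality_score` heuristic and the `ai_generated` provenance flag are illustrative assumptions, standing in for a learned quality classifier and a C2PA-style provenance manifest.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    # Hypothetical provenance metadata, e.g. from a C2PA-style manifest.
    provenance: dict = field(default_factory=dict)

def quality_score(doc: Document) -> float:
    """Stand-in for a learned quality classifier; here, a trivial length heuristic."""
    words = doc.text.split()
    return min(len(words) / 100.0, 1.0)

def is_ai_generated(doc: Document) -> bool:
    """Check a provenance flag that an industry-wide marking convention might supply."""
    return doc.provenance.get("ai_generated", False)

def filter_training_corpus(docs, min_quality=0.5):
    """Drop low-quality and AI-generated documents to limit feedback loops."""
    return [
        d for d in docs
        if quality_score(d) >= min_quality and not is_ai_generated(d)
    ]

if __name__ == "__main__":
    corpus = [
        Document("A short snippet."),
        Document("A longer, human-written article. " * 20),
        Document("Synthetic text. " * 50, provenance={"ai_generated": True}),
    ]
    print(len(filter_training_corpus(corpus)))  # only the human-written article survives
```

The point of the sketch is the shape of the check, not the heuristics themselves: in practice the quality signal would come from a trained model, and the provenance flag would come from whatever marking standard the industry converges on.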

