

Why AI Still Can't Be Trusted... And Neither Can We
Mar 21, 2025
This episode covers China's controversial new regulation mandating labels on all AI-generated content, and how it could shape the landscape of misinformation and public trust. The hosts explore unsettling experiments revealing how easily AI biases can be manipulated, and ask whether our reliance on AI is dulling our cognitive skills. Plus, the latest AI Dumpster Fire segment showcases how unreliable AI search engines can be. Together, the hosts unpack the ethical dilemmas of AI and the critical need for awareness in navigating this complex technology.
AI Snips
China's AI Labeling Mandate
- China will require explicit AI labels on all AI-generated content by September 2025.
- This includes text, images, videos, audio, and virtual scenes, impacting service providers and app stores.
Manipulating LLM Opinions
- Large language models (LLMs) can be manipulated by anyone who understands how their weights and biases respond to input.
- Data poisoning and strategically crafted text sequences can alter an LLM's perception of individuals or topics.
Kevin Roose vs. Chatbots
- Chatbots gave Kevin Roose negative reviews online because of his reporting on Microsoft's Sydney.
- He hired an AI reputation management firm to improve his image, demonstrating in practice that LLM outputs can be manipulated.