
AI Applied: Covering AI News, Interviews and Tools - ChatGPT, Midjourney, Gemini, OpenAI, Anthropic
AI Safety & Trust Amid Rising Scams
Oct 29, 2025
Dive into the pressing issues of AI safety as experts discuss how to build trust amid rising scams. Discover the risks posed by fake imagery and context errors that threaten AI reliability. Learn about Meta's new fraud detection tools for messaging apps. Explore how convincing scams, including spoofed emails, are becoming harder to spot, and hear practical tips for verifying suspicious messages. The discussion emphasizes the need for human oversight in AI applications to combat misinformation.
AI Snips
Real Scams Hit Close To Home
- Jaeden recounts an employee receiving a scam text impersonating him, illustrating the real-world risks.
- Conor recounts elderly parents being vulnerable to voice and video deepfakes that sound and look like family members.
Three Root Causes Of AI Distrust
- Conor breaks trust problems into three buckets: fake imagery, hallucinations, and lack of updated context.
- He argues humans must remain the quality control because AI is a statistical predictor that still makes mistakes.
Always Verify AI Output
- Be the quality control: verify and correct AI outputs rather than blaming the model for hallucinations.
- Treat AI outputs like a draft produced by a junior employee and always validate before using them.
