
Cyber Crime Junkies: Deepfake Attacks, Voice Cloning, and Why AI Social Engineering Works
Jan 10, 2026
Perry Carpenter, a strategic security leader and author of the book FAIK, dives into the alarming rise of deepfake technology. He reveals how AI can clone a voice in mere seconds, making fraud increasingly difficult to identify. The discussion covers the weaponization of synthetic media for scams, including romance and HR fraud. Carpenter argues that understanding an attacker's intent matters more than spotting visual artifacts, and offers practical advice: slow down decision-making and verify information through a second channel to counter these sophisticated threats.
AI Snips
Human Psychology Drives Deepfake Risk
- Deepfakes are exploding because of human psychology, not just technical novelty.
- Cheap, easy, and high-quality tools let attackers weaponize trust and urgency quickly.
Small Samples Can Clone Voices
- Voice cloning now works with very small samples, sometimes as little as four seconds.
- Longer samples (30s–minutes) reproduce cadence and disfluencies, improving believability.
Deepfake Is Synthetic Media By Design
- 'Deepfake' technically means synthetic media produced by machine learning.
- The term often carries a negative connotation because people focus on deceptive uses.
