Fooled by a Voice: How a Fake Jennifer Aniston Romance Lasted Five Months
Jul 10, 2025
Image Credit: Sangfor Technologies
Paul Davis, a 43-year-old man from Southampton, UK, was targeted in a deepfake romance scam that used AI-generated audio and video of Jennifer Aniston.
The scammers:
Sent convincing deepfake videos, voice notes, and messages that called him “my love” and professed romantic feelings
Produced a fake California driver’s license as “proof” of identity
Pressured him into buying £200 of non-refundable Apple gift cards to pay for an “Apple subscription”
Also created AI-generated videos and audio of several other public figures, including Elon Musk and Mark Zuckerberg
This scam continued for about five months. While the financial loss was relatively small, the emotional toll was substantial. For nearly half a year, Paul believed he was in a personal, private relationship with a celebrity he admired, and when the scam unraveled, it left behind real emotional harm. This incident is a powerful reminder: audio deepfakes don’t just steal money; they steal trust, connection, and emotional safety.
Why Audio Deepfakes Are Especially Dangerous
Nearly Impossible to Detect by Ear: Advances in AI make it easy to create hyper-realistic voice clips that sound completely natural to the human ear and are extremely difficult to distinguish from the real thing, especially over phone calls or voice messages. Automated tools fare better because they analyze acoustic features that listeners cannot perceive (see the sketch after this list).
Easy to Generate Without Consent: High-quality fake voices can be produced from just a few seconds of someone’s real voice, often without their knowledge or permission.
Public Figures Are Prime Targets: Anyone with abundant audio or video content online, such as public figures or business leaders, faces a higher risk of being impersonated through AI-generated audio.
Emotionally Convincing: Hearing the voice of someone you trust triggers an immediate emotional response. This makes people more likely to believe and act on fraudulent messages.
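For illustration, here is a minimal Python sketch of the kind of acoustic feature extraction that automated deepfake-audio detectors build on. It is not a working detector: real systems feed features like these into classifiers trained on labeled corpora such as the ASVspoof datasets, and the file name below is hypothetical.

```python
# Minimal sketch: extract acoustic features of the kind automated
# deepfake-audio detectors build on. NOT a working detector; real
# systems train classifiers on labeled corpora (e.g., ASVspoof).
import numpy as np
import librosa  # widely used Python audio-analysis library

def voice_features(path: str) -> np.ndarray:
    """Summarize a clip as a fixed-length acoustic feature vector."""
    y, sr = librosa.load(path, sr=16000)  # mono, resampled to 16 kHz
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # timbre over time
    flatness = librosa.feature.spectral_flatness(y=y)   # noise-likeness
    # Collapse the time axis into per-feature mean and variance.
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.var(axis=1),
        flatness.mean(axis=1), flatness.var(axis=1),
    ])

# Usage (hypothetical file): a trained classifier would compare a
# suspect clip's features against known-genuine recordings.
# vec = voice_features("voice_message.wav")
```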
How to Spot, Stop, and Avoid AI-Powered Deepfake Scams
Be skeptical of voice messages or calls from unknown contacts, even when they sound genuine.
Never send money, gift cards, or account details in response to emotionally charged voice calls or messages.
Report suspicious content to platform moderators.
Verify through a second channel: If you get an urgent or surprising request, confirm it using a trusted number or app you already know.
This case highlights how AI and social engineering are merging to create far more manipulative frauds. As generative AI becomes increasingly accessible, awareness is the first line of defense, and with Karna Red’s deepfake simulation capabilities, organizations can build that awareness before real attacks happen.
Follow us on LinkedIn to keep up with the latest developments in deepfake detection and digital security.