AI Security CEO Falls for Deepfake Job Scam - Then Exposes the Tactics
#Cybersecurity

Privacy Reporter

An AI security startup CEO who specializes in deepfake detection was targeted by North Korean scammers using AI-generated video during a job interview, revealing how sophisticated these attacks have become.

When Jason Rebholz, CEO of an AI security startup, posted a job opening on LinkedIn, he never expected to become the target of the very scam his company protects against. The irony is thick: a deepfake detection expert coming face to face with a deepfake during a job interview.

The Setup

Rebholz, who previously worked as an incident responder and CISO, has researched deepfakes for years and even uses them in his presentations. Yet when a candidate applied for a security researcher position at his firm, the red flags started appearing almost immediately.

The first warning sign was the candidate's profile picture - an anime character rather than a real person. But in the security community, Rebholz noted, people often use aliases or non-real photos out of privacy concerns, so he gave the candidate the benefit of the doubt.

More suspicious elements followed: the resume was hosted on Vercel, a cloud platform that integrates with AI coding tools. Rebholz's co-founder immediately flagged it as potentially AI-generated, since AI coding assistants such as Claude Code commonly deploy generated resumes and portfolios to Vercel. Even so, Rebholz rationalized it: the candidate was a developer and would naturally use coding tools.

The Interview That Broke Reality

The interview itself was where everything unraveled. The candidate joined with video off, then took 30 seconds to turn it on - during which Rebholz knew he was about to see a deepfake. When the camera finally activated, the video showed multiple telltale signs: a virtual background, blurry and plastic-looking facial features, a green screen reflection in the glasses, and facial features that appeared and disappeared as the person moved.

"At this point, I know I'm definitely talking to a deepfake. But again, I tried to justify it," Rebholz said. This internal conflict - what he calls "inner turmoil" - became the most surreal part of the experience. Despite being 95% certain it was a scam, he couldn't bring himself to confront the candidate because of the 1% chance he might be wrong and ruin someone's job prospects.

The candidate also repeated interview questions back before answering and gave responses that were almost verbatim quotes from things Rebholz had said or written online. "It was almost an out-of-body experience where I felt like I was talking to myself," he recalled.

The Bigger Picture

This incident highlights how North Korean IT worker scams have evolved from simple text-based deception to sophisticated AI-powered fraud. Amazon recently reported blocking over 1,800 suspected North Korean scammers from its workforce since April 2024, with a 27% quarter-over-quarter increase in DPRK-affiliated applications.

These scams cost American businesses tens of millions of dollars. When successful, the fraudsters can steal proprietary source code, extort employers with threats to leak corporate data, or simply collect paychecks while providing no real value.

Prevention Strategies

Rebholz emphasizes that companies of all sizes are vulnerable - not just tech giants. His key recommendations include:

Trust your instincts: "The biggest learning for me is: trust your gut. Moving forward, the rule I have is forget about the social awkwardness. It's more important to just challenge upfront and have that awkward conversation than it is to waste your time."

Technical safeguards: Mandate that cameras stay on during interviews, require virtual backgrounds to be turned off, and, if something seems off, ask candidates to pick up and show objects from their background. Modern deepfakes can defeat simple hand-waving tests, so a physical-object challenge is harder to fake.

Physical verification: Require new hires to work on-site for the first week, even for otherwise remote positions. Some scammers have gone so far as to hire stand-ins to show up on day one before the job goes remote.

Multi-layered approach: Combine high-tech solutions, such as deepfake detection software, with low-tech methods, such as directly challenging suspicious behavior (a minimal sketch of one automated check follows below).
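
For teams curious what the crudest automated layer might look like, here is a minimal Python sketch that flags unusually soft, low-detail face regions in video frames - a rough proxy for the blurry, "plastic" look Rebholz described. It uses OpenCV's bundled Haar face detector and the standard variance-of-Laplacian sharpness measure; the threshold value is a made-up placeholder you would tune on known-good video. This is an illustration of the layered-defense idea, not how any real deepfake detector (or Rebholz's product) works.

```python
# Illustrative sketch: flag low-detail face regions in interview video.
# Assumptions: opencv-python installed; SHARPNESS_THRESHOLD is a
# hypothetical value for demonstration, not a validated boundary.
import cv2

# Haar cascade shipped with OpenCV (opencv-python).
FACE_CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)
SHARPNESS_THRESHOLD = 60.0  # assumption: tune on known-good footage


def face_sharpness_flags(frame):
    """Return (x, y, w, h, sharpness) for each detected face whose
    detail level falls below the threshold."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flags = []
    for (x, y, w, h) in FACE_CASCADE.detectMultiScale(gray, 1.1, 5):
        face = gray[y:y + h, x:x + w]
        # Variance of the Laplacian is a common sharpness measure:
        # low variance means few edges, i.e. a soft or smoothed image.
        sharpness = cv2.Laplacian(face, cv2.CV_64F).var()
        if sharpness < SHARPNESS_THRESHOLD:
            flags.append((x, y, w, h, sharpness))
    return flags


if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # webcam or captured meeting feed
    ok, frame = cap.read()
    if ok:
        for *box, score in face_sharpness_flags(frame):
            print(f"low-detail face at {box}, sharpness={score:.1f}")
    cap.release()
```

A low score here means "look closer", not "this is a deepfake" - poor lighting or a cheap webcam can trip it too, which is exactly why the automated layer has to sit alongside the low-tech human checks above.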

The Human Element

The most striking aspect of Rebholz's experience is the psychological manipulation at play. Even someone with deep expertise in the field found himself doubting his own judgment, concerned about the social consequences of being wrong. This hesitation is exactly what scammers count on.

As deepfake technology becomes more accessible and convincing, the line between real and artificial continues to blur. For security professionals and hiring managers alike, the lesson is clear: when something feels off, it probably is - and it's better to have an awkward conversation than to become the next victim of AI-powered fraud.
