Sunday, March 22, 2026
Veriff: Deepfakes Distract from Real IDV Crisis in Hiring
The identity verification industry is in the grip of a dangerous fixation. Boardrooms, vendor pitches, and security conferences are consumed by a single word: deepfakes. And while synthetic video and AI-generated faces are real threats, Veriff CTO Hubert Behaghel offered a sobering counterpoint in a March 2026 PYMNTS interview that every CISO in a tech company should read twice. His message: the deepfake conversation is a distraction from a far more systemic identity crisis unfolding inside your hiring pipeline right now.
For security teams battling AI hiring fraud and workforce IDV challenges in 2026, that reframing is not just intellectually interesting — it is operationally urgent.
The Deepfake Distraction Is Real — and Costly
Behaghel's warning is pointed: "Deepfake is only attacking one layer." The computer vision layer, specifically. And while that layer matters, obsessing over it allows fraudsters to walk through the front door using methods that have nothing to do with synthetic video.
Think about how identity fraud actually unfolds in remote hiring:
- A candidate submits a genuine government ID — belonging to a real person who sold their credentials.
- They pass a one-time liveness check at onboarding.
- They receive access credentials, which they promptly share with a third party who performs the actual work.
- Months later, that individual has deep access to codebases, internal systems, and client data.
No deepfake was needed. The attack succeeded because identity verification was treated as a checkbox at the door rather than a continuous security posture. This is the identity verification crisis that Behaghel is pointing to — and it is far more prevalent than boardroom deepfake anxiety suggests.
Gartner has projected that 25% of job applicants in 2026 will use AI-assisted tools to misrepresent themselves during hiring. That statistic alone should reorient any security team's threat model. The problem is not just sophisticated synthetic faces — it is weak passwords, credential reuse, siloed HR and IT systems, and identity chains that decay the moment onboarding ends.
Identity Is Dynamic — So Your Verification Must Be Too
One of Behaghel's most actionable insights is deceptively simple: "Identity is dynamic." A verified identity at 9 AM on the day of hire is not the same verified identity operating your production environment six months later.
This is where most enterprise identity programs fail. They invest heavily in point-in-time verification — a document scan, a selfie match, a background check — and then treat the identity as settled. But fraudsters understand that the real opportunity lies in the gap between onboarding and ongoing access.
The layered signals Behaghel recommends — device intelligence, network data, biometrics, and human review — are not just about catching deepfakes at the front door. They are about building a continuous signal chain that tracks whether the verified human and the active account remain congruent over time. When that congruence breaks, your system should know.
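To make the idea concrete, here is a minimal sketch of how layered signals might be blended into a single congruence decision. The signal names, weights, and threshold are assumptions invented for illustration; they are not Veriff's scoring model or any vendor's production logic.

```python
from dataclasses import dataclass

# Illustrative weights and threshold -- assumptions for this sketch,
# not any vendor's actual scoring model.
SIGNAL_WEIGHTS = {
    "biometric_match": 0.4,     # fresh selfie vs. onboarding biometric template
    "device_known": 0.25,       # device fingerprint seen at onboarding
    "network_consistent": 0.2,  # network/geo consistent with history
    "behavior_baseline": 0.15,  # usage patterns within established baseline
}
REVIEW_THRESHOLD = 0.7

@dataclass
class SignalReading:
    name: str
    score: float  # 0.0 (contradicts identity) .. 1.0 (confirms identity)

def congruence_score(readings: list[SignalReading]) -> float:
    """Weighted blend of independent signals; no single check decides."""
    return sum(SIGNAL_WEIGHTS[r.name] * r.score for r in readings)

def evaluate_session(readings: list[SignalReading]) -> str:
    score = congruence_score(readings)
    if score >= REVIEW_THRESHOLD:
        return "allow"
    # A broken congruence chain routes to human review rather than
    # silently passing or silently failing.
    return "escalate_to_human_review"

if __name__ == "__main__":
    session = [
        SignalReading("biometric_match", 0.9),
        SignalReading("device_known", 0.0),       # new, unseen device
        SignalReading("network_consistent", 0.3),
        SignalReading("behavior_baseline", 0.8),
    ]
    print(evaluate_session(session))  # -> escalate_to_human_review
```

The design point is that a strong biometric match alone does not carry the decision: a new device and an inconsistent network drag the blended score below the threshold, and a human reviews the break in congruence.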
Where Identity Decay Actually Happens
Security teams focused on the workforce IDV 2026 landscape are identifying several high-risk inflection points:
- Password resets and account recovery flows — Among the most exploited vectors. Once a fraudster is inside, they use recovery mechanisms to lock out the legitimate identity and entrench their own access.
- Privilege escalation requests — Unverified or poorly verified employees requesting elevated permissions represent a significant internal threat.
- Offboarding failures — Siloed HR and IT systems mean departed employees — or their credentials — remain active far longer than intended.
- AI agent interactions — As agentic AI becomes embedded in enterprise workflows, these agents often inherit human credentials without the human verification layer attached to them.
Each of these moments is a crack in the identity chain. And each is exploited not through elaborate deepfake attacks, but through mundane failures in identity lifecycle management, as the sketch below illustrates.
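One way to operationalize this is to treat every inflection point as a policy-gated event rather than a routine workflow step. The event names and required actions below are illustrative assumptions, not a vendor specification.

```python
from enum import Enum, auto

class LifecycleEvent(Enum):
    PASSWORD_RESET = auto()
    ACCOUNT_RECOVERY = auto()
    PRIVILEGE_ESCALATION = auto()
    OFFBOARDING = auto()
    AGENT_CREDENTIAL_ISSUED = auto()

# Policy table mapping each event to the verification action it demands.
# The actions are assumptions for this sketch, not a product spec.
EVENT_POLICY = {
    LifecycleEvent.PASSWORD_RESET: "biometric_reverify",
    LifecycleEvent.ACCOUNT_RECOVERY: "biometric_reverify_plus_human_review",
    LifecycleEvent.PRIVILEGE_ESCALATION: "biometric_reverify",
    LifecycleEvent.OFFBOARDING: "revoke_all_credentials",
    LifecycleEvent.AGENT_CREDENTIAL_ISSUED: "bind_to_verified_human",
}

def handle_event(event: LifecycleEvent, employee_id: str) -> str:
    action = EVENT_POLICY[event]
    # In a real system this would enqueue the action and block the
    # requesting flow until the re-verification completes.
    return f"{employee_id}: {event.name} -> {action}"
```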
The AI Agent Problem Is Coming Faster Than You Think
If 2025 was the year of AI-assisted fraud, 2026 is shaping up to be the year of AI agent fraud. As enterprises deploy agentic AI tools (systems that can autonomously execute tasks, access APIs, make decisions, and interact with internal systems), the question of "who authorized this agent?" becomes critical.
The zero-trust principle of "never trust, always verify" was built for humans authenticating into systems. It now needs to extend to non-human identities: AI agents acting on behalf of verified employees. If an agent is operating under a compromised human credential, it can do damage at machine speed — accessing data, moving files, executing transactions — before any human analyst notices.
For security teams, this creates a non-negotiable requirement: the verified identity must be cryptographically linked not just to a person at onboarding, but to every account, agent, and access privilege that person's identity authorizes. The chain of trust must be continuous and auditable.
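A minimal sketch of what that binding could look like, using a symmetric HMAC for brevity. A production system would use asymmetric keys held in a KMS or HSM and a standard token format such as JWT; the claim names here are assumptions for illustration.

```python
import hashlib
import hmac
import json
import time

def issue_agent_credential(verified_identity_id: str,
                           agent_id: str,
                           scopes: list[str],
                           signing_key: bytes,
                           ttl_seconds: int = 3600) -> dict:
    """Bind an AI agent credential to a verified human identity.

    The signing key is assumed to have been established when the human's
    identity was verified at onboarding; key management is elided here.
    """
    claims = {
        "sub": agent_id,
        "acts_for": verified_identity_id,  # the verified human anchor
        "scopes": scopes,
        "exp": int(time.time()) + ttl_seconds,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify_agent_credential(cred: dict, signing_key: bytes) -> bool:
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, cred["sig"]):
        return False  # tampered: the human-to-agent binding is broken
    return cred["claims"]["exp"] > time.time()
```

Short expiries matter here: a time-boxed credential forces the agent back through the verified human anchor at machine-relevant intervals instead of inheriting access indefinitely.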
Zero-Trust Hiring: What It Actually Requires in Practice
The phrase "zero-trust" has become as overused as "deepfake," but in the context of remote hiring fraud, it has a precise meaning. Zero-trust hiring means:
- No identity is presumed valid after initial verification. Continuous signals must confirm congruence throughout the employee lifecycle.
- Access is tied to verified identity, not just credentials. A password is not a person. Biometric re-verification, device binding, and behavioral baselines are required.
- Recovery flows are treated as high-risk moments. Every password reset, MFA bypass, or account recovery event should trigger re-verification against the original onboarding identity.
- AI agents are identity-scoped. Any automated system operating under a human's authorization must be bound to that human's verified identity with the same rigor as a direct login.
This is not theoretical security architecture. These are the concrete controls that block the fraud patterns Behaghel is describing — synthetic fraud in recovery flows, credential-sharing schemes in remote work, and impersonation attacks that exploit identity lifecycle gaps.
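As one example, a recovery flow built on these principles refuses to rotate credentials until the requester re-verifies against the onboarding identity. In this sketch the biometric capture, matching, and ticketing functions are hypothetical stand-ins, injected as callables so the gate logic stays self-contained.

```python
from typing import Callable

MATCH_THRESHOLD = 0.85  # illustrative threshold, not a vendor default

def handle_password_reset(
    employee_id: str,
    capture_live_selfie: Callable[[str], bytes],
    match_onboarding_template: Callable[[str, bytes], float],
    open_security_case: Callable[[str, str], None],
    rotate_credentials: Callable[[str], None],
) -> bool:
    """Gate a reset behind re-verification against the onboarding identity.

    The callables are injection points for a real biometric SDK and
    ticketing/IAM systems; all of them are hypothetical stand-ins here.
    """
    selfie = capture_live_selfie(employee_id)
    score = match_onboarding_template(employee_id, selfie)
    if score < MATCH_THRESHOLD:
        open_security_case(employee_id, "recovery_reverify_failed")
        return False  # the reset is blocked, not merely logged
    rotate_credentials(employee_id)
    return True
```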
The HR Platform Vulnerability Nobody Is Talking About
HR platforms sit at an uncomfortable intersection: they hold sensitive identity data, they are used by non-security-trained personnel, and they are deeply integrated with payroll, access provisioning, and benefits systems. Yet their identity assurance standards often lag significantly behind financial services or healthcare.
AI hiring fraud increasingly targets these platforms precisely because the stakes are high and the verification standards are low. A fraudulent hire who successfully passes a one-time document check at onboarding can route salary payments, access internal tools, and exfiltrate data — all while the HR system registers them as a fully verified employee.
The fix is not more sophisticated deepfake detection at onboarding. It is linking verified identity to the downstream systems that matter: payroll, provisioning, data access, and communication platforms. When a change is made in any of these systems, the verified identity should be re-confirmed.
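In practice that linkage can be event-driven: each downstream system emits change events, and high-risk changes are held until the verified identity re-confirms. The (system, change) pairs below are illustrative, not an exhaustive policy.

```python
from typing import Callable

# Change types that must hold until the verified identity re-confirms.
# These pairs are illustrative assumptions, not a complete policy.
HIGH_RISK_CHANGES = {
    ("payroll", "bank_account_updated"),
    ("provisioning", "role_granted"),
    ("hr", "contact_email_changed"),
}

def on_downstream_change(
    system: str,
    change: str,
    employee_id: str,
    request_reverification: Callable[[str, str], None],
) -> str:
    """Webhook-style handler; the callable is a hypothetical hook into
    the IDV platform's re-verification flow."""
    if (system, change) in HIGH_RISK_CHANGES:
        request_reverification(employee_id, f"{system}:{change}")
        return "held_pending_reverification"
    return "applied_and_logged"
```

Note what this blocks in the salary-rerouting scenario above: the fraudulent hire can log in, but the bank account change stalls until someone matching the onboarding identity re-confirms it.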
How IDChecker AI Addresses the Full Identity Lifecycle
IDChecker AI was built on the premise that Behaghel articulates: a single verification event is not identity assurance. The platform delivers zero-trust hiring through continuous verification that extends well beyond the onboarding document check.
Key capabilities that directly address the systemic failures identified above:
- Cryptographic identity binding — Verified identities are cryptographically linked to accounts, access privileges, and recovery flows, preventing the credential-sharing and impersonation attacks that plague remote hiring.
- Re-verification triggers on high-risk events — Account recovery, privilege escalation, and unusual access patterns automatically trigger re-verification against the original verified identity.
- Multi-signal verification — Document verification, biometric matching, device intelligence, and network signals are combined into a layered assurance model — not a single point-of-failure computer vision check.
- Lifecycle-aware identity management — The platform tracks identity congruence from hire to offboard, closing the gap where fraudsters currently thrive.
For tech companies operating distributed, remote-first hiring pipelines, this means a new hire's verified identity does not expire the moment the onboarding call ends. It remains an active, continuously confirmed anchor throughout their tenure.
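To ground the idea, here is a sketch of what such a lifecycle-long identity record might look like as a data structure. The field names and status values are illustrative assumptions, not IDChecker AI's actual schema.

```python
import time
from dataclasses import dataclass, field

@dataclass
class IdentityAnchor:
    """Sketch of a lifecycle-long identity record.

    Field names and statuses are illustrative, not a product schema.
    """
    identity_id: str
    verified_at: float                       # onboarding verification timestamp
    bound_accounts: list[str] = field(default_factory=list)
    bound_agents: list[str] = field(default_factory=list)
    last_congruence_check: float = 0.0
    status: str = "active"                   # active | under_review | offboarded

    def record_check(self, passed: bool) -> None:
        """Update the anchor after each continuous-verification check."""
        self.last_congruence_check = time.time()
        if not passed:
            self.status = "under_review"     # access decisions key off this
```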
Conclusion: Stop Fighting the Last War
The deepfake conversation is not wrong — it is incomplete. Yes, synthetic media is evolving rapidly. Yes, liveness checks must improve. But if your security posture in 2026 is centered on detecting AI-generated faces while credential reuse, weak recovery flows, and siloed identity systems go unaddressed, you are fighting the last war.
Behaghel's insight deserves to become a strategic mandate for every CISO at a tech company managing remote hiring risk: build for the full identity lifecycle, not just the onboarding moment. Implement layered signals. Treat recovery flows as attack surfaces. Bind verified identities to every downstream system that matters. And as AI agents become workforce participants, extend the same cryptographic identity assurance to every non-human actor operating under a human's authorization.
The identity verification crisis in hiring is systemic, not cinematic. Solving it requires continuous zero-trust IDV — not better deepfake filters.