AI Deepfakes: Coming to Your Next Job Interview?

by John Jenkins

February 5, 2026

Here’s a scary scenario laid out in the intro to this Jones Walker blog:

The scenario sounds like science fiction: a candidate aces a video interview, clears a background check, and starts work only to deploy malware on day one. But it’s already happening. Gartner projects that by 2028, one in four candidate profiles worldwide will be fake. The FBI has documented over 300 US companies that unknowingly hired North Korean operatives using stolen identities and AI-generated personas. And the tools enabling this fraud are getting cheaper and more convincing by the month.

For employers, the question is no longer whether synthetic identity fraud will affect hiring. It’s whether your current verification processes can detect it, and what liability you face when they don’t.

The blog discusses the risks employers face from negligent hiring claims, as well as the risk that the verification tools they use to guard against those claims may themselves give rise to disparate impact discrimination claims. It then draws lessons from the FTC's 2023 enforcement action against Rite Aid to recommend a list of controls that companies should implement to reduce these risks: