AI Job Screening Tool Sued By Applicants
January 28, 2026
The use of AI in employment decisions continues to be risky territory. We’ve written previously about algorithmic discrimination lawsuits, but a new kind of litigation is emerging in California. Two job applicants have sued AI company Eightfold, alleging that the assessments generated by its screening tool constitute “consumer reports” subject to reporting laws the company failed to follow. A recent Fisher Phillips memo breaks down the core arguments of the lawsuit:
“According to the complaint, Eightfold’s technology allegedly:
- Gathered information from third-party sources including LinkedIn, GitHub, Stack Overflow, and other public databases
- Analyzed data from “more than 1.5 billion global data points” including profiles of over 1 billion workers
- Created inferences about applicants’ “preferences, characteristics, predispositions, behavior, attitudes, intelligence, abilities, and aptitudes”
- Ranked candidates on a 0-5 scale based on their predicted “likelihood of success” in the role
- Provided these assessments to employers who used them to filter candidates before any human review
The core legal theory is that these assessments constitute ‘consumer reports’ under the federal Fair Credit Reporting Act (FCRA) and California’s Investigative Consumer Reporting Agencies Act (ICRAA), and that Eightfold failed to comply with the longstanding requirements these laws impose on companies that provide such reports. Eightfold responded to the allegations in a media statement saying that they ‘do not scrape social media and the like,’ so we’ll see more information as the case unfolds.”
While we don’t know how this lawsuit will be resolved, there are two major takeaways. First, it highlights the continued risk of using AI in HR functions. Companies using tools like these should be extremely careful and understand exactly what an AI tool does and how it does it. Second, it underlines a point we’ve made on this blog before: unlawful acts are still unlawful even when they are automated. People often erroneously assume an AI practice is lawful simply because it isn’t expressly banned, but the law applies regardless of whether an AI is involved. Don’t evaluate your AI compliance only against AI-specific laws; consider the broader legal landscape. If an action is unlawful for a human, chances are it’s equally unlawful for an AI.