Using AI in HR May Expose You to Legal Risk
May 12, 2025
I’ve written before about the broad legal risks that companies can face from using AI. That risk is particularly heightened when AI is used to perform Human Resources (HR) functions. HR often has the final say in hiring, firing, promotion, and compensation, and those decisions are especially exposed to litigation under federal and state antidiscrimination laws. In addition, new state laws are coming into effect that limit how employers can use AI in HR. A recent memo from McDermott examines some of these risks:
“Using these burgeoning technologies without understanding how their algorithms work and the data they rely on to produce certain outputs can expose employers to potential class actions based on privacy, AI regulations, and employment claims – specifically, alleged disparate impact discrimination and wage and hour infractions.”
Algorithmic discrimination occurs when AI systems are trained on biased or discriminatory datasets. Often, the developers of these systems are unaware of the latent bias, but the disparate impacts that result can still be legally actionable. Companies may unknowingly use AI systems that violate antidiscrimination laws, creating significant legal and reputational risk.

If you’re using AI in your HR decision-making processes, it’s vital to understand the datasets your AI is trained on and to test its outputs for discriminatory results. This can be especially difficult if you’re using a third-party AI software provider, because the training data is not within your direct control. That underscores the importance of a relationship with your software vendors that welcomes questions and encourages transparency. If you’re relying on systems that function in ways you don’t understand, trained on data you cannot access, you may be running a major risk.
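To make “testing for discriminatory results” concrete, here is a minimal sketch of one common screening heuristic: the EEOC’s four-fifths rule, under which a group’s selection rate falling below 80% of the highest group’s rate is a red flag for potential disparate impact. The data and helper names below are hypothetical, and a real audit would involve more rigorous statistical testing than this simple ratio check.

```python
# Minimal sketch of a disparate-impact screen using the four-fifths rule.
# A selection rate below 80% of the highest group's rate is flagged for review.
# All data here is hypothetical.

from collections import defaultdict

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times
    the highest-rate group's selection rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {
        group: {
            "rate": rate,
            "ratio": rate / top_rate,
            "flagged": rate / top_rate < threshold,
        }
        for group, rate in rates.items()
    }

# Hypothetical AI screening results: (applicant group, passed screen?)
results = (
    [("A", True)] * 60 + [("A", False)] * 40   # group A: 60% pass rate
    + [("B", True)] * 35 + [("B", False)] * 65  # group B: 35% pass rate
)

for group, stats in four_fifths_check(results).items():
    print(f"Group {group}: rate={stats['rate']:.2f}, "
          f"ratio={stats['ratio']:.2f}, flagged={stats['flagged']}")
```

On this hypothetical data, group B’s pass rate (35%) is only about 58% of group A’s (60%), well below the 80% threshold, so the screen would flag it for closer review. Running a check like this periodically on your AI tool’s actual outputs is one practical way to catch disparate impact, even when you cannot inspect the vendor’s training data directly.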