Tips for Managing AI in the Hiring Process
February 10, 2026
We’ve written a lot about the risks of using AI in employment decisions. Algorithmic discrimination, data privacy, and AI errors are all risks on the employer side. Then there’s the applicant’s use of AI to worry about: companies must account for things like deepfakes and AI-generated references. Whether you’re using AI tools in the hiring process or setting policy for AI use among applicants, a recent Fisher Phillips memo offers five steps for managing AI in the hiring process:
“1. Develop Comprehensive AI Policies. While many organizations rely on a single, high-level AI policy, a more effective governance framework typically includes multiple, complementary policies tailored to different aspects of AI use. At a minimum, you should establish a comprehensive program to address three areas: organizational AI governance, ethical use of AI, and tool-specific acceptable use policies. If you are not sure where to begin, our AI Governance 101 Guide provides a helpful starting point and can be found here.
2. Ensure Ongoing Vendor Oversight. You should treat AI interview vendors as an extension of the hiring process rather than as standalone technology providers. Managing risk requires clear contractual guardrails, transparency into how tools function, and ongoing monitoring to ensure compliance and fairness. For guidance on key considerations during your vendor selection process, review our AI Vendor Resource here.
3. Adopt Measures to Identify and Prevent Deepfakes. Adopting identity verification measures for candidates, particularly in asynchronous interviews, and establishing review protocols to flag irregular or suspicious interview behavior can help deter the use of deepfakes. For video interviews in particular, you should implement tools that support human review and train employees to recognize indicators of manipulated or synthetic content. For guidance about practical steps to take, review our Hiring with Confidence in the AI Era Insight here.
4. Audit AI Interview Tools and Systems. You should regularly audit AI interview tools to assess whether they rely on signals such as speech patterns, accents, tone, facial expressions, or eye contact, and limit or disable features that may disadvantage candidates with disabilities, neurodivergent traits, or culturally distinct communication styles. You should also ensure that alternative interview formats are available to help prevent qualified candidates from being screened out based on how AI systems interpret communication rather than job-related qualifications. FP has partnered with analytics firm BLDS and AI fairness software provider SolasAI to deliver an integrated suite of bias audit services – learn more here.
5. Establish Clear and Balanced Policies on Applicant AI Use. Your approach to applicant use of AI during interviews can present reputational risk if perceived as inconsistent, overly restrictive, or misaligned with the employer’s own use of AI tools. Prohibiting applicant AI use while deploying AI interviewers may be viewed as a double standard, potentially affecting employer brand, candidate trust, and overall recruitment outcomes. Accordingly, you should address applicant use of AI during interviews through transparent, balanced policies rather than blanket prohibitions. This includes clearly communicating what types of AI use are acceptable, such as accessibility tools or interview preparation support, and what uses are not permitted, such as real-time response generation intended to misrepresent a candidate’s abilities.”
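To make step 4’s audit recommendation a bit more concrete: one common quantitative starting point for a bias audit is comparing selection rates across demographic groups, as in the EEOC’s four-fifths rule of thumb. The sketch below is illustrative only; the counts, group labels, and 0.80 threshold are hypothetical, and this is not the methodology FP, BLDS, or SolasAI uses.

```python
# Minimal sketch of an adverse-impact ("four-fifths rule") check on the
# outcomes of an AI interview or screening tool. All data here are
# hypothetical; a real bias audit (e.g., under NYC Local Law 144)
# involves far more rigor and legal review.

# Hypothetical counts per demographic group from one screening tool:
# group: (candidates_advanced, candidates_screened)
outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
    "group_c": (44, 110),
}

# Selection rate = share of each group's candidates the tool advanced.
rates = {g: adv / total for g, (adv, total) in outcomes.items()}

# Impact ratio = each group's rate divided by the highest group's rate.
# Under the four-fifths rule of thumb, a ratio below 0.80 is a common
# flag for potential adverse impact worth investigating.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} [{flag}]")
```

A low impact ratio doesn’t prove discrimination, and a passing one doesn’t rule it out; it simply tells you where to dig deeper, ideally with counsel and a qualified auditor involved.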
AI is changing how candidates apply and how companies review those candidates. Employment decisions are critical to a company’s success, and employers must take care to recruit and retain talent effectively. These tips can help you mitigate the risks of AI while ensuring compliance with relevant employment laws.