Chatbots Provide New Compliance Policy Opportunities

by Zachary Barlow

September 30, 2025

AI is transforming legal and compliance work. Companies are experimenting with AI in ways that give employees valuable resources while reducing the burden on legal and compliance teams. A recent Radical Compliance article discusses a case study in which a company built and trained an AI chatbot on its compliance policies, then allowed employees to direct compliance-related questions to it. The chatbot fielded a high volume of those questions, but the article poses an interesting one of its own:

“I also wonder whether employees engage with the chatbot simply so they can get a definitive answer about what to do. That is, perhaps they’re using the chatbot to cover their rear ends. They ask it a question, it gives them an answer, and now they have documentation — so that if something goes wrong in the future anyway, they can print out the original answer and say, ‘See? This is what the chatbot told me! You can’t blame me, I was just doing what the AI said!’”

The possibility that employees will use chatbots like this one to deflect responsibility is a risk companies cannot ignore. We’ve written before about the importance of accountability in AI applications. A machine cannot be held accountable. When designing programs like this one, companies should set clear expectations and assign ownership of responsibility in case of failure. Whether that responsibility lies with the employee user or a “human in the loop” depends largely on the application and the specific circumstances. AI has many functions, but scapegoat should not be one of them.