Communicating Your AI Policy Internally
August 18, 2025
Internal policies are critical tools for AI risk management. We’ve talked about the importance of policies and what should go in them. But many AI policies share a critical flaw: they are never effectively communicated. To explain, let me borrow an age-old thought experiment: if a tree falls in the forest and no one is around to hear it, does it make a sound? Or, for our purposes: if your organization has an AI policy and no one bothers to read it, does it make a difference? Your policy is only as effective as your workforce’s ability to understand and follow it. A recent memo from Korn Ferry highlights an uncomfortable truth about AI policies.
“The good news is that more and more employees are using AI. The bad news is that most of them are probably using it in a way that violates company policy—meaning that they’re potentially exposing the firm to security breaches, legal liability, and other risks. Nearly 60% of US workers surveyed admit to using AI in unapproved ways, though many blame the firm for it, with half saying that their company’s AI policies are unclear.”
AI policies are useless in a vacuum. Organizations only reap the benefits of a good policy if it is effectively communicated and followed. One of the biggest barriers is the policy language itself. We lawyers love using legalese to reach maximum precision in our policies. That may work when communicating with other lawyers, but jargon can confuse everyone else. AI policies should be precise, yet approachable.

Policy adoption and implementation also require more than an email or a simple electronic acknowledgment sent to employees. You may need to engage your workforce directly, using tools like training sessions to teach employees about acceptable AI use. Employees should be encouraged to ask questions and seek clarity when they are unsure whether an AI application is in line with company policy. AI is still new, and many people don’t understand it. Your workforce may need more direct engagement to understand what is and isn’t acceptable AI use.