Want to Get the Most Out of AI? Don’t Stop at Drafting a Policy
April 22, 2026
AI policies are a company’s first line of defense against AI-related risks: they set the ground rules for how employees and the organization use emerging technologies. But a list of prohibitions alone won’t help you get the most out of AI; employee training is also necessary. A good program teaches employees specifics and gives concrete examples of how AI can be incorporated into their workflows. In a recent episode of Ropes & Gray’s AI at Work podcast, partner Jennifer Cormier offered practical advice on how to conduct effective AI training beyond the policy framework:
“You can have a really well-drafted policy, but that’s not going to be enough on its own. Employees need to understand not just what the rules are, but why they exist and how to apply them in practice. Effective training should include concrete examples and scenarios that employees are likely to encounter, not just abstract principles. I think a lot of times when we are talking about AI, it can be very high level or in vague terms, but laying out specific types of tools that are permitted or off limits, specific use cases that may be permitted or off limits will be really helpful.”
The point is well taken. We’ve discussed before the need to communicate policies effectively, and that communication must extend beyond the policy language itself: training employees is critical. Training should cover not only the uses your policy prohibits but also concrete examples of the uses it encourages. As lawyers, we’re often risk-averse, and we put risk management first. That isn’t wrong, but it’s only half the picture; education on opportunities must follow education on risks. Companies are spending big on AI tools, and that money is wasted if the tools go unused. Communicating effective, acceptable use cases empowers employees to incorporate AI tools into their work and maximize the return on that investment.