AI Risk Management: Key Considerations When Developing an AI Policy

by John Jenkins

January 8, 2026

Companies implementing AI tools need a targeted AI policy with clear but flexible guardrails, one that can be incorporated into firm-wide training programs while still allowing them to pivot quickly in response to rapid technological change. This Arent Fox Schiff blog offers insights into the key considerations in developing such a policy. This excerpt addresses the role a good policy plays in ensuring appropriate adoption of AI tools:

An effective AI policy should set your organization’s approach, name approved tools and explain how to access them, enumerate prohibited uses, and specify when additional approvals are required. Because the market changes quickly, treat the policy as a living document and align it with training and communications that drive adoption. Mandatory training, even if brief and online, should remind personnel that they are ultimately responsible for their AI-generated outputs and decisions, even if that means reading every word, confirming accuracy, and complying with document retention limitations.

Forcing compliance can be tricky. For example, blocking public AI tools is rarely effective and can backfire by depriving teams of useful research resources. Instead, provide a capable enterprise option, reinforce it with just-in-time warnings when someone visits a public tool, and consider data loss prevention tools to flag risky data exfiltration.

Companies may consider piloting two or three tools on a time-boxed basis, measuring real usage and outcomes, and then picking one to scale. Long procurement cycles struggle in a market where models and features change every few months.

The blog also addresses why enterprise tools are preferable to free offerings, recordkeeping and retention issues, and how to handle sensitive workflows. It includes an implementation checklist highlighting key considerations for each stage of the process.