What Should Be in Your AI Use Policy?
April 9, 2025
AI use policies are becoming increasingly common, but what should be in those policies? A recent CSOonline article has some thoughts on that topic. This excerpt discusses the need for the policy to include clear responsible use guidelines:
Can employees use public AI chatbots or only secure, company-approved tools? Can business units create and deploy their own AI agents? Can HR switch on and use the new AI-powered features in their HR software? Can sales and marketing use AI-generated images? Should humans review all AI output, or are reviews only necessary for high-risk use cases?
These are the kinds of questions that go into the responsible use section of a company’s AI policy and depend on an organization’s specific needs.
For example, at Principal Financial, code generated by AI needs review, says Kay. “We’re not just unleashing code into the wild. We will have a human in the middle.” Similarly, if the firm builds an AI tool to serve customer-facing employees, there will be a human checking the output, she says.
Taking a risk-based approach to AI is a good strategy, says Rohan Sen, data risk and privacy principal at PwC. “You don’t want to overly restrict the low-risk stuff,” he says. “If you’re using Copilot to transcribe an interview, that’s relatively low risk. But if you’re using AI to make a loan decision or decide what an insurance rate should be, that has more consequences and you need to provide more human review.”
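One way a policy team might operationalize that risk-based idea is to encode use-case tiers and review requirements as data that internal tooling can check against. The short Python sketch below is purely illustrative: the use-case names, risk tiers, and the rule that only low-risk uses skip human review are assumptions for the example, not details taken from the article or from PwC.

```python
# Hypothetical sketch: encoding a risk-based AI review policy as data.
# All names, tiers, and thresholds here are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    risk: str  # "low", "medium", or "high"


# Illustrative classification loosely following the quote above:
# transcription is low risk; loan and insurance decisions are high risk.
USE_CASES = [
    UseCase("meeting_transcription", "low"),
    UseCase("marketing_image_generation", "medium"),
    UseCase("loan_decision", "high"),
    UseCase("insurance_rate_setting", "high"),
]


def requires_human_review(use_case: UseCase) -> bool:
    """Require review for anything above low risk (a policy choice made for this sketch)."""
    return use_case.risk != "low"


if __name__ == "__main__":
    for uc in USE_CASES:
        verdict = "human review required" if requires_human_review(uc) else "no review required"
        print(f"{uc.name}: {uc.risk} risk -> {verdict}")
```

The point of encoding the policy this way is simply that the review rule becomes explicit and auditable rather than living only in a document; an organization's actual tiers and thresholds would come from its own risk assessment.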
The article also discusses, among other topics, the importance of a clear definition of the term “AI” in the policy, the need to solicit input from key stakeholders in developing the policy, the importance of aligning the policy with the organization’s core principles and with regulatory requirements, and the need to establish a clear governance structure.