Should You Disclose AI Use?
February 24, 2026
We’ve written before about the importance of maintaining robust AI policies. One policy worth considering is the disclosure of AI use, both internally and externally. A recent Debevoise & Plimpton memo makes the case for these disclosures:
“Based on our experience representing over 100 companies with their AI adoption over the last year, we think companies should consider adding an AI disclosure requirement like this one to their AI policies:
To the extent you share any work product, either internally or externally, and (1) a substantial portion of the document was generated by GenAI, (2) the work product may be relied on in making decisions, and (3) mistakes or omissions in the work product could impact those decisions, then you must identify which parts were generated using GenAI.”
The memo argues that disclosures can increase visibility and strengthen AI governance. Flagging work as AI-generated also allows for thorough vetting by human colleagues and managers, and makes it easier to attribute thoughts and opinions to either human authors or AI systems. Some may hesitate to adopt disclosure practices out of fear that doing so undermines their brand or expertise; if that is the concern, the underlying use of AI may itself need to be questioned. While disclosure policies may not be the right fit for every company, there are compelling benefits to adopting them.