Ensuring AI Compliance Through Contract Provisions

by Zachary Barlow

October 16, 2025

With AI regulations popping up state by state, companies are expected to comply with multiple regulatory schemes at once. Whether you’re a user or a developer of AI, new obligations regulating your activity are coming into force. Managing AI use internally might be relatively straightforward. But how do you ensure that your vendors or customers are not exposing you to AI regulatory risk? A recent memo from Mayer Brown addresses this question by laying out best practices for AI compliance provisions in contracts:

“Putting these requirements together when contracting for AI systems, a broad mutual compliance with laws obligation may suffice for low-risk use cases. However, particularly for high-risk AI use cases, it is important to include clear contractual requirements regarding developer and deployer obligations in the AI value chain. In particular, a deployer may want the developer to warrant that it developed the AI system in a responsible manner, including through appropriate data governance, risk mitigation measures (e.g., NIST AI RMF or ISO/IEC 42001 standard), documentation and instructions for use, transparency notices (e.g., latent and manifest disclosures in AI-generated content), cybersecurity, and algorithmic discrimination, accuracy, and robustness testing.”

AI regulation works by targeting two groups: AI developers and AI deployers. A developer creates and maintains an AI model; a deployer puts that model to work in practical applications. The memo notes that contract provisions flow both ways. Deployers need contractual provisions ensuring that the models they use are compliant, while developers need to bind deployers to use their models lawfully. Provisions like these in AI vendor contracts can reduce risk, establish clear expectations, and shift liability, making them a valuable tool for both risk management and regulatory compliance.