General Services Administration Issues Draft AI Use Clause

by Zachary Barlow

March 25, 2026

Amid tensions between the U.S. government and AI company Anthropic, the General Services Administration issued a draft AI use clause. If finalized, this clause would not only govern the federal government's procurement of AI but also affect how contractors working with the government are allowed to use AI. A recent Thompson Hine memo discusses some of the compliance obligations for contractors and AI service providers:

  • The contractor must disclose all AI systems used in the performance of the contract.
  • The contractor and service provider must use only “American AI systems” (defined as AI systems developed and produced in the United States) and are prohibited from using any foreign AI system (including any AI components manufactured, developed or controlled by non-U.S. entities).
  • The contractor must provide a means for the government to implement human oversight, intervention and traceability of the AI system, and the AI system must include summarized intermediate processing actions and decision points, model routing decisions with accompanying rationale, and the data retrieval methods employed.
  • The contractor and service provider must notify the government of applicable security incidents within 72 hours and provide daily status updates.
  • At the government’s request, the contractor must provide existing commercial documentation or disclosures that are sufficient to demonstrate the AI system’s compliance with specified requirements, including AI system decision-making processes, logic and operational parameters, the NIST AI Risk Management Framework and “Unbiased AI Principles.”

These requirements are likely to have cascading effects across the AI industry. AI companies will need to offer compliant models if they want to do business with the government or with government contractors. Given that government spending and contracting account for roughly 30% of U.S. GDP, not adhering to these rules leaves a lot of money on the table. Unfortunately, the rules are ill-defined. For example, the “Unbiased AI Principles” require AI systems not to “manipulate responses in favor of ideological dogmas such as Diversity, Equity, Inclusion,” yet the government does not define these terms in any meaningful way.