Federal AI Framework Introduced to Senate

by Zachary Barlow

March 23, 2026

Last week, Senator Marsha Blackburn (R-TN) introduced “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act.” I’ll let you put the acronym for that together on your own. In the interest of word count, we’ll call it “the Act.” The Act would constitute a major shift in federal regulatory policy for AI. Despite invoking the President’s name, the bill actually deviates substantially from the intentions the administration has signaled through executive orders. We were expecting any federal AI legislation to be very light on rules. However, the Act introduces several compliance obligations for AI developers and website owners. Chief among them is the proposed AI systems liability framework. A recent Fox Rothschild memo provides details:

  • “Creates a federal products liability framework for AI systems with a private right of action. Under the bill, developers can be held liable for harm caused by defective design, failure to warn, express warranty, or an unreasonably dangerous product. The bill also bars unconscionable liability limitations in AI product contracts.
  • Requires every provider of a high-risk AI system to conduct an annual independent third-party audit to detect viewpoint discrimination or discrimination based on political affiliation. In addition, all covered entities must provide annual ethics training to all personnel using an FTC-established curriculum.
  • Imposes a (vague) duty on chatbot developers to exercise reasonable care in the design, development, and operation of chatbots to prevent and mitigate reasonably foreseeable harms to users. That vagueness is mitigated by an instruction to the FTC to promulgate rules establishing minimum safeguards for compliance, with enforcement authority for the FTC and state attorneys general.
  • Risk-Based Framework for AI Systems: Requires systems trained using more than 10²⁶ integer or floating-point operations (a threshold that captures only the most powerful frontier models) to participate in the newly established Advanced Artificial Intelligence Evaluation Program within the Department of Energy with additional mandatory disclosure obligations.”

Surprisingly, the Act does not attempt to preempt state laws and common-law rulings on AI. Until now, the administration has focused on preemption as a means of culling “onerous AI laws” at the state level. In another unexpected move, the Act would statutorily exclude unauthorized AI training on copyrighted materials from being considered “fair use.” The Act also comes with the politically thorny repeal of Section 230 of the Communications Decency Act, which has long shielded websites from liability for user-hosted content. Additionally, the Kids Online Safety Act (KOSA) is rolled into the legislation. The bill’s political future is uncertain. We’ll be monitoring it as it moves through Congress; amendments are likely, and the final bill may look very different from the proposal.