Will Agency Law Apply to Agentic AI?

by Zachary Barlow

June 11, 2025

Agentic AI, or AI that acts semi-autonomously to accomplish tasks, is the next frontier in the AI world. While exciting and potentially groundbreaking, agentic AI is also the most risk-laden AI implementation to date. John previously talked about the possibility of AI agents reporting out rather than up, and gave some tips on keeping AI agents from “going rogue.” But what happens when your AI agent engages in unauthorized tortious conduct? Traditional agency law provides tests to determine when principals are vicariously liable for the actions of their agents. However, those standards may not apply to AI. In a recent memo discussing agentic AI, DLA Piper notes that:

“AI agents will not be bound by traditional principal-agent law: Companies can assert defenses when human agents act outside the scope of their authority. But the law of AI agents is undefined, and companies may find themselves strictly liable for all AI agent conduct, whether or not predicted or intended. Contractual arrangements with AI developers can assign accountability between in-scope and out-of-scope agentic behavior. Recent actions like FTC v. Rite Aid Corporation & Rite Aid Headquarters Corporation show that large companies may not be able to shift blame to vendors.”

When an AI system “goes rogue,” don’t be surprised when you’re left to clean up the mess. Agentic AI is risky for the same reason it is potentially valuable: it can complete tasks with minimal human involvement. While that might mean higher efficiency, it also means that mistakes can spiral out of control before a human can intervene. Good governance is the cornerstone of AI risk mitigation, and that goes double for agentic AI. Processes should be built with system failure in mind and should include human oversight checkpoints that catch AI agent errors before they compound.
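To make that oversight point concrete, here is a minimal sketch of one common pattern: an approval gate that blocks an agent's higher-risk actions until a human signs off. Everything here is illustrative and hypothetical (the names, the two-tier risk classification, the console prompt standing in for a real review workflow); it is not tied to any particular agent framework or to DLA Piper's recommendations.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Risk(Enum):
    LOW = auto()
    HIGH = auto()


@dataclass
class ProposedAction:
    """A hypothetical action an AI agent proposes to take on the company's behalf."""
    description: str   # what the agent wants to do
    risk: Risk         # assigned by a separate review policy, not by the agent itself


def human_approves(action: ProposedAction) -> bool:
    """Route the action to a human reviewer; a console prompt stands in here."""
    answer = input(f"Approve agent action? '{action.description}' [y/N]: ")
    return answer.strip().lower() == "y"


def execute(action: ProposedAction) -> None:
    """Stand-in for actually carrying out the action."""
    print(f"Executing: {action.description}")


def run_with_oversight(action: ProposedAction) -> None:
    """Gate high-risk agent actions behind explicit human sign-off."""
    if action.risk is Risk.HIGH and not human_approves(action):
        print(f"Blocked pending review: {action.description}")
        return
    execute(action)


if __name__ == "__main__":
    run_with_oversight(ProposedAction("Send a $25 refund to a customer", Risk.LOW))
    run_with_oversight(ProposedAction("Wire $50,000 to a new vendor", Risk.HIGH))
```

The design choice that matters is that the risk classification and the approval step live outside the agent: the agent can propose anything, but the process, not the model, decides what runs unattended.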