AI Risk Management: A Standards Profile for Agentic AI

by John Jenkins

March 18, 2026

Berkeley’s Center for Long-Term Cybersecurity recently published an “Agentic AI Risk Management Standards Profile.” This excerpt from the Profile’s executive summary describes what it is intended to accomplish:

This paper introduces the Agentic AI Risk-Management Standards Profile (“Agentic AI Profile”), which aims to provide a targeted set of practices and controls for identifying, analyzing, and mitigating risks specific to agentic AI. The Agentic AI Profile is designed to complement the NIST AI Risk Management Framework (AI RMF) (NIST, 2023a) and functions as a specialized extension of the UC Berkeley General-Purpose AI Risk-Management Standards Profile (“GPAI Profile”).

While the GPAI Profile focuses on the risks inherent to large-scale models, the Agentic AI Profile addresses the risks that emerge when AI-based systems are granted the agency to act on behalf of users. It also draws on a growing body of technical, policy, and security research on AI agents, autonomy, and AI control.

The Agentic AI Profile is primarily for use by developers and deployers of agentic AI systems, including both single-agent and multi-agent systems built on general-purpose and domain-specific models. Policymakers, evaluators, and regulators can also use the Agentic AI Profile to assess whether agentic AI systems have been designed, evaluated, and deployed in line with leading risk-management practices.

The guidance provided by the Agentic AI Profile is intended to identify best practices for governing the risks specific to agentic AI. It recognizes that because agentic AI systems can make decisions and act independently of human guidance, they require governance practices specifically tailored to manage their unique capabilities.