Agentic AI: “I’m Sorry, Dave, I’m Afraid I Can’t Do That”
March 12, 2025
“Agentic AI” is probably the most buzzworthy development in the commercialization of AI to date. If you search the internet, you’ll find all sorts of efforts to define the term. Most of these definitions lean on words like “autonomy” and point to AI agents’ ability to perform complex tasks without human intervention. As someone who is relatively new to the world of AI, I tend to simplify things in my effort to wrap my head around the new AI terms I encounter. So, when I started reading these definitions, I immediately pictured the last thing that AI evangelists would want me to use as an example of Agentic AI – the HAL 9000 computer from Stanley Kubrick’s classic 1968 film, 2001: A Space Odyssey.
HAL 9000’s chilling “I’m sorry, Dave, I’m afraid I can’t do that” response to its human collaborator’s instructions captures everyone’s deepest fears about the dark side of AI technology. This Woodruff Sawyer blog on the use cases for Agentic AI and the risks associated with the technology suggests questions that management should be asked to assess whether companies are appropriately managing Agentic AI risks. This excerpt addresses what I like to call the “HAL 9000 problem”:
How will we ensure that our AI agents remain aligned with the objectives we set?
Why this question matters: AI agents operate autonomously, but they should stay within clearly defined guardrails to avoid unauthorized actions or unintended behavior. It’s important to confirm that those guardrails are revisited from time to time. While it’s presumably too soon for AI agents to run million-dollar transactions, AI has arguably already made life-and-death decisions—refer to that UnitedHealth case discussed above. Ultimately, companies should establish rules, oversight mechanisms, and human intervention points to keep AI decision-making in check.
Woodruff Sawyer says that if a company lacks clear constraints on its AI agent’s authority, it runs a serious risk of having the agent make unauthorized decisions, such as altering business processes without proper review. Other questions the blog suggests that management should be asked include:
– What specific roles and functions will AI agents perform?
– What are the risks of AI agents acting unpredictably, and how do we mitigate them?
– How do we ensure AI agents are transparent and accountable?
The blog poses these questions as ones that directors should ask, but it seems to me they should be on the table for anyone involved in the risk management process for the development or deployment of Agentic AI systems.