Agentic AI: Tips for Secure Deployment
July 30, 2025
Agentic AI is on the march, with a recent survey indicating that 29% of organizations are already using agentic AI and that 44% plan to implement it over the next year. This Risk Management Magazine article by three Lowenstein Sandler professionals offers tips for companies on deploying agentic AI tools securely. Here are a few of them:
Task Minimization – Ensure agents are subject to proper IT and security processes for every channel through which they are deployed (SaaS platform, browser or operating system). Grant each agent only the minimum access permissions needed to perform its task, and limit the scope to only what is required. Charge agents with smaller tasks that, when combined, achieve the larger goal.
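The least-privilege approach described above can be illustrated with a minimal sketch. All names here (`ScopedAgent`, the permission strings) are hypothetical, not from any particular agent framework: each agent receives only the narrow permission grant its subtask needs, and any task requiring more is refused.

```python
# Hypothetical sketch: least-privilege task scoping for agents.
from dataclasses import dataclass


@dataclass(frozen=True)
class ScopedAgent:
    name: str
    allowed: frozenset  # the minimum permissions this agent is granted

    def run(self, task: str, required: set) -> str:
        # Refuse any task whose required permissions exceed the grant.
        missing = set(required) - set(self.allowed)
        if missing:
            raise PermissionError(
                f"{self.name} lacks {sorted(missing)} for {task!r}"
            )
        return f"{task}: done"


# Decompose a larger goal into smaller tasks, each with its own narrow grant,
# rather than giving one agent broad access to files and email.
reader = ScopedAgent("report-reader", frozenset({"files:read"}))
sender = ScopedAgent("report-sender", frozenset({"email:send"}))

results = [
    reader.run("summarize quarterly report", {"files:read"}),
    sender.run("email summary to finance", {"email:send"}),
]
```

Because each agent's grant is frozen to its subtask, a compromised or misbehaving "report-reader" cannot send email, limiting the blast radius of any single agent.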
Governance Policies and Procedures – Ensure that the application adheres to cybersecurity frameworks or standards, such as the NIST Cybersecurity Framework (CSF) or ISO 27001. Perform extensive testing in a safe environment before releasing it into production. Assemble cross-functional teams, including IT, management and legal, to develop protocols for safe use.
Task Accountability – Every action an agentic AI performs should be logged, traceable and accompanied by an explanation of why it made certain decisions. Use fraud protection tools to reduce agents' vulnerability to hackers and scammers, and use behavioral testing tools to confirm that the AI agent executed tasks ethically and legally.
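The logging-and-traceability tip above can be sketched as an append-only audit trail in which every action carries a unique identifier, a timestamp and a stated rationale. The structure and field names below are illustrative assumptions, not a prescribed schema.

```python
# Hypothetical sketch: an audit trail where each agent action is logged,
# traceable by ID, and recorded with the rationale behind the decision.
import time
import uuid

audit_log = []  # append-only record of agent actions


def log_action(agent: str, action: str, rationale: str) -> str:
    entry = {
        "id": str(uuid.uuid4()),  # traceable identifier for this action
        "ts": time.time(),        # when the action was taken
        "agent": agent,
        "action": action,
        "rationale": rationale,   # why the agent decided to act
    }
    audit_log.append(entry)
    return entry["id"]


# Example: a (hypothetical) agent flags an invoice and records its reasoning.
action_id = log_action(
    "invoice-agent",
    "flagged an invoice for human review",
    "amount exceeded the configured approval threshold",
)
```

Keeping the rationale alongside the action means a reviewer can later answer not only *what* the agent did but *why*, which is the explainability the article calls for.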
The article also highlights the need to maintain human oversight of agentic AI tools and the importance of contractual risk allocation provisions.