The Challenges of 3rd Party AI Risk Management
August 26, 2025
Over on Radical Compliance, Matt Kelly has posted an interesting discussion of the challenges of managing risks associated with third-party AI tools incorporated into your own systems. Matt points out that one of the biggest challenges is simply determining how many of these systems you're using. To complicate things further, this discovery process is one that will necessarily be done "on the fly." This excerpt explains why:
The task here isn’t just about taking an inventory of AI systems running within your enterprise; it’s knowing when artificial intelligence is added to applications already running within your enterprise, because every SaaS vendor and their uncle now seems to be saying, “We just added AI into our latest upgrade!” Well, you need to be aware of those instances of AI too.
So yes, [Governance, Risk & Compliance] teams need to know where third-party AI systems are living within your enterprise, but that’s more than just taking an inventory of AI systems. It requires ongoing monitoring of the SaaS vendors you use, so you’re aware of when they add AI into their applications. Whether you do that through contract terms requiring disclosure, controls to evaluate all SaaS “upgrades” before allowing them into the live environment, or some other approach, that’s the task.
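The ongoing-monitoring task described above can be pictured as a simple diff of a vendor's declared feature set before and after an upgrade. This is a minimal, hypothetical sketch (the keyword list and function names are illustrative assumptions, not anything from the post), and naive keyword matching like this will produce false positives and misses in practice:

```python
# Hypothetical sketch: flag newly added SaaS features that look AI-related,
# so an evaluation control can run before the upgrade goes live.
# The keyword list is an illustrative assumption, not an authoritative taxonomy.
AI_KEYWORDS = {"artificial intelligence", "copilot", "llm", "machine learning", "genai", "generative"}


def new_ai_features(before: set[str], after: set[str]) -> set[str]:
    """Return features added in an upgrade whose names suggest AI.

    `before` and `after` are the vendor's declared feature lists for the
    current and upgraded versions. Substring keyword matching is crude;
    a real control would also rely on contractual disclosure.
    """
    added = after - before
    return {f for f in added if any(k in f.lower() for k in AI_KEYWORDS)}
```

A GRC team might run such a check against vendor release notes as one input to the "evaluate all upgrades before the live environment" control the excerpt describes; contract terms requiring disclosure remain the more reliable signal.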
The blog also highlights that AI risks enter businesses from two directions. Senior management and the IT team make deliberate decisions to implement AI across the enterprise, and those people understand the need for sound risk assessment and compliance practices. Front-line personnel, however, are ready to connect with any AI vendor that solves a specific business need. Those vendors might offer tools and apps that meet the need, but the risk management and compliance team may have no information about their AI practices and controls.
The blog goes on to offer ideas on how to conduct a risk assessment with respect to these various AI tools once you’ve corralled them. This process begins with basic questions like “is this new AI tool necessary?” It then moves on to focus on ensuring that you’re building appropriate AI-centric questions into the risk assessment process.
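To make that screening step concrete, here is a minimal sketch of an inventory record plus a triage rule. Everything here is an assumption for illustration: the field names, the sample questions, and the triage criteria are hypothetical, not taken from the blog post.

```python
from dataclasses import dataclass, field

# Illustrative screening questions, starting from the post's "is this new
# AI tool necessary?" — the rest are assumed examples, not Matt's list.
SCREENING_QUESTIONS = [
    "Is this new AI tool necessary?",
    "What data does the tool ingest, and where does that data come from?",
    "How does the vendor ensure data quality and restrict data usage?",
    "Can the AI features be disabled or gated before an upgrade goes live?",
]


@dataclass
class AIToolRecord:
    """Hypothetical inventory entry for one third-party AI tool."""
    vendor: str
    tool: str
    handles_personal_data: bool
    vendor_disclosed_ai: bool            # did the vendor disclose the AI feature?
    answers: dict = field(default_factory=dict)  # question -> recorded answer


def needs_deeper_review(record: AIToolRecord) -> bool:
    """Flag a tool for full risk assessment when it touches personal data,
    arrived without vendor disclosure, or any screening question is
    still unanswered. The criteria are an assumed triage rule."""
    unanswered = any(q not in record.answers for q in SCREENING_QUESTIONS)
    return (record.handles_personal_data
            or not record.vendor_disclosed_ai
            or unanswered)
```

The triage rule is deliberately conservative: any gap routes the tool into the fuller AI-centric assessment rather than letting it pass by default.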
Matt cautions that adjustments will be needed when incorporating AI risks into traditional third-party risk management systems. Those systems are built on privacy and security frameworks that emphasize access control and data classification; AI risks, by contrast, turn more on data quality and data usage.