AI Risk Management: Selecting GenAI Providers, Models & Use Cases
June 4, 2025
A recent Katten blog discusses what companies should keep in mind when selecting GenAI model providers, models and use cases. The blog says that not all providers are created equal, and that due diligence is necessary before selecting the appropriate provider for a particular business’s needs:
Before selecting a provider and model, it is important to learn where the provider is located; where data is transferred and stored; where and how the training data was sourced; whether the provider conforms to the NIST AI Risk Management Framework, ISO/IEC 42001:2023, and other voluntary standards; whether impact or risk assessments have been conducted under the EU AI Act, the Colorado AI Act, and other laws; what guardrails and other safety features are built into the model; and how the model performs relative to planned use cases.
This information may be learned from the “model card” and other documentation for each model, from conversations with the provider, and from other research. And, of course, the contractual terms governing the provider relationship and model usage are critical. Key issues include IP ownership, confidentiality and data protection, cybersecurity, liability, representations and warranties, and indemnification.
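For teams that want to operationalize that checklist, one option is to capture each provider review in a structured record that can be compared across candidates. The sketch below is a minimal illustration in Python; the class, field names, and example values are assumptions for demonstration, not a schema taken from the Katten blog or from any standard.

```python
from dataclasses import dataclass, field


@dataclass
class ProviderDiligenceRecord:
    """Illustrative intake record mirroring the due-diligence checklist above.

    Every field name here is a hypothetical label, not a term drawn from the
    blog, the NIST AI RMF, or ISO/IEC 42001:2023.
    """
    provider_name: str
    provider_location: str                   # where the provider is located
    data_storage_locations: list[str]        # where data is transferred and stored
    training_data_provenance: str            # where and how training data was sourced
    voluntary_standards: list[str] = field(default_factory=list)    # e.g., NIST AI RMF, ISO/IEC 42001:2023
    legal_assessments: list[str] = field(default_factory=list)      # e.g., EU AI Act, Colorado AI Act reviews
    guardrails: list[str] = field(default_factory=list)             # safety features built into the model
    performance_notes: str = ""              # metrics relative to planned use cases
    contract_terms_reviewed: dict[str, bool] = field(default_factory=dict)  # IP, confidentiality, indemnification, ...


# Example with placeholder values only.
record = ProviderDiligenceRecord(
    provider_name="ExampleAI",
    provider_location="United States",
    data_storage_locations=["US", "EU"],
    training_data_provenance="Described in the provider's model card",
    voluntary_standards=["NIST AI RMF", "ISO/IEC 42001:2023"],
    legal_assessments=["EU AI Act risk assessment", "Colorado AI Act impact assessment"],
    guardrails=["content filtering", "prompt-injection mitigations"],
    performance_notes="Benchmarked against the planned summarization workload",
    contract_terms_reviewed={"IP ownership": True, "indemnification": False},
)
```

A record like this also gives the contract-review items (IP ownership, indemnification, and so on) a place to be tracked alongside the technical diligence.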
Once a provider is selected, the appropriateness of a particular model for each planned use must be scrutinized. The blog notes that the same GenAI model may be relatively low risk for one purpose, such as summarizing documents, but may carry greater risk when used for another, such as resume screening. Companies need to calibrate their risk tolerance across AI use cases, and the blog suggests that a cross-functional AI advisory committee assist in this process. After that calibration is complete, appropriate AI usage policies and employee training programs need to be implemented.
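To make the summarization-versus-resume-screening contrast concrete, here is a minimal sketch of how a use-case risk register might be kept, assuming the company tracks each reviewed use case with a tier and required controls. The tiers, use-case names, and controls are invented for illustration and are not categories from the blog or from any statute.

```python
from enum import Enum


class RiskTier(Enum):
    LOW = "low"            # e.g., internal document summarization
    MODERATE = "moderate"  # e.g., customer-facing drafting with human review
    HIGH = "high"          # e.g., resume screening or other consequential decisions


# Hypothetical register an AI advisory committee might maintain; the entries
# below are illustrative placeholders.
USE_CASE_REGISTER = {
    "document_summarization": {"tier": RiskTier.LOW, "controls": ["usage policy acknowledgment"]},
    "marketing_copy_drafting": {"tier": RiskTier.MODERATE, "controls": ["human review before publication"]},
    "resume_screening": {"tier": RiskTier.HIGH, "controls": ["impact assessment", "human decision-maker", "bias testing"]},
}


def required_controls(use_case: str) -> list[str]:
    """Return the controls required before a use case goes live."""
    entry = USE_CASE_REGISTER.get(use_case)
    if entry is None:
        raise ValueError(f"'{use_case}' has not been reviewed; route it to the AI advisory committee.")
    return entry["controls"]


print(required_controls("resume_screening"))
# ['impact assessment', 'human decision-maker', 'bias testing']
```

A register of this kind pairs naturally with the usage policies and training programs the blog recommends: the policy can simply require that any unreviewed use case be routed to the committee before deployment.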