Generative AI: Where to Look for Risks
by
April 14, 2026
MIT’s Sloan School of Management recently issued a report on “Mapping the Generative AI Risk Space” that focuses on where to look for risks associated with generative AI. To illustrate where risks might be located, the report uses the hypothetical of a hiring manager using AI tools to draft a new job description. A task like this involves multiple AI components, each of which raises a distinct set of risk management issues. This excerpt from an MIT publication summarizing the report explains:
Training data: Foundation models are trained on massive datasets culled from the internet. While training data includes millions of job descriptions and resumes, it could also incorporate outdated HR practices, biased language, or inaccurate information. That means there’s a chance that the output won’t map to the hiring manager’s industry or region or that it reflects outdated norms rather than current industry best practices.
Foundation models: The large language models that form the backbone of generative AI don’t generate the same responses, even when given the exact same inputs. These models are also capable of generating hallucinations, or plausible-sounding content that is factually incorrect. Lack of transparency into model behavior can cause additional problems for a hiring manager if they can’t discern why a particular output was generated, let alone easily diagnose and correct errors.
User prompts: An LLM’s response is only as good as the prompt. Without clear directions — in this case, examples of what a good job description looks like — the output likely won’t meet expectations. The hiring manager could also introduce risks if they unknowingly include confidential data, proprietary strategies, or personally identifiable information in their prompts.
System prompts: Enterprise-grade generative AI tools are architected with a hidden system prompt that enforces organizational context and safety guardrails and sets tone. A poorly engineered system prompt can create a single point of failure that can lead to errors and security vulnerabilities. Conversely, a system prompt that is too rigid can result in a boilerplate job description that is a turn-off to top candidates.
To manage these risks, the authors recommend that organizations inventory every generative AI tool in use. For each such tool, documentation should be prepared identifying the relevant foundation model, the process for how system prompts are designed and maintained, the data assets that are connected, and where human review is required. Accountability and permission structures should then be established.
Different approaches should be implemented to manage embedded risks (i.e., those associated with the technology itself) and enacted risks (i.e., those associated with how the organization uses the technology). However, in each case, clear ownership of ongoing risk assessment needs to be assigned and audit trails established that log the prompts, outputs, and human interventions at each step where generative AI influences decisions.