Risk Management: AI Usage in Employer-Sponsored Health Care Plans

by John Jenkins

September 23, 2025

AI tools are becoming ubiquitous in every aspect of business life, including in employer-sponsored health care plans. This McDermott Will memo reviews how AI tools are being used in connection with those plans and the legal issues and risks associated with that use. This excerpt zeroes in on the concerns that the use of AI tools creates for plan fiduciaries:

For fiduciaries of group health plans, any use of AI technology presents two fundamental problems. First, AI models are by their nature black boxes. Second, there are limits to AI competence and accuracy that cannot be eliminated.

The black-box nature of AI technology poses a significant problem for fiduciaries of group health plans. The opacity of the models makes ERISA-required monitoring and oversight extremely challenging. Robust third-party standards are needed to establish measurement science and interoperable metrics for AI systems. Further, independent certification of vendor AI systems would help fiduciaries meet ERISA standards. Ideally, the US Department of Labor (DOL) would issue guidance as it has done in related contexts, such as with cybersecurity threats involving ERISA-covered pension plans and, later, welfare plans.

Questions about AI competence and accuracy are equally daunting to plan fiduciaries. The current crop of AI models relies heavily on a process of backpropagation to continuously refine reliability. Even the most advanced AI models achieve, at best, an asymptotic approach to reliability. This raises a threshold legal question: whether and to what extent fiduciaries can prudently rely on AI technology. The practical answer is that they likely already do, sometimes unknowingly, which means that some validation of AI models is essential if fiduciaries are to meet ERISA's requirements.

The memo points out that the Department of Labor requires ERISA fiduciaries to exercise diligence and prudence to ensure that plan decisions are made "solely in the interest of participants and beneficiaries," and says that delegating critical claim denials to opaque AI models risks violating this duty. If AI tools are used, they should be limited to a decision-support role, with final authority for claims adjudication resting in human hands.