Responsible AI Use: A Checklist

by John Jenkins

January 20, 2026

Here’s something that my colleague Meaghan Nelson recently published on “The Mentor Blog”:

Recently, the ABA published a practical checklist for managing your firm’s use of AI. The article recognizes that folks are using AI for a variety of tasks—from drafting emails to summarizing long documents to assisting with contract review. The key is how to do so while managing risk for the firm:

The difference comes down to tool choice, supervision, and review.

The ABA recommends a 10-step checklist to help:

1. Use firm-approved tools

Avoid public AI models for client work. Use secure, legal-specific tools designed for law firms, such as Clio Work, that keep work inside your firm’s existing systems.

2. Confirm security and confidentiality

Verify that the tool guarantees Zero Data Retention and meets recognized security standards like SOC 2 Type II. Never assume privacy. Confirm it.

3. Check for factual accuracy

Manually verify all facts, citations, and assertions. Treat AI output the same way you would an associate’s draft.

4. Cross-check sources

Rely only on verified legal databases such as Clio Library. Confirm that cited cases, statutes, and regulations actually exist and are current.

5. Analyze reasoning quality

Review the logic, not just the conclusion. Confirm proper use of IRAC/CRAC, correct doctrinal tests, and coherent legal reasoning.

6. Confirm the correct jurisdiction

General AI frequently blends jurisdictions. Ensure the content reflects the correct federal, state, provincial, or local law, including terminology and standards.

7. Look for bias or mischaracterization

Check that cases are accurately described and not selectively framed. Generative AI can reflect bias from training data if left unchecked.

8. Verify formatting and procedural rules

If the content is headed to court, confirm formatting requirements, captions, headings, signatures, and local procedural rules.

9. Ensure ethical compliance

Follow ABA Model Rules, state rules of professional responsibility, or Law Society guidance on AI-assisted work.

10. Require final human sign-off

Every piece of AI-generated legal work must be reviewed and approved by a lawyer before release. No exceptions.

Some of these are no-brainers, but it’s definitely helpful to have them in one spot to create a repeatable process. You could also imagine building on this checklist the way the Honorable Allison H. Goddard has for her chambers—for example, defining how a human should sign off, including what review criteria matter depending on the type of output. I also don’t think the benefit of a checklist like this is limited to law firms. I can see some practical value in following these steps in-house as well.