Knowing When Not to Use AI

by John Jenkins

February 26, 2025

In an environment where everyone seems to be dreaming up new use cases for AI tools, those focused on managing the risks of AI may find it helpful to contemplate situations where using AI tools doesn’t make much sense, in order to identify potential trouble spots before overenthusiastic proponents of “AI for everything” find themselves and their companies in hot water. This recent post by Wharton’s Ethan Mollick on his “One Useful Thing” blog lists five areas where using AI can be counterproductive:

1. When you need to learn and synthesize new ideas or information. Asking for a summary is not the same as reading for yourself. Asking AI to solve a problem for you is not an effective way to learn, even if it feels like it should be. To learn something new, you are going to have to do the reading and thinking yourself, though you may still find an AI helpful for parts of the learning process.

2. When very high accuracy is required. The problem with AI errors, the infamous hallucinations, is that, because of how LLMs work, the errors tend to be very plausible. Hallucinations are therefore very hard to spot, and research suggests that people don’t even try, “falling asleep at the wheel” and not paying attention. Hallucinations can be reduced, but not eliminated. (However, many tasks in the real world are tolerant of error – humans make mistakes, too – and it may be that AI is less error-prone than humans in certain cases.)

3. When you do not understand the failure modes of AI. AI doesn’t fail exactly like a human. You know it can hallucinate, but that is only one form of error: AIs often try to persuade you that they are right, or they might become sycophantic and agree with your incorrect answer. You need to use AI enough to understand these risks.

4. When the effort is the point. In many areas, people need to struggle with a topic to succeed – writers rewrite the same page, academics revisit a theory many times. By shortcutting that struggle, no matter how frustrating, you may lose the ability to reach the vital “aha” moment.

5. When AI is bad. This may seem obvious, but AI is bad at things you wouldn’t expect (counting the number of r’s in the word “strawberry”) and good at things you wouldn’t expect (writing a Shakespearean sonnet about how hard it is to count the number of r’s in the word strawberry where the first letter of every line spells out two fruits). Unfortunately, there is no general manual to tell you the shape of the Jagged Frontier of AI abilities, which are constantly evolving. Trial and error, and sharing information with peers, are vital to figuring this out.