AI Implementation: Don’t Count the Humans Out Just Yet!
October 9, 2025
While productivity enhancements are part of the promise of generative AI tools, many people fear that those enhancements will be accompanied by significant job losses. However, a recent article posted on the website of the International Association of Privacy Professionals cautions that when companies implement AI-based systems without ensuring adequate human backup, they run significant business continuity risks:
As repetitive and analytical tasks are handed over to AI, many organizations may consider reducing their staff, gradually eroding their pool of human expertise. This phenomenon, described in the literature as skill decay, forms part of the so-called automation paradox. The more reliable and pervasive an automated system becomes, the less frequently humans are required to intervene, but when they do, it is usually under rare and critical circumstances for which their skills have atrophied.
In effect, reliance on AI without sufficient human practice and oversight can turn such systems into potential single points of failure. Should an AI application supporting a critical business process crash or behave unpredictably, the result may be an immediate operational shortfall with too few skilled staff to bridge the gap. Heavy reliance on AI without well-planned human or system backups can, therefore, leave organizations vulnerable when disruptions occur.
The article notes that although businesses in unregulated industries may have no legal obligation to identify critical business processes, they will still need to determine which internal processes and procedures require sufficient human resources to back up AI-based solutions. Businesses in regulated industries, by contrast, are already required to maintain robust business continuity programs, but the article cautions that those programs will need to be reassessed as AI tools are integrated into core workflows.