AI Risk Management: Four Critical Ethical Challenges

by John Jenkins

December 22, 2025

This WTW article identifies four critical ethical challenges that companies should consider when assessing their AI governance practices. This excerpt addresses privacy compliance and the hidden risks of personalization:

If you use AI to personalize services based on customer behavior and financial history, you could be using sensitive personal data, which can introduce significant privacy and security risks.

If you haven’t properly protected this data, it can be misused or stolen, leading to regulatory penalties and loss of trust. Compounding the challenge, some AI systems operate as ‘black boxes’: you can see the inputs and the outputs, but you can’t see how the system arrived at its results, which makes it difficult to explain decisions or demonstrate compliance with regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
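
To make the black-box point concrete, here is a minimal sketch (plain Python with scikit-learn; the feature names and data are hypothetical, not drawn from the WTW article) of why interpretable models are easier to defend to a regulator: a linear model exposes per-feature coefficients that give an auditable account of each decision, whereas an opaque ensemble or deep network offers no such account.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features a personalization model might use.
feature_names = ["avg_monthly_spend", "account_age_years", "late_payments"]

# Synthetic data standing in for real customer records.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient shows how strongly a feature pushes the decision,
# giving a per-feature explanation you can document for compliance.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```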

You need to ensure transparency in how AI models collect, process and store data, and use privacy-preserving techniques, such as differential privacy, together with robust governance frameworks to maintain compliance and protect customer trust.
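
For readers who want a concrete sense of what a privacy-preserving technique looks like in practice, below is a minimal sketch of the Laplace mechanism, a standard building block of differential privacy (the count and privacy parameters are hypothetical): calibrated noise is added to an aggregate statistic so that the released figure reveals little about any individual customer. The epsilon value is the privacy budget; smaller values mean stronger privacy but noisier answers.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Add Laplace noise scaled to sensitivity/epsilon (epsilon-differential privacy)."""
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Example: release a noisy count of customers flagged by a personalization model.
true_count = 1284   # hypothetical aggregate; raw per-customer data is never released
sensitivity = 1     # adding or removing one customer changes a count by at most 1
epsilon = 0.5       # privacy budget: smaller = stronger privacy, noisier result

noisy_count = laplace_mechanism(true_count, sensitivity, epsilon)
print(f"Differentially private count: {noisy_count:.0f}")
```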

Other ethics-related topics addressed by the article include ownership of intellectual property related to AI-generated content, AI bias in recruitment, and AI’s environmental impact.