New AI Framework Tackles Algorithmic Bias

by John Jenkins

February 12, 2025

One of the major risks associated with AI tools is unintended bias against individuals based on race, gender, or other protected characteristics. This can result from, among other things, flaws in training datasets, faulty decision criteria, or model drift. To help organizations mitigate the risk of unintended bias, the Institute of Electrical and Electronics Engineers recently issued IEEE 7003-2024, “Standard for Algorithmic Bias Considerations.” This DLA Piper memo on the new standard summarizes the key actions that organizations seeking to conform to it should take:

1. Establishing a bias profile: The standard emphasizes the creation of a “bias profile” to document all bias-related considerations throughout the system’s lifecycle. This information repository tracks decisions related to bias identification, risk assessments, and mitigation strategies. (A simple structured-record sketch appears after this list.)

2. Identifying stakeholders and assessing risks: The standard encourages companies to identify stakeholders – both those who influence the system and those impacted by it – early in the development process. Comprehensive risk assessments should account for the potential adverse impacts of bias on different stakeholder groups and should be updated as the system evolves.

3. Ensuring data representation: Poor data quality is a leading cause of algorithmic bias. The standard calls for evaluating datasets to confirm they sufficiently represent all stakeholders, particularly marginalized groups. Organizations are encouraged to document decisions related to data inclusion, exclusion, and governance. (A representation-check sketch appears after this list.)

4. Monitoring for drift: Algorithmic systems are susceptible to “data drift” (ie, changes in the data environment) and “concept drift” (ie, shifts in the relationship between input and output). Continuous monitoring and retraining are important to ensure fairness over time. (A drift-monitoring sketch appears after this list.)

5. Promoting accountability and transparency: To foster trust, organizations are encouraged to communicate the intended purpose, limitations, and acceptable use of their AI systems using documentation that is clear, accessible, and tailored to stakeholders – including end users and regulators.
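
For the bias profile in item 1, one option is to keep the repository as structured, machine-readable records. The sketch below is purely illustrative – the field names are assumptions, not terminology defined by IEEE 7003-2024 – and shows one way to log bias-related decisions over a system’s lifecycle.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class BiasProfileEntry:
    """One logged decision in a bias profile. Field names are illustrative
    assumptions, not terminology defined by IEEE 7003-2024."""
    lifecycle_stage: str              # e.g., "data collection", "training", "deployment"
    decision: str                     # what was decided
    rationale: str                    # why it was decided
    affected_stakeholders: list[str]  # groups the decision may affect
    risk_assessment: str              # summary of the assessed bias risk
    mitigation: str                   # mitigation applied, if any
    recorded_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Example: append one entry to a JSON-lines bias profile.
entry = BiasProfileEntry(
    lifecycle_stage="data collection",
    decision="Excluded records with missing demographic fields",
    rationale="Missing values were concentrated in one region; imputation risked distorting group shares",
    affected_stakeholders=["applicants in the affected region"],
    risk_assessment="Moderate: exclusion may under-represent the affected region",
    mitigation="Scheduled targeted data collection for the under-represented region",
)
with open("bias_profile.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(asdict(entry)) + "\n")
```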
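
For the data-representation step in item 3, a team might compare each group’s share of the training data against a reference population and flag shortfalls. The sketch below is a minimal, hypothetical example – the column name, group labels, reference shares, and tolerance are all assumptions, not values taken from the standard or the memo.

```python
import pandas as pd

# Hypothetical reference shares for a protected attribute (assumed values,
# not drawn from IEEE 7003-2024 or the DLA Piper memo).
REFERENCE_SHARES = {"group_a": 0.45, "group_b": 0.40, "group_c": 0.15}

def check_representation(df: pd.DataFrame, column: str, tolerance: float = 0.05) -> dict:
    """Return {group: (dataset_share, reference_share)} for every group whose
    share of the dataset falls short of its reference share by more than `tolerance`."""
    observed = df[column].value_counts(normalize=True)
    flagged = {}
    for group, ref_share in REFERENCE_SHARES.items():
        data_share = float(observed.get(group, 0.0))
        if ref_share - data_share > tolerance:
            flagged[group] = (data_share, ref_share)
    return flagged

# Example usage with a toy dataset: group_c is under-represented and should be
# documented (and, ideally, remediated) in line with the standard's data-governance guidance.
training_data = pd.DataFrame({"demographic": ["group_a"] * 70 + ["group_b"] * 28 + ["group_c"] * 2})
print(check_representation(training_data, "demographic"))  # {'group_c': (0.02, 0.15)}
```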
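
For the drift monitoring described in item 4, one common approach (an assumption here, not a requirement of IEEE 7003-2024) is to compute a Population Stability Index (PSI) between a baseline window and recent production data, and to alert when it crosses a rule-of-thumb threshold such as 0.2.

```python
import numpy as np

def population_stability_index(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Compute PSI between two samples of a numeric feature.
    Bin edges are fixed from the baseline; a small epsilon avoids log(0)."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(current, bins=edges)
    base_pct = base_counts / base_counts.sum() + 1e-6
    curr_pct = curr_counts / curr_counts.sum() + 1e-6
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Example: the feature's distribution has shifted between training and production.
rng = np.random.default_rng(0)
baseline_scores = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training-time data
current_scores = rng.normal(loc=0.6, scale=1.2, size=5_000)   # recent production data

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.2:  # widely used alert threshold; an assumption, not part of the standard
    print(f"PSI = {psi:.2f}: significant data drift – investigate and consider retraining.")
```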