IEEE Standard for Algorithm Bias Considerations (P7003)

As part of our stakeholder engagement work towards the development of algorithm design and regulation recommendations UnBias is engaging with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems to develop an IEEE Standard for Algorithm Bias Considerations, designated P7003. The P7003 working group is chaired by Ansgar Koene.


This standard will provide a framework to help developers of algorithmic systems, and those responsible for their deployment, to identify and mitigate non-operationally justified biases in the behaviour of the algorithmic system. “Algorithmic systems” in this context refers to the combination of algorithms and data that together determine the outcomes that affect end users.

P7003 will describe specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithmic systems, where “negative bias” refers to the use of overly subjective or uninformed data sets, or of information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender and sexuality); or to instances of bias against groups not explicitly protected by legislation, but whose treatment otherwise diminishes stakeholder or user well-being and for which there are good reasons to consider it inappropriate.

Possible elements will include (but are not limited to): benchmarking procedures and criteria for the selection of validation data sets for bias quality control; guidelines on establishing and communicating the application boundaries for which the algorithm has been designed and validated, to guard against unintended consequences arising from out-of-bound application of algorithms; and suggestions for user expectation management to mitigate bias due to incorrect interpretation of system outputs by users (e.g. correlation vs. causation).
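To make the first of these elements concrete: a bias quality-control benchmark of the kind envisaged might compare a system's outcomes across groups on a held-out validation data set. The following minimal Python sketch is purely illustrative and is not part of P7003 itself; the function names are hypothetical, and the selection-rate ratio it computes is one simple metric (sometimes associated with the "four-fifths rule") among many possible choices.

```python
from collections import defaultdict

def selection_rates(outcomes, groups):
    """Per-group positive-outcome rates on a validation data set.

    outcomes: list of 0/1 decisions produced by the algorithmic system
    groups:   parallel list of group labels (e.g. a protected characteristic)
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for outcome, group in zip(outcomes, groups):
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of the lowest to the highest group selection rate; values
    well below 1.0 flag a potential unjustified differential impact."""
    return min(rates.values()) / max(rates.values())

# Toy validation set: group "a" is selected at 0.75, group "b" at 0.25.
rates = selection_rates([1, 0, 1, 1, 0, 0, 1, 0],
                        ["a", "a", "a", "a", "b", "b", "b", "b"])
ratio = disparate_impact_ratio(rates)  # 0.25 / 0.75 ≈ 0.33
```

Whether such a disparity is "negative bias" in the P7003 sense would still depend on whether it is operationally justified, which is a judgement the standard's methodologies are intended to structure rather than a threshold a metric alone can settle.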


The purpose of this standard is to provide individuals or organizations creating algorithmic systems with certification-oriented methodologies that provide clearly articulated accountability and clarity around how algorithms target, assess and influence the users and stakeholders of said algorithmic system. Certification under this standard will allow creators to communicate to users and regulatory authorities that up-to-date best practices were used in the design, testing and evaluation of the algorithm to avoid unjustified differential impact on users.

Field of Application

P7003 should be used when new algorithmic systems are designed, existing systems are updated or systems are deployed in new contexts.


P7003 does not deny the role of operationally justified bias in algorithmic processing as a fundamental element in information classification and decision making. P7003 seeks to help with distinguishing and communicating the difference between justified and unjustified bias, and thereby clarify the limits for appropriate use of such algorithmic systems.


A call for participation in the P7003 working group has been issued with a general invitation to join the first working group meeting on May 5th 2017. Upcoming events are announced in the side panel on the project website.

As part of the wider collaboration with the IEEE initiatives on ethical technological development, there will also be a short presentation and Q&A about P7003 during the webinar “The Human Standard: Why Ethical Considerations Should Drive Technological Development” on April 18th.

Publications related to P7003

Democratisation of Usable Machine Learning in Computer Vision, Raymond Bond, Ansgar Koene, Alan Dix, Jennifer Boger, Maurice Mulvenna, Mykola Galushka, Bethany Waterhouse-Bradley, Fiona Browne, Hui Wang and Alexander Wong, presented at FATE/CV Workshop at CVPR 2019, 17th June 2019.

Regulatory frameworks relating to data privacy and algorithmic decision making in the context of algorithmic bias, Adam Leon Smith, Abhik Chaudhuri, Allison Gardner, Linda Gu, Malek Ben Salem and Maroussia Levesque, presented at AI Ethics Workshop at NIPS, 7th December 2018.

Ansgar Koene, Adam Leon Smith, Takashi Egawa, Sukanya Mandal, and Yohko Hatada. 2018. IEEE P70xx, Establishing Standards for Ethical Technology. In Proceedings of KDD, ExCeL London, UK, August 2018 (KDD ’18), 2 pages.

Ansgar Koene, Liz Dowthwaite, and Suchana Seth. “IEEE P7003TM Standard for Algorithmic Bias Considerations.” In 2018 IEEE/ACM International Workshop on Software Fairness (FairWare), pp. 38-41. IEEE, 2018.

On December 9th 2017 an article was published in TechEmergence, with the title “The Ethics of Artificial Intelligence for Business Leaders – Should Anyone Care?“, summarizing the eleven P70xx series standards currently in development.

This was preceded, on 12 September 2017, by a ‘blog’ article “Keeping Bias From Creeping Into Code“, based on an interview about P7003 with Ansgar Koene.

A brief paper outlining the aims of P7003 and its relationship with the other P700x-series standards working groups was published in IEEE Technology and Society Magazine (vol. 36, no. 2, June 2017, pp. 31–32).

On May 5th 2017 The Institute published a ‘resources’ article “Seven IEEE Standards Projects Provide Ethical Guidance for New Technologies“, summarizing the P7000 series standards activities that were active at that time, including P7003.

Related contributions to activities of the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems

In response to the public Request for Input, issued in December 2016, for the Ethically Aligned Design document that is being developed by the Global Initiative, Ansgar Koene submitted a number of suggestions.

On September 14th 2017 Ansgar Koene participated in the USACM panel on Algorithmic Accountability and Transparency on behalf of the IEEE Global Initiative and the P7000 series Standards activities.

On December 7th 2017 the IEEE Internet Initiative held a webinar on “Algorithmic Decision Making: Impacts and Implications” that was presented by the authors of the forthcoming Algorithmic Decision Making white paper, Pamela Pavliscak and Jared Bielby.

On May 30th 2019, IEEE TechEthics held a webinar on “Bias in the Age of the Algorithm”. The discussion centred on the issue that “Algorithms are in use all around us, every day. Online shopping. Job applications. Real estate transactions. Search results. Public safety. Social media. Digital photo albums. You name it, there’s probably an algorithm involved with it somehow. These decision-making processes are driven by large amounts of data…data that has been shown to possess inherent biases (racial, gender-based, economic, and more). How do those algorithmic biases impact us? How can they be addressed? And what do they say about us as a society?”. The panel members were:
Erin LeDell
Cathy O’Neil of ORCAA
Mathana Stender, Tech Ethicist and member of the P7003 working group
Mark A. Vasquez, IEEE (moderator)


