IEEE Standard for Algorithm Bias Considerations (P7003)

As part of our stakeholder engagement work towards the development of algorithm design and regulation recommendations, UnBias is engaging with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems to develop an IEEE Standard for Algorithm Bias Considerations, designated P7003. The P7003 working group is chaired by Ansgar Koene.

Scope

This standard will provide a framework to help developers of algorithmic systems, and those responsible for their deployment, to identify and mitigate non-operationally justified biases in the behaviour of the algorithmic system. "Algorithmic systems" in this context refers to the combination of algorithms and data that together determine the outcomes which affect end users.

P7003 will describe specific methodologies to help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithmic systems. Here "negative bias" refers to the use of overly subjective or uninformed data sets, or of information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender or sexuality); it also covers bias against groups not necessarily protected explicitly by legislation, but whose differential treatment diminishes stakeholder or user well-being and for which there are good reasons to be considered inappropriate.

Possible elements will include (but are not limited to): benchmarking procedures and criteria for the selection of validation data sets for bias quality control; guidelines on establishing and communicating the application boundaries for which the algorithm has been designed and validated, to guard against unintended consequences arising from out-of-bound application of algorithms; and suggestions for user expectation management to mitigate bias due to incorrect interpretation of system outputs by users (e.g. correlation vs. causation).
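As a purely illustrative sketch (not part of P7003 itself), a bias quality-control benchmark of the kind such procedures might formalise could compare decision rates across groups of a protected characteristic on a validation data set. The function names, the example data and the "four-fifths" threshold below are all assumptions chosen for illustration, not anything drawn from the standard:

```python
# Hypothetical bias quality-control check: compute the rate of
# favourable decisions per group of a protected characteristic,
# then flag large disparities using the common "four-fifths"
# disparate-impact rule of thumb. This is an illustrative sketch,
# not a method prescribed by P7003.

from collections import defaultdict

def selection_rates(decisions, groups):
    """Fraction of favourable (1) decisions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Lowest group selection rate divided by the highest."""
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Toy validation set: 1 = favourable outcome; groups "A" and "B".
decisions = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact_ratio(decisions, groups)
print(f"disparate impact ratio: {ratio:.2f}")  # → 0.67
if ratio < 0.8:  # four-fifths rule of thumb
    print("warning: possible unjustified differential impact")
```

Whether an observed disparity is operationally justified or not is exactly the kind of judgement the standard aims to help practitioners distinguish and document; a numeric check like this can only surface candidates for that scrutiny.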

Purpose

The purpose of this standard is to provide individuals or organizations creating algorithmic systems with certification-oriented methodologies that establish clearly articulated accountability and clarity around how algorithms target, assess and influence the users and stakeholders of those systems. Certification under this standard will allow creators to communicate to users, and to regulatory authorities, that up-to-date best practices were used in the design, testing and evaluation of the algorithm to avoid unjustified differential impact on users.

Field of Application

P7003 should be used when new algorithmic systems are designed, when existing systems are updated, or when systems are deployed in new contexts.

Limitations

P7003 does not deny the role of operationally justified bias in algorithmic processing as a fundamental element of information classification and decision making. P7003 seeks to help with distinguishing and communicating the difference between justified and unjustified bias, and thereby to clarify the limits of appropriate use of such algorithmic systems.

Activities

A call for participation in the P7003 working group has been issued with a general invitation to join the first working group meeting on May 5th 2017. Upcoming events are announced in the side panel on the project website.

As part of the wider collaboration with the IEEE initiatives on ethical technological development, there will also be a short presentation and Q&A about P7003 during the webinar “The Human Standard: Why Ethical Considerations Should Drive Technological Development” on April 18th.

A brief paper outlining the aims of P7003 and its relationship with the other P700x-series standards working groups was published in IEEE Technology and Society Magazine (vol. 36, no. 2, June 2017, pp. 31–32).

Related contributions to activities of the IEEE Global Initiative on Ethical Design in AI and AS

In response to the public Request for Input, issued in December 2016, for the Ethically Aligned Design document that is being developed by the Global Initiative, Ansgar submitted a number of suggestions.
