On September 14th USACM (the ACM US Public Policy Council) organized a panel on Algorithmic Transparency and Accountability in Washington DC to discuss the importance of the Statement on Algorithmic Transparency and Accountability and opportunities for cooperation between academia, government and industry around these principles. Ansgar took part in this panel, representing the IEEE Global Initiative on Ethical Considerations for Artificial Intelligence and Autonomous Systems, its P7000 series of Standards activities, and UnBias.
As part of our work to contribute to the development of the IEEE P7003 Standard for Algorithm Bias Considerations we are reaching out to the community of stakeholders to ask for use cases highlighting real-world instances of unjustified and/or inappropriate bias in algorithmic decisions.
The goal of this Standards project is to describe specific methodologies that can help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms. “Negative bias” refers to the use of overly subjective or uninformed data sets, or of information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender or sexuality); it also covers bias against groups not necessarily protected explicitly by legislation, but whose treatment diminishes stakeholder or user wellbeing and which there is good reason to consider inappropriate.
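As a purely illustrative sketch (not part of the P7003 text), one widely used way to surface this kind of bias in a decision system is to compare favourable-outcome rates across groups. The Python example below computes per-group selection rates and the disparate impact ratio, flagging values below 0.8 in line with the US EEOC “four-fifths rule”; the function names and example data are hypothetical.

```python
from collections import defaultdict

def selection_rates(decisions, groups):
    """Compute the favourable-decision rate for each group.

    decisions: iterable of 0/1 outcomes (1 = favourable decision)
    groups:    iterable of group labels (e.g. a protected characteristic)
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for d, g in zip(decisions, groups):
        totals[g] += 1
        positives[g] += d
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions, groups):
    """Ratio of the lowest to the highest group selection rate.

    Values below 0.8 are often treated as a sign of possible adverse
    impact under the US EEOC "four-fifths rule".
    """
    rates = selection_rates(decisions, groups)
    return min(rates.values()) / max(rates.values())

# Hypothetical example: favourable decisions for two groups
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]
print(selection_rates(decisions, groups))         # {'A': 0.8, 'B': 0.4}
print(disparate_impact_ratio(decisions, groups))  # 0.5 -> would be flagged
```

A check like this only detects unequal outcomes; whether a disparity constitutes unjustified or inappropriate bias in a given context is exactly the kind of question the use cases solicited here are meant to inform.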
Who should participate:
Programmers, manufacturers, researchers or other stakeholders involved in creating an algorithm, along with any stakeholders defined as end users of the algorithm, and any non-users affected by its use, including but not limited to customers, citizens or website visitors.
How to Participate:
If you wish to participate in the IEEE P7003™ Working Group, please contact the Working Group Chair, Ansgar Koene.
Meeting Information:
The first IEEE P7003™ Working Group meeting will be held online (via WebEx) on Friday, 5 May, from 9:00 AM to 11:00 AM (EST).