For algorithm-based systems, as with many other topics, 2016 turned out to be an eventful year. As we close the year and look back, 2016 brought many of the issues we intend to address in the UnBias project to the attention of people and organizations who had perhaps not considered them before.
At the start of the year there were some debates in the margins of popular media about the risks and consequences of hypothetical future artificial superintelligence. For the most part these debates were treated as ‘interesting philosophical exercises’ at best, and more frequently as pure science fiction. Soon, however, while we were waiting to hear the outcome of our UnBias EPSRC grant application, the focus of the global debate started to shift from future AI to algorithms that are already affecting people’s lives today [e.g. algorithms for risk assessment in criminal sentencing]. By the time our funding was approved and the new research associate positions were staffed, the White House had published a report on “Preparing for the Future of Artificial Intelligence”, and Facebook had gone from introducing human editors to balance possible weaknesses in its Trending Topics algorithm to removing those editors because of accusations of human bias. In the same month that UnBias officially launched, Amazon, Facebook, Google, Microsoft and IBM publicly announced the formation of their “Partnership on AI to Benefit People and Society”. Since then, UnBias has participated in a meeting on “Algorithms transparency and accountability in the digital economy” at the European Parliament, where it was confirmed that the European Commission has been asked to launch a two-year investigation into the impact of algorithms on EU citizens (starting in 2017). Finally, the tumultuous US elections turned concerns about the lack of editorial responsibility in algorithmic news recommendation and filtering into a central issue for politicians across the globe. Clearly we have a lot of work ahead of us in 2017.
(The examples and links in this summary are merely the ‘tip of the iceberg’ of algorithm-related news in 2016.)
So far, we are on track. Preliminary Youth Juries and user observation pilot studies in November 2016 showed promising results, and the stakeholder engagement panel has more than 30 confirmed partners spanning industry, academia, NGOs and regulatory organizations.
In parallel with, but very much complementary to, our work, the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems released the first version of its framework, “Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems”, with a call for public discussion and submission of comments. To quote from the introduction of the document:
“We need to make sure that these technologies are aligned to humans in terms of our moral values and ethical principles. AI/AS [Artificial Intelligence/ Autonomous Systems] have to behave in a way that is beneficial to people beyond reaching functional goals and addressing technical problems. This will allow for an elevated level of trust between humans and our technology that is needed for a fruitful pervasive use of AI/AS in our daily lives.
By aligning the creation of AI/AS with the values of its users and society we can prioritize the increase of human wellbeing as our metric for progress in the algorithmic age.”
As we move into 2017 and UnBias hopefully starts to bear fruit, we will look to contribute to this and similar initiatives wherever possible, as well as to public and professional thought leadership around the impact of algorithms on society.