On October 25th we presented our Science and Technology Options Assessment (STOA) report on “A governance framework for algorithmic accountability and transparency” to the Members of the European Parliament and the European Parliamentary Research Service’s “Panel for the Future of Science and Technology”.
The session featured a presentation of our report on “A governance framework for algorithmic accountability and transparency” as well as a sister report on “Understanding algorithmic decision-making: opportunities and challenges” that was prepared by Dr. Claude Castelluccia and Dr. Daniel Le Métayer from the Institut national de recherche en informatique et en automatique (Inria). Claude and Daniel’s presentation on the technical background was given first, followed by our presentation on governance (starting at time point 10:11:20, see slides below).
The report was commissioned by the European Parliamentary Research Service at the start of 2018 through a direct request to us. The final draft of the report was completed as a collaboration involving the following co-authors (listed alphabetically by affiliation):
- Rashida Richardson, AI Now Institute
- Dillon Reisman, AI Now Institute
- Yohko Hatada, EMLS RI
- Helena Webb, University of Oxford
- Menisha Patel, University of Oxford
- Jacob LaViolette, University of Oxford
- Caio Machado, University of Oxford
- Chris Clifton, Purdue University
- Ansgar Koene (project lead), University of Nottingham
Abstract
Algorithmic systems are increasingly being used as part of decision-making processes in both the public and private sectors, with potentially significant consequences for individuals, organisations and societies as a whole. Algorithmic systems in this context refer to the combination of algorithms, data and the interface process that together determine the outcomes that affect end users. Many types of decisions can be made faster and more efficiently using algorithms. A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large amounts of varied data sets (i.e. Big Data), which may be paired with Machine Learning methods to infer statistical models directly from the data. However, the same properties of scale, complexity and autonomous model inference are linked to increasing concerns that many of these systems are opaque to the people affected by their use and lack clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people’s human rights (e.g. safety-critical decisions in autonomous vehicles, or the allocation of health and social service resources).
This report develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. Based on our review and analysis of existing proposals for the governance of algorithmic systems, we propose a set of four policy options, each of which addresses a different aspect of algorithmic transparency and accountability:
- Awareness raising: education, watchdogs and whistleblowers
- Accountability in public sector use of algorithmic decision-making
- Regulatory oversight and legal liability
- Global coordination for algorithmic governance
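
To make the opacity concern described in the abstract concrete, here is a minimal, hypothetical sketch (not taken from the report) in Python using scikit-learn. It shows the pattern the abstract describes: a statistical model is inferred directly from data, and the decision it produces comes with no human-readable rationale for the person it affects.

```python
# Illustrative sketch only: a model inferred directly from data produces
# a decision whose rationale is not readily explainable.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a large, varied data set: 1,000 cases, 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Machine Learning infers a statistical model directly from the data.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The system outputs a decision for a new case...
decision = model.predict(X[:1])
print(decision)
# ...but the "reasoning" is spread across 100 trees and thousands of split
# thresholds, so the affected person receives no meaningful explanation.
```

This is only a toy illustration of the transparency problem the report addresses; the governance options above are aimed at exactly such systems when they are deployed in consequential decision-making.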