European Parliament Science Technology Options Assessment

A governance framework for algorithmic accountability and transparency

Study Specification:

Algorithms are widely employed throughout our economy and society to make decisions with far-reaching impacts, including decisions about access to credit, healthcare, human welfare and employment. At the same time, there is growing evidence that, due to a variety of technical, economic and social factors, some algorithms and analytics can be opaque, making it impossible to determine when their outputs may be biased or erroneous. There is also a risk that automated systems can lead to more effective cartels, for example through their ability to monitor prices.

The study is expected to draft policy options that could help the European Parliament improve the accountability and/or transparency of the algorithms that underpin many business models and platforms in the digital single market, and to prevent bias. The policy options should include a governance framework capable of verifying and demonstrating compliance with key standards of legal fairness for automated decisions, without revealing key attributes of the decision or the process by which the decision was reached.

Deliverables:

  1. Interim report;
  2. Final report and options brief;
  3. Presentation to the STOA Panel;
  4. Presentation to one or more relevant EP Committees.

Published report available

The report was published on April 4th 2019, and is available to download from the European Parliamentary Research Service website.

Co-authors:

Ansgar Koene – University of Nottingham
Chris Clifton – Purdue University
Yohko Hatada – EMLS RI
Helena Webb – University of Oxford
Menisha Patel – University of Oxford
Jacob LaViolette – University of Oxford
Caio Machado – University of Oxford
Rashida Richardson – AI Now Institute
Dillon Reisman – AI Now Institute

Abstract

Algorithmic systems are increasingly being used as part of decision-making processes in both the public and private sectors, with potentially significant consequences for individuals, organisations and societies as a whole. ‘Algorithmic systems’ in this context refers to the combination of algorithms, data and the interface processes that together determine the outcomes that affect end users. Many types of decisions can be made faster and more efficiently using algorithms. A significant factor in the adoption of algorithmic systems for decision-making is their capacity to process large amounts of varied data sets (i.e. Big Data), which may be paired with machine learning methods for inferring statistical models directly from the data. However, the same properties of scale, complexity and autonomous model inference are linked to increasing concerns that many of these systems are opaque to the people affected by their use and lack clear explanations for the decisions they make. This lack of transparency risks undermining meaningful scrutiny and accountability, which is a significant concern when these systems are applied as part of decision-making processes that can have a considerable impact on people’s human rights (e.g. safety-critical decisions in autonomous vehicles, or the allocation of health and social service resources).

This report develops policy options for the governance of algorithmic transparency and accountability, based on an analysis of the social, technical and regulatory challenges posed by algorithmic systems. Drawing on our review and analysis of existing proposals for the governance of algorithmic systems, we propose four policy options, each of which addresses a different aspect of algorithmic transparency and accountability.

  1. Awareness raising: education, watchdogs and whistleblowers
  2. Accountability in public sector use of algorithmic decision-making
  3. Regulatory oversight and legal liability
  4. Global coordination for algorithmic governance

Presentation of report at the European Parliament

On October 25th we presented the report to the European Parliament.

The session featured a presentation of our report on “A governance framework for algorithmic accountability and transparency” as well as a sister report, “Understanding algorithmic decision-making: opportunities and challenges”, prepared by Dr. Claude Castelluccia and Dr. Daniel Le Métayer from the Institut national de recherche en informatique et en automatique (Inria). Claude and Daniel’s presentation on the technical background was given first, followed by our presentation on governance (starting at time point 10:11:20, see slides below).


Stage 1: Literature Survey

For this survey we would like to ask you to list the five articles you consider to be the most important (regardless of academic discipline), for any/all of the following topics that you feel qualified to respond to:

  1. Algorithmic Transparency (e.g. Technical challenges for reducing opacity (types and causes of opacity); Technical solutions for reducing opacity; UX challenges/solutions for providing greater transparency; Tension/solutions to providing algorithmic transparency without impinging on Intellectual Property rights)
  2. Algorithmic Accountability (e.g. Technical challenges/solutions for identifying responsibility for algorithmic decisions; Mechanisms to enable questioning and redress for individuals and groups; Methods to verify algorithmic system behaviour (especially in relation to legal/standards compliance))
  3. Governance frameworks for algorithmic systems (e.g. Frameworks to ensure proper inspection of algorithmic system development/deployment – does it reflect the values of fairness set by lawmakers, judges and the public?; Frameworks for allocating responsibility and/or liability for algorithmic decisions; Creation of ethical frameworks for transparent processing of personal data and automated decision-making)
  4. Algorithmic Fairness (social justice) (e.g. Classification of level of significant social impact from algorithmic decisions; Compliance with standards of legal fairness; Potential for bias/discrimination by algorithmic decisions – causes/solutions; Impact of algorithmic systems on Data Subject Privacy (e.g. inference of privacy sensitive factors); Potential for algorithmic systems to manipulate the democratic process)
  5. Algorithmic Fairness (business practices) (e.g. Algorithmic tools for ‘cartels’, implicit collusion on pricing through algorithmic ‘synchronizing’; Price manipulation by algorithmic personalization; Key issues relating to algorithmic system Intellectual Property rights)
  6. Technological and societal needs for: Algorithmic literacy; Algorithmic transparency; Algorithmic oversight




