European Parliament Science Technology Options Assessment

A governance framework for algorithmic accountability and transparency

Study Specification:

Algorithms are widely employed throughout our economy and society to make decisions with far-reaching impacts, including in applications that determine access to credit, healthcare, human welfare and employment. At the same time, there is growing evidence that, due to a variety of technical, economic and social factors, some algorithms and analytics can be opaque, making it difficult or impossible to determine when their outputs may be biased or erroneous. There is also a risk that automated systems can lead to more effective cartels, for example through their ability to monitor competitors' prices.
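The collusion risk is easiest to see in miniature: two pricing algorithms that merely observe and react to each other's public prices can drift to a supra-competitive price with no communication or agreement between their operators. The Python sketch below is purely illustrative; the sellers, the prices and the "match, then nudge upward" rule are invented assumptions, not material from the study specification.

    # Illustrative sketch only: two hypothetical pricing bots, each following a
    # simple unilateral rule, with no communication between them. All names and
    # numbers here are invented for illustration.

    COST = 1.00      # marginal cost: the competitive price floor
    CEILING = 2.00   # price neither bot will exceed
    STEP = 0.05      # size of each upward "test" nudge

    def next_price(my_last, rival_last):
        """Match the rival's last observed price, then test a small increase."""
        if rival_last >= my_last:              # rival did not undercut me
            return min(max(my_last, rival_last) + STEP, CEILING)
        return max(rival_last, COST)           # rival undercut: match, never below cost

    a, b = COST, COST                          # both start at the competitive price
    for period in range(25):
        a, b = next_price(a, b), next_price(b, a)
        print(f"period {period:2d}: seller A = {a:.2f}, seller B = {b:.2f}")

    # Both bots ratchet up to the 2.00 ceiling and stay there: a cartel-like
    # outcome reached purely through automated price monitoring.

Running the loop shows both prices climbing in lock-step from 1.00 to the 2.00 ceiling within twenty periods, the kind of tacit "synchronizing" effect the specification asks the study to examine.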

The study is expected to draft policy options that could help the European Parliament improve the accountability and/or transparency of the algorithms that underpin many business models and platforms in the digital single market, and to prevent bias. The policy options should include a governance framework able to verify and demonstrate compliance with key standards of legal fairness for automated decisions, without revealing key attributes of the decision or the process by which the decision was reached.
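One hedged reading of that verification requirement is an outcomes-only audit: a verifier receives only (protected group, decision) pairs and tests an agreed fairness metric against a tolerance, while the model, its input features and its decision logic stay undisclosed. The Python sketch below illustrates the idea with a demographic-parity check; the metric, the 0.10 tolerance and the data are assumptions chosen for illustration, not terms of the specification.

    # Minimal sketch of an outcomes-only fairness audit. The auditor never sees
    # the model or its inputs, only group labels and yes/no decisions.
    from collections import defaultdict

    def demographic_parity_gap(records):
        """Largest difference in approval rate between any two groups."""
        approved, total = defaultdict(int), defaultdict(int)
        for group, decision in records:
            total[group] += 1
            approved[group] += int(decision)
        rates = {g: approved[g] / total[g] for g in total}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical audit feed: group label and loan decision only.
    audit_feed = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
                  ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

    TOLERANCE = 0.10   # illustrative compliance threshold
    gap, rates = demographic_parity_gap(audit_feed)
    print(f"approval rates: {rates}, gap: {gap:.2f}")
    print("compliant" if gap <= TOLERANCE else "non-compliant")

A single aggregate metric is of course far weaker than "key standards of legal fairness"; the sketch only shows that some compliance checks can run on decision outcomes without exposing the decision process itself.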

After review by the European Parliament, the final report will be published at http://www.europarl.europa.eu/stoa/cms/home/studies

Deliverables:

  1. Interim report;
  2. Final report and Options Brief;
  3. Presentation to the STOA Panel;
  4. Presentation to one or more relevant EP Committees.

Stage 1: Literature Survey

For this survey, we would like to ask you to list the five articles you consider most important (regardless of academic discipline) for any or all of the following topics that you feel qualified to respond to:

  1. Algorithmic Transparency (e.g. Technical challenges for reducing opacity (types and causes of opacity); Technical solutions for reducing opacity; UX challenges/solutions for providing greater transparency; Tensions/solutions in providing algorithmic transparency without impinging on Intellectual Property rights)
  2. Algorithmic Accountability (e.g. Technical challenges/solutions for identifying responsibility for algorithmic decisions; Mechanisms to enable questioning and redress for individuals and groups; Methods to verify algorithmic system behaviour (especially in relation to legal/standards compliance))
  3. Governance frameworks for algorithmic systems (e.g. Frameworks to ensure proper inspection of algorithmic system development/deployment, verifying that it reflects the values of fairness set by lawmakers, judges and the public; Frameworks for allocating responsibility and/or liability for algorithmic decisions; Creation of an ethical framework for transparent processing of personal data and automated decision making)
  4. Algorithmic Fairness (social justice) (e.g. Classification of the level of significant social impact from algorithmic decisions; Compliance with standards of legal fairness; Potential for bias/discrimination in algorithmic decisions, and its causes/solutions; Impact of algorithmic systems on data subject privacy (e.g. inference of privacy-sensitive factors); Potential for algorithmic systems to manipulate the democratic process)
  5. Algorithmic Fairness (business practices) (e.g. Algorithmic tools for ‘cartels’, such as implicit collusion on pricing through algorithmic ‘synchronizing’; Price manipulation by algorithmic personalization; Key issues relating to algorithmic system Intellectual Property rights)
  6. Technological and societal needs for: Algorithmic literacy; Algorithmic transparency; Algorithmic oversight




