Publication of 1st WP4 workshop report

We are pleased to announce that the report summarizing the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.

The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.

On the issue of defining algorithmic fairness, most stakeholders rated the following as a good or reasonable starting point: “a context-dependent evaluation of the algorithm processes and/or outcomes against socio-cultural values. Typical examples might include evaluating: the disparity between best and worst outcomes; the sum-total of outcomes; worst-case scenarios; everyone is treated/processed equally without prejudice or advantage due to task-irrelevant factors”. To this, additional suggestions were made focusing on criteria relating to social norms and values, system reliability, and non-interference with user control/agency (see page 7). On issues related to algorithm design, participant recommendations focused on transparency and a duty of care towards society as well as target users/customers (see page 9).
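For readers who want a concrete sense of the quantitative criteria named in that starting-point definition (best/worst-outcome disparity, sum-total of outcomes, worst-case scenario), the short Python sketch below shows one way they might be computed over per-user outcome scores. This is purely illustrative: the workshop did not prescribe any implementation, and all function and variable names here are hypothetical.

```python
# Hypothetical sketch only: summarising a batch of per-user outcome scores
# against the quantitative criteria mentioned in the workshop's
# starting-point definition of algorithmic fairness. The fourth criterion
# (equal treatment regardless of task-irrelevant factors) would need group
# labels for each user and is omitted here for brevity.

def fairness_indicators(outcomes):
    """Summarise a list of numeric outcome scores (one per user)."""
    best, worst = max(outcomes), min(outcomes)
    return {
        "best_worst_disparity": best - worst,  # gap between best and worst outcomes
        "sum_total": sum(outcomes),            # aggregate of all outcomes
        "worst_case": worst,                   # worst-case scenario
    }

# Example: scores an algorithm assigned to five users.
print(fairness_indicators([9, 7, 8, 3, 6]))
# {'best_worst_disparity': 6, 'sum_total': 33, 'worst_case': 3}
```

As the quoted definition stresses, such numbers are only inputs to a context-dependent evaluation against socio-cultural values, not a fairness verdict in themselves.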

Fake news case study: the discussion focused on the nature of fake news, noting that it is not a new phenomenon and that there is a lack of evidence of its actual impacts. Education, critical reading skills, trustmarks/branding, and breaking the link with financial profit were put forward as the main solutions. To the extent that algorithms might play a role, it was noted that market research suggests people don’t want personally tailored news (see page 10).

Personalisation algorithms case study: it was suggested that services marketed as personalisation should really be called task-based channelling, since the ‘user type categorizations’ they rely on don’t really address personal goals. It was proposed that personalisation can be useful for object and commercial purposes, but that it is not appropriate when applied to the dissemination of socially important information such as news coverage. This led to a discussion about the levels of control that users should have (see page 10).

Gaming the system case study: the impact of gaming systems such as search rankings was closely linked to a general lack of awareness about how such rankings are determined and how to interpret the meaning of a ranking. It was highlighted that regulation regarding censorship focuses on the removal of content but not on placement within a ranking, even though a very low ranking can often mean that something is effectively removed from people’s awareness. In terms of solutions, one idea that came up was greater user input, through ratings feedback, to help signal the difference between search ranking and content validity (see page 10).

Algorithm transparency case study: discussion on this topic started from observations about the need to clarify what counts as meaningful transparency (posting source code vs. understanding the process). This in turn requires clarity about the purpose of transparency: is it about fairness or about trust? The importance of data as an integral part of determining algorithm bias was raised, as well as the need to understand the users. Avoiding ‘gaming the system’ while still providing transparency was discussed, with solutions focusing on intermediate auditing organisations and certification (see page 11).

Plenary discussion and conclusions: discussion of the four case studies raised a number of recurrent key points that were returned to in the plenary discussion that closed the workshop. Debate amongst the participants was very productive and highlighted that: 1) the ‘problem’ of potential bias and unfairness in algorithmic practice is broad in scope, has the potential to disproportionately affect vulnerable users (such as children), and may be worsened by the current absence of effective regulation and market pluralism among online platforms; 2) the problem is also nuanced, as the presence of algorithms on online platforms can be of great benefit in helping users achieve their goals, so it is important to avoid any implication that their role is always harmful; and 3) finding solutions to the problem is highly complex. The effective regulation of algorithmic practice would appear to require accountability and responsibility on the part of platforms and other agencies, combined with the meaningful transparency of the algorithms themselves.

The next steps for the UnBias stakeholder engagement work package (WP4) are to run further workshops. These will increasingly focus on issues of regulation and how it might be possible to identify practices to support algorithmic fairness that are technically feasible as well as socially, ethically and legally valid. We will also run a parallel online questionnaire panel to seek the informed opinion of stakeholders who are unable to attend the workshops in person.
