On October 25th we presented our Science and Technology Options Assessment (STOA) report on “a governance framework for algorithmic accountability and transparency” to the Members of the European Parliament and the European Parliament Research Services “Panel for the Future of Science and Technology”.
Unbias Hackathon
On the week-end of June 30th and July 1st, the UnBias team hosted a two-day hackathon at Codebase in Edinburgh, with support from local outfit Product Forge, whose experience organizing such events is unrivalled in Scotland.
The hackathon challenge was formulated as follows:
“Artificial Intelligence shapes digital services that have become central to our everyday lives. Online platforms leverage the power of AI to monetize our attention, with often unethical side-effects: our privacy is routinely breached, our perception of the world is seriously distorted, and we are left with unhealthy addictions to our screens and devices. The deep asymmetry of power between users and service providers, the opacity and unaccountability of the algorithms driving these services, and their exploitation by trolls, bullies and propagandists are serious threats to our well-being in the digital era.
UnBias Algorithmic Preference Survey
We are conducting a survey on algorithm preferences for solving resource-allocation problems. The survey consists of two case studies, each offering five options for determining the allocation algorithm. Completing the survey should take between 10 and 20 minutes.
UnBias takes part in European Researchers’ Night!
How do you stay safe on the Internet? What are the dangers of online fake news and filter bubbles? What are appropriate punishments for hate speech and trolling?
These are questions we asked members of the public during the Curiosity Carnival at the University of Oxford on September 30th. The Curiosity Carnival formed part of European Researchers’ Night, celebrated in cities across Europe. Oxford ran a city-wide programme of activities across its universities, libraries, gardens and woods to give members of the public a chance to find out about real research projects and meet the people who conduct them.
A Month of Conferences and Workshops
2nd UnBias stakeholders workshop
It is our great pleasure to welcome you to the 2nd UnBias stakeholder workshop on June 19th 2017 at the Wellcome Collection in London, UK.
In this workshop we will build on the outcomes of the previous workshop, moving from the exploration of issues to a focus on solutions.
Aims of stakeholder workshops
Our UnBias stakeholder workshops bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on issues of fairness in relation to algorithmic practices and algorithmic design. The workshops are opportunities to share perspectives and seek answers to key project questions such as:
- What constitutes a fair algorithm?
- What kinds of (legal and ethical) responsibilities do internet companies have to ensure their algorithms produce results that are fair and without bias?
- What factors might serve to enhance users’ awareness of, and trust in, the role of algorithms in their online experience?
- How might concepts of fairness be built into algorithmic design?
The workshop discussions will be summarised in written reports and will be used to inform other activities in the project. This includes the production of policy recommendations and the development of a fairness toolkit consisting of three co-designed tools: 1) a consciousness-raising tool for young internet users to help them understand online environments; 2) an empowerment tool to help users navigate online environments; 3) an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of (young) internet users.
Structure of the 2nd stakeholders workshop
The workshop will consist of two parts.
- In the first part we will present a challenge: choosing which of four possible algorithms is most fair for a limited-resource allocation task. We will do this under two transparency conditions: 1. when only observations of the outcomes are known; 2. when the rationale behind the algorithm is known. We will conclude this part with a discussion of the reasoning behind our algorithm choices (see the illustrative sketch after this list).
- In the second part, having been primed with some of the challenges of designing fair algorithmic decision systems, participants will explore ideas and frameworks for an ’empathy’ tool to help algorithmic system designers identify possible sources of bias in their system design.
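For readers unfamiliar with the kind of task described in the first part, here is a minimal sketch of what competing allocation rules can look like. The four rules below are our own hypothetical examples for illustration, not the four algorithms used in the workshop. Under the first transparency condition participants would only see the resulting allocations; under the second they would also see each rule’s rationale (here, the code and its comments).

```python
# Illustrative sketch only: four hypothetical rules for allocating a limited
# resource among applicants. These are NOT the workshop's actual algorithms;
# they simply show how candidate allocation rules can differ.
import random

def first_come_first_served(requests, budget):
    """Grant requests in arrival order until the budget runs out."""
    allocation = {}
    for name, amount in requests:
        granted = min(amount, budget)
        allocation[name] = granted
        budget -= granted
    return allocation

def equal_split(requests, budget):
    """Give every applicant the same share, capped at what they asked for."""
    share = budget / len(requests)
    return {name: min(amount, share) for name, amount in requests}

def proportional_to_request(requests, budget):
    """Scale every request by the same factor so the total fits the budget."""
    total = sum(amount for _, amount in requests)
    factor = min(1.0, budget / total)
    return {name: amount * factor for name, amount in requests}

def lottery(requests, budget, seed=0):
    """Grant requests in random order until the budget runs out."""
    shuffled = list(requests)
    random.Random(seed).shuffle(shuffled)
    return first_come_first_served(shuffled, budget)

# Hypothetical applicants and requested amounts, with a budget of 100 units.
requests = [("A", 60), ("B", 30), ("C", 30)]
for rule in (first_come_first_served, equal_split, proportional_to_request, lottery):
    print(rule.__name__, rule(requests, budget=100))
```

Each rule produces a different allocation from the same requests, which is exactly the kind of divergence the workshop task asks participants to judge for fairness.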
Workshop schedule:
- 12:00 – 1:00 Lunch/informal networking
- 1:00 – 1:15 Brief introduction with update about the UnBias project & outline of the workshop
- 1:15 – 2:45 Fair resource allocation algorithm selection task
- 2:45 – 3:00 Coffee break
- 3:00 – 4:30 Empathy tool for algorithm design
- 4:30 – 5:00 Wrap up and open discussion
Privacy/confidentiality and data protection
All the workshops will be audio-recorded and transcribed. This is to facilitate our analysis and ensure that we capture all the detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House rule, meaning that views expressed can be reported elsewhere but that individual names and affiliations cannot.
UnBias project contribution to the 4th Winchester Conference on Trust, Risk, Information and the Law
The 4th Winchester Conference on Trust, Risk, Information and the Law took place at the University of Winchester on Wednesday 3rd May 2017. The overarching theme of the day was “Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones”, offering a chance for multi-stakeholder and interdisciplinary discussion of the risks and opportunities presented by algorithms, machine learning and artificial intelligence.
Publication of 1st WP4 workshop report
We are pleased to announce that the report summarising the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.
The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.
Algorithm Workshop, University of Strathclyde. February 2017
What are algorithms and how are they designed? Why are they used in commercial practice and what kinds of benefits can they bring? What are the potential harmful impacts of using algorithms and how can they be prevented?
On Wednesday 15th February 2017 some UnBias consortium members had the pleasure of attending an Algorithm Workshop hosted by the Law School, University of Strathclyde. During the workshop, we had the opportunity to consider, discuss and begin to address key issues and concerns surrounding the contemporary prevalence of algorithms. The workshop was also attended by students from the host university and an interdisciplinary group of experts from areas including Law, Computer Science and the Social Sciences. This mix of expertise made for a great afternoon of talks and discussions on the design, development and use of algorithms from various disciplinary perspectives.
1st UnBias Stakeholder workshop
Aims of stakeholder workshops
Our UnBias stakeholder workshops bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on issues of fairness in relation to algorithmic practices and algorithmic design. The workshops are opportunities to share perspectives and seek answers to key project questions such as:
- What constitutes a fair algorithm?
- What kinds of (legal and ethical) responsibilities do internet companies have to ensure their algorithms produce results that are fair and without bias?
- What factors might serve to enhance users’ awareness of, and trust in, the role of algorithms in their online experience?
- How might concepts of fairness be built into algorithmic design?
The workshop discussions will be summarised in written reports and will be used to inform other activities in the project. This includes the production of policy recommendations and the development of a fairness toolkit consisting of three co-designed tools: 1) a consciousness-raising tool for young internet users to help them understand online environments; 2) an empowerment tool to help users navigate online environments; 3) an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of (young) internet users.
The case studies
We have prepared four case studies concerning key current debates around algorithmic fairness. These relate to: 1) gaming the system – anti-Semitic autocomplete and search results; 2) news recommendation and fake news; 3) personalisation algorithms; 4) algorithmic transparency.
The case studies will help to frame discussion in the first stakeholder workshop on February 3rd 2017. Participants will be divided into four discussion groups with each group focusing on a particular case study and questions arising from it. There will then be an opportunity for open debate on these issues. You might like to read through the case studies in advance of the workshop and take a little time to reflect on the questions for consideration put forward at the end of each one. If you have a particular preference to discuss a certain case study in the workshop please let us know and we will do our best to assign you to that group.
Definitions:
To aid discussion we also suggest the following definitions for key terms:
Bias – unjustified and/or unintended deviation in the distribution of algorithm outputs with respect to one or more of its parameter dimensions.
Discrimination (relating to legal definitions of protected categories) – unequal treatment of persons on the basis of ‘protected characteristics’ such as age, sexual identity or orientation, marital status, pregnancy, disability, race (including colour, nationality, ethnic or national origin), or religion (or lack of religion), including situations where a ‘protected characteristic’ is indirectly inferred via proxy categories.
Fairness – a context-dependent evaluation of the algorithm’s processes and/or outcomes against socio-cultural values. Typical examples might include evaluating: the disparity between best and worst outcomes; the sum-total of outcomes; worst-case scenarios (see the sketch after these definitions).
Transparency – the ability to see into the workings of the algorithm (and the relevant data) in order to know how the algorithm outputs are determined. This does not have to require publication of the source code, but might instead be more effectively achieved by a schematic diagram of the algorithm’s decision steps.
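To make the fairness examples above concrete, the following is a minimal sketch (our own illustrative Python, not part of any UnBias tool or case study) of how the three evaluation examples listed under ‘Fairness’ could be computed over a set of algorithm outcomes:

```python
# Illustrative sketch: three simple summary measures matching the examples in
# the fairness definition above. The outcome values are made up for illustration.

def fairness_summaries(outcomes):
    """Return disparity (best minus worst), sum-total, and worst-case outcome."""
    return {
        "disparity": max(outcomes) - min(outcomes),  # gap between best and worst outcomes
        "sum_total": sum(outcomes),                  # total outcome across everyone
        "worst_case": min(outcomes),                 # outcome for the worst-off person
    }

# Example: outcomes (e.g. allocated resource) for four hypothetical users.
print(fairness_summaries([10, 40, 25, 25]))
# {'disparity': 30, 'sum_total': 100, 'worst_case': 10}
```

Which of these measures matters most is itself a context-dependent, socio-cultural judgement, which is why the definition treats fairness as an evaluation rather than a single formula.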
Workshop schedule:
- 9:45 – 10:00 Welcome/informal networking
- 10:00 – 10:30 Brief introduction to the UnBias project & completion of the pre-workshop questionnaire
- 10:30 – 10:45 Coffee break / choosing of case-study discussion group
- 10:45 – 11:30 case-study discussion
- 11:30 – 11:45 Coffee break
- 11:45 – 13:00 Results from case study groups opened up for plenary discussion
- 13:00 – 13:30 Wrap up, open discussion and networking
Privacy/confidentiality and data protection
All the workshops will be audio-recorded and transcribed. This is to facilitate our analysis and ensure that we capture all the detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House rule, meaning that views expressed can be reported elsewhere but that individual names and affiliations cannot.