June was a month of conferences and workshops for UnBias. The 3rd UnBias project meeting on June 1st, hosted this time by our Edinburgh partners, was quickly followed by the Ethicomp and EuroDIG conferences, both of which took place from June 5th to 8th.
2nd UnBias stakeholders workshop
It is our great pleasure to welcome you to the 2nd UnBias stakeholder workshop this June 19th (2017) at the Wellcome Collection in London, UK.
In this workshop we will build on the outcomes of the previous workshop, moving from the exploration of issues to a focus on solutions.
Aims of stakeholder workshops
Our UnBias stakeholder workshops bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on issues of fairness in relation to algorithmic practices and algorithmic design. The workshops are opportunities to share perspectives and seek answers to key project questions such as:
- What constitutes a fair algorithm?
- What kinds of (legal and ethical) responsibilities do internet companies have to ensure their algorithms produce results that are fair and without bias?
- What factors might serve to enhance users’ awareness of, and trust in, the role of algorithms in their online experience?
- How might concepts of fairness be built into algorithmic design?
The workshop discussions will be summarised in written reports and will be used to inform other activities in the project. This includes the production of policy recommendations and the development of a fairness toolkit consisting of three co-designed tools: 1) a consciousness-raising tool for young internet users to help them understand online environments; 2) an empowerment tool to help users navigate through online environments; 3) an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of (young) internet users.
Structure of the 2nd stakeholders workshop
The workshop will consist of two parts.
- In the first part we will present a challenge: choosing which of four possible algorithms is most fair for a limited-resource allocation task. We will do this under two transparency conditions: 1. when only observations of the outcomes are known; 2. when the rationale behind the algorithm is known. We will conclude this part with a discussion about the reasoning behind our algorithm choices. (An illustrative sketch of such allocation rules follows this list.)
- In the second part, having been primed with some of the challenges of designing fair algorithmic decision systems, we will explore ideas and frameworks for an ’empathy’ tool to help algorithmic system designers identify possible sources of bias in their system designs.
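The workshop materials themselves are not reproduced here, so purely as a hedged illustration of what "four possible algorithms" for a limited-resource allocation task might look like, consider the four hypothetical rules below. They are our own stand-ins, not the algorithms used in the workshop task; each divides a fixed budget among claimants and each is "fair" under a different rationale.

```python
# Illustrative sketch only: four hypothetical allocation rules for dividing
# a fixed budget among claimants. These are NOT the workshop's algorithms.
import random

def equal_split(claims, budget):
    """Give everyone the same share, ignoring the size of their claim."""
    share = budget / len(claims)
    return [share] * len(claims)

def proportional(claims, budget):
    """Split the budget in proportion to the size of each claim."""
    total = sum(claims)
    return [budget * c / total for c in claims]

def first_come_first_served(claims, budget):
    """Satisfy claims fully, in listed order, until the budget runs out."""
    out, remaining = [], budget
    for c in claims:
        grant = min(c, remaining)
        out.append(grant)
        remaining -= grant
    return out

def lottery(claims, budget, seed=0):
    """Satisfy claims fully, in a random order, until the budget runs out."""
    rng = random.Random(seed)
    order = list(range(len(claims)))
    rng.shuffle(order)
    out, remaining = [0.0] * len(claims), budget
    for i in order:
        grant = min(claims[i], remaining)
        out[i] = grant
        remaining -= grant
    return out

claims = [10, 20, 70]  # hypothetical requested amounts
for rule in (equal_split, proportional, first_come_first_served, lottery):
    print(rule.__name__, [round(x, 1) for x in rule(claims, budget=50)])
```

Seen only through their outputs (the first transparency condition), several of these rules can look equally defensible; it is the rationale behind each rule (the second condition) that exposes the different notion of fairness at work.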
Workshop schedule:
- 12:00 – 1:00pm Lunch/informal networking
- 1:00 – 1:15 Brief introduction with update about the UnBias project & outline of the workshop
- 1:15 – 2:45 Fair resource allocation algorithm selection task
- 2:45 – 3:00 Coffee break
- 3:00 – 4:30 Empathy tool for algorithm design
- 4:30 – 5:00 Wrap up and open discussion
Privacy/confidentiality and data protection
All the workshops will be audio recorded and transcribed. We do this to facilitate our analysis and to ensure that we capture all the detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations, as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House Rule – meaning that views expressed can be reported elsewhere but that individual names and affiliations cannot.
Publication of 1st WP4 workshop report
We are pleased to announce that the report summarising the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.
The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.
UnBias public engagement at the Explorers Fair
On February 10th and 11th, UnBias participated in the 2017 Explorers Fair Expo at the Nottingham Broadway cinema, engaging with parents, children and citizens of all ages in discussions of the ways in which algorithms affect our lives.
Explorers Fair Expo
As part of the Explorers Fair Expo at the Nottingham Broadway cinema, UnBias will run public engagement activities on Friday 10th and Saturday 11th of February 2017. All ages welcome.
Our program for the event is as follows:
Friday 10th February, 9.45 – 15.15
Drop in activity: Interacting with different web browsers & search engines – Do you care? E. Pérez-Vallejos, UoN
Hands-on exercises comparing the results obtained when using different browsers and/or search engines, inviting participants to enquire into and discuss their online preferences and concerns regarding algorithms, filtering systems, fairness and possible recommendations.
Saturday 11th February
12.45 – 13.15
Talk: Who is in charge? You or the algorithm? A. Koene, UoN
Looking for an answer to just about any question? Just look it up online. All the world’s information is available through search engines, social networks, news recommenders etc. Ever wondered how these systems select which information is relevant for you?
13.45 – 15.00
“UnBias” Youth Juries: A youth-led discussion about algorithm fairness. M. Cano, L. Dowthwaite, V. Portillo, UoN
Youth-led focus groups using different scenarios to prompt discussion about particular aspects of how the internet works (with a focus on algorithmic fairness when interacting with automated systems), giving participants the chance to share their views and express their concerns.
1st UnBias Stakeholder workshop
Aims of stakeholder workshops
Our UnBias stakeholder workshops bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on issues of fairness in relation to algorithmic practices and algorithmic design. The workshops are opportunities to share perspectives and seek answers to key project questions such as:
- What constitutes a fair algorithm?
- What kinds of (legal and ethical) responsibilities do internet companies have to ensure their algorithms produce results that are fair and without bias?
- What factors might serve to enhance users’ awareness of, and trust in, the role of algorithms in their online experience?
- How might concepts of fairness be built into algorithmic design?
The workshop discussions will be summarised in written reports and will be used to inform other activities in the project. This includes the production of policy recommendations and the development of a fairness toolkit consisting of three co-designed tools: 1) a consciousness-raising tool for young internet users to help them understand online environments; 2) an empowerment tool to help users navigate through online environments; 3) an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of (young) internet users.
The case studies
We have prepared four case studies concerning key current debates around algorithmic fairness. These relate to: 1) gaming the system – anti-Semitic autocomplete and search results; 2) news recommendation and fake news; 3) personalisation algorithms; 4) algorithmic transparency.
The case studies will help to frame discussion in the first stakeholder workshop on February 3rd 2017. Participants will be divided into four discussion groups with each group focusing on a particular case study and questions arising from it. There will then be an opportunity for open debate on these issues. You might like to read through the case studies in advance of the workshop and take a little time to reflect on the questions for consideration put forward at the end of each one. If you have a particular preference to discuss a certain case study in the workshop please let us know and we will do our best to assign you to that group.
Definitions:
To aid discussion we also suggest the following definitions for key terms:
Bias – unjustified and/or unintended deviation in the distribution of algorithm outputs, with respect to one, or more, of its parameter dimensions.
Discrimination (should relate to legal definitions regarding protected categories) – unequal treatment of persons on the basis of ‘protected characteristics’ such as age, sexual identity or orientation, marital status, pregnancy, disability, race (including colour, nationality, ethnic or national origin) and religion (or lack of religion), including situations where a ‘protected characteristic’ is indirectly inferred via proxy categories.
Fairness – a context dependent evaluation of the algorithm processes and/or outcomes against socio-cultural values. Typical examples might include evaluating: the disparity between best and worst outcomes; the sum-total of outcomes; worst case scenarios.
Transparency – the ability to see into the workings of the algorithm (and the relevant data) in order to know how the algorithm outputs are determined. This does not have to require publication of the source code, but might instead be more effectively achieved by a schematic diagram of the algorithm’s decision steps.
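To make the bias and fairness definitions above concrete, here is a minimal sketch, assuming a binary-output algorithm and a single protected attribute (the data, group labels and choice of metric are all hypothetical illustrations, not prescribed by the project): bias is measured as the deviation in the distribution of outputs across one parameter dimension, and fairness is evaluated here as the disparity between the best- and worst-treated groups.

```python
# Minimal sketch, assuming hypothetical binary decisions and two groups.
from collections import defaultdict

def positive_rate_by_group(outputs, groups):
    """Fraction of positive algorithm outputs per group."""
    counts, positives = defaultdict(int), defaultdict(int)
    for out, g in zip(outputs, groups):
        counts[g] += 1
        positives[g] += out
    return {g: positives[g] / counts[g] for g in counts}

def disparity(rates):
    """Gap between the best and worst group rates (0 = parity)."""
    return max(rates.values()) - min(rates.values())

outputs = [1, 0, 1, 1, 0, 0, 1, 0]                  # hypothetical decisions
groups  = ["A", "A", "A", "B", "B", "B", "A", "B"]  # protected attribute
rates = positive_rate_by_group(outputs, groups)
print(rates, "disparity:", round(disparity(rates), 2))
```

Other evaluations mentioned in the fairness definition, such as the sum-total of outcomes or the worst-case outcome, can be computed from the same per-group figures.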
Workshop schedule:
- 9:45 – 10:00am Welcome/informal networking
- 10:00 – 10:30 Brief introduction to the UnBias project & completion of the pre-workshop questionnaire
- 10:30 – 10:45 Coffee break / choosing of case-study discussion group
- 10:45 – 11:30 case-study discussion
- 11:30 – 11:45 Coffee break
- 11:45 – 13:00 Results from case study groups opened up for plenary discussion
- 13:00 – 13:30 Wrap up, open discussion and networking
Privacy/confidentiality and data protection
All the workshops will be audio recorded and transcribed. We do this to facilitate our analysis and to ensure that we capture all the detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations, as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House Rule – meaning that views expressed can be reported elsewhere but that individual names and affiliations cannot.
Launch of 5Rights Youth Juries report at House of Lords
You are invited to join us for the launch of a groundbreaking report that articulates the voice of children and young people, and their relationship to the internet and digital technologies:
The Internet On Our Own Terms
How Children and Young People Deliberated about their Digital Rights
6 – 8pm
Tuesday 31st January 2017
Committee Room 3A
House of Lords
London, SW1A 0PW
Speakers:
Baroness Beeban Kidron, Prof. Stephen Coleman, Dr. Elvira Perez Vallejos and youth jurors, followed by a Q&A
In April 2015 young people aged between 12 and 17 gathered together in the cities of Leeds, London and Nottingham to participate in a series of jury-styled focus groups designed to ‘put the internet on trial’. In total, nine juries took place, involving 108 young people – approximately 12 participants per jury.
The report outlines the ground-breaking research process, using actors to set scenarios for debate and a deliberative process to capture the changing views of young people as they examine a broad range of claims and evidence.
The policy suggestions, straight from the mouths and imaginations of the young participants, aimed at Ministers, Industry, Educators and Business are vibrant, surprising and pragmatic.
We hope you will join us to hear more
Announcement: 5Rights Youth Juries report launch at Parliament
The UnBias team is pleased to announce the launch of a ground-breaking report that articulates the voice of children and young people, and their relationship to the internet and digital technologies.
This report is titled ‘The Internet On Our Own Terms: How Children and Young People Deliberated about their Digital Rights’ and describes the work carried out since April 2015, in which young people aged between 12 and 17 gathered together in the cities of Leeds, London and Nottingham to participate in a series of jury-styled focus groups designed to ‘put the internet on trial’. In total, nine juries took place, involving 108 young people – approximately 12 participants per jury.
UnBias engagement with Ethics and Law communities
From Thursday 24th to Monday 28th November, the Nottingham UnBias team contributed to a series of workshop/CPD events organised by the SATORI project, the Belfast Solicitors’ Association (“BSA”) and the Ethics of Big Data working group at the University of Cambridge.
Workshop/The Ethics of Machine Learning in Professional Practice
This workshop is by application or invitation only. Limited places.
If you are interested in attending, please see information below.
Deadline : 30 October 2016
The Ethics of Using Machine Learning in Professional Practice: Perspectives from Journalism, Law and Medicine.
This workshop aims to bring practitioners from law, journalism and bio-medicine together with social scientists and computer scientists to explore the ethical questions raised by the growing use of machine learning in processes of information discovery, analysis and decision-making.
Recent examples include the deployment of machine learning methods in the development of a proprietary digital tool used to generate risk assessments which inform judges in US parole hearings, the use of bots in international newsrooms to support editors and journalists’ selection of stories for publication and Google DeepMind’s partnership with the NHS to build an app for medical practitioners treating kidney disease. Are such cases indicative of a wider trend towards the delegation of decision-making to autonomous computer systems in areas of activity which were previously the preserve of human experts?
Presentations and discussions at the symposium will explore the implications for ethics and governance of integrating machine learning and other algorithms into wider computational systems and workflows and how this process relates to evolving social processes of decision-making and accountability in professional practice in law, journalism and bio-medicine.
This workshop is by application or invitation only and discussions will be conducted under the Chatham House Rule. Researchers or professional practitioners interested in attending should apply by email to Dr Anne Alexander (raa43@cam.ac.uk) before 30 October with a short statement explaining why they would like to participate in the event. The Ethics of Big Data group also welcomes proposals for short presentations. Potential presenters should include an abstract of their proposed contribution.
Part of the Ethics of Big Data Research Group series.
Organised by Ethics of Big Data Research Group in collaboration with The Work Foundation and InformAll.