Tag Archives: Policy
Prior to the June 8th snap election there were two Commons Select Committee inquiries that both touched directly on our work at UnBias and for which we submitted written evidence: one on “Algorithms in decision-making” and another on “Fake News”.
2nd UnBias stakeholder workshop
It is our great pleasure to welcome you to the 2nd UnBias stakeholder workshop on June 19th 2017 at the Wellcome Collection in London, UK.
In this workshop we will build on the outcomes of the previous workshop, moving from the exploration of issues to a focus on solutions.
Aims of stakeholder workshops
Our UnBias stakeholder workshops bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on issues of fairness in relation to algorithmic practices and algorithmic design. The workshops are opportunities to share perspectives and seek answers to key project questions such as:
- What constitutes a fair algorithm?
- What kinds of (legal and ethical) responsibilities do internet companies have to ensure their algorithms produce results that are fair and without bias?
- What factors might serve to enhance users’ awareness of, and trust in, the role of algorithms in their online experience?
- How might concepts of fairness be built into algorithmic design?
The workshop discussions will be summarised in written reports and used to inform other activities in the project. This includes the production of policy recommendations and the development of a fairness toolkit consisting of three co-designed tools: 1) a consciousness-raising tool for young internet users to help them understand online environments; 2) an empowerment tool to help users navigate through online environments; 3) an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of (young) internet users.
Structure of the 2nd stakeholder workshop
The workshop will consist of two parts.
- In the first part we will present a challenge: choosing which of four possible algorithms is most fair for a limited-resource allocation task. We will do this under two transparency conditions: 1. when only observations of the outcomes are available; 2. when the rationale behind the algorithm is also known. We will conclude this part with a discussion about the reasoning behind our algorithm choices (a toy sketch of such a task appears after this list).
- Having been primed with some of the challenges of designing fair algorithmic decision systems, participants will in the second part explore ideas and frameworks for an ’empathy’ tool to help algorithmic system designers identify possible sources of bias in their system designs.
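To make the first task concrete, here is a minimal hypothetical sketch (our own illustration, not the actual workshop materials) of four simple allocation rules of the kind participants will compare. Under the first transparency condition participants would see only the printed outcomes; under the second they would also see the code and comments, i.e. each rule’s rationale.

```python
# Hypothetical sketch of a limited-resource allocation task -- our own
# illustration, not the actual workshop materials. Four simple rules
# divide a fixed budget among applicants with differing needs and merits.
import random

applicants = [
    {"name": "A", "need": 8, "merit": 3},
    {"name": "B", "need": 5, "merit": 7},
    {"name": "C", "need": 2, "merit": 9},
]
BUDGET = 10.0

def equal_split(people, budget):
    """Everyone receives the same share, regardless of need or merit."""
    return {p["name"]: budget / len(people) for p in people}

def by_need(people, budget):
    """Shares proportional to declared need."""
    total = sum(p["need"] for p in people)
    return {p["name"]: budget * p["need"] / total for p in people}

def by_merit(people, budget):
    """Shares proportional to assessed merit."""
    total = sum(p["merit"] for p in people)
    return {p["name"]: budget * p["merit"] / total for p in people}

def lottery(people, budget):
    """One randomly chosen applicant receives everything."""
    winner = random.choice(people)["name"]
    return {p["name"]: (budget if p["name"] == winner else 0.0) for p in people}

# Transparency condition 1: show participants only the outcomes printed below.
# Transparency condition 2: also show the code and docstrings above.
for rule in (equal_split, by_need, by_merit, lottery):
    print(rule.__name__, rule(applicants, BUDGET))
```

Even in this toy setting, which rule looks “most fair” can change once the rationale is visible rather than just the outcomes.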
Workshop schedule:
- 12:00 – 13:00 Lunch/informal networking
- 13:00 – 13:15 Brief introduction with update about the UnBias project & outline of the workshop
- 13:15 – 14:45 Fair resource allocation algorithm selection task
- 14:45 – 15:00 Coffee break
- 15:00 – 16:30 Empathy tool for algorithm design
- 16:30 – 17:00 Wrap up and open discussion
Privacy/confidentiality and data protection
All the workshops will be audio recorded and transcribed; this is to facilitate our analysis and ensure that we capture the detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations, as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House Rule, meaning that views expressed can be reported elsewhere but individual names and affiliations cannot.
EuroDIG 2017
About
EuroDIG 2017 will take place in Tallinn on 6-7 June and will be hosted by the Ministry of Foreign Affairs of the Republic of Estonia. EuroDIG is not a conference; it is a year-round dialogue on politics and digitisation across the whole European continent which culminates in an annual event. More about EuroDIG.
Pre- and side-events
A number of pre- and side-events will enrich the EuroDIG programme. European organisations will organise meetings on day zero (5th June), and the European Commission opens the High Level Group on Internet Governance meeting on 8th June to the public.
Participate!
Our slogan is “Always open, always inclusive and never too late to get involved!”
The Org Teams did their best to prepare the ground for in-depth multistakeholder discussion, and our Estonian host, the Ministry of Foreign Affairs, worked hard to give you a warm welcome!
Now it is up to you to engage in the discussion – the floor is always open! A first opportunity will be the open mic session, the first session after the welcome.
We would like to hear from YOU: how am I affected by Internet governance?
No chance to travel to Tallinn?
No problem! We are in Estonia, the most advanced country in Europe when it comes to digital futures! For all workshops and plenary sessions we provide video streaming (passive watching), WebEx (active remote participation) and transcription. Transcripts and videos will be available on the EuroDIG wiki after the event. Please connect via the links provided in the programme.
UnBias at EuroDIG
UnBias is contributing to EuroDIG 2017 by running a Flash session on “Accountability and Regulation of Algorithms” and as part of the organising team for the plenary session “Internet in the ‘post-truth’ era?”.
Looking forward to seeing you there!
The first IEEE P7003™ Working Group meeting
IEEE Standards Association (IEEE-SA) invites your participation in the IEEE P7003™, Standard for Algorithmic Bias Considerations Working Group.
Why get involved:
The goal of this Standards Project is to describe specific methodologies that can help users certify how they worked to address and eliminate issues of negative bias in the creation of their algorithms. “Negative bias” refers to the use of overly subjective or uninformed data sets or information known to be inconsistent with legislation concerning certain protected characteristics (such as race, gender, sexuality, etc.), or to instances of bias against groups not necessarily protected explicitly by legislation, but which otherwise diminish stakeholder or user wellbeing and for which there are good reasons to consider them inappropriate.
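As a concrete illustration of how negative bias in algorithm outputs might be detected in practice, here is a minimal sketch of the widely used “four-fifths” disparate-impact test. This is our own hypothetical example; the standard itself does not prescribe this particular test.

```python
# Hypothetical illustration of detecting negative bias in algorithm
# outputs -- our own sketch, not part of the IEEE P7003 standard.
# The "four-fifths rule" flags a group whose favourable-outcome rate
# falls below 80% of the rate for the most-favoured group.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected_bool) pairs."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / tot for g, (sel, tot) in counts.items()}

def disparate_impact(decisions, threshold=0.8):
    """Return each group's rate ratio vs. the best-off group, and a pass flag."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: (r / best, r / best >= threshold) for g, r in rates.items()}

# Toy data: group label and whether the algorithm selected the person.
decisions = ([("group_x", True)] * 50 + [("group_x", False)] * 50
             + [("group_y", True)] * 30 + [("group_y", False)] * 70)

print(disparate_impact(decisions))
# group_y's rate (0.30) is 60% of group_x's (0.50), so it is flagged.
```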
Who should participate:
Programmers, manufacturers, researchers or other stakeholders involved in creating an algorithm, along with any stakeholders defined as end users of the algorithm and any non-users affected by its use, including but not limited to customers, citizens or website visitors.
How to Participate:
If you wish to participate in the IEEE P7003™ Working Group, please contact the Working Group Chair, Ansgar Koene.
Meeting Information:
The first IEEE P7003™ Working Group meeting will be held online via WebEx on Friday, 5 May from 9:00 AM to 11:00 AM (EST).
If you cannot attend the meeting and want to be added to the distribution list please fill out this form.
Publication of 1st WP4 workshop report
We are pleased to announce that the report summarising the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.
The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.
When AI goes to War: public opinion, modern conflict, and autonomous weapons
“Growing up Digital” UnBias team members contribute to House of Lords report
On 21st March the House of Lords Select Committee on Communications published a report called ‘Growing up with the internet’. The report is based on an inquiry conducted by the House of Lords into children and the internet. UnBias team member Professor Marina Jirotka served as a specialist advisor to the inquiry, and team member Professor Derek McAuley gave oral evidence, elaborating on the written evidence submitted by Perez, Koene and McAuley.
Internet Society – European Chapters meeting
Agenda and Details
Wednesday 22 February
12:00 Lunch at the venue
Welcome and introductions, Frederic Donck
Introduction to trust, based on 2016 ISOC report (and discussion), Richard Hill
Editorial responsibility for online content – platform neutrality, recommender systems and the problem of ‘fake news’ (and discussion), Ansgar Koene
Future Internet Scenarios, Konstantinos Komaitis
17:30 Day 1 ends
19:00 Dinner (On Canal Boat leaving from Oosterdok in front of the hotel)
Thursday 23 February
9:00 Day starts
Collaborative security introduction, Olaf Kolkman
Real-life examples of collaborative security in action, Andrei Robachevsky
User-Trust, with regard to longevity and security of IoT devices (and discussion), Jonas Jacek
Round table on current issues related to user trust in Europe
12:30 Working lunch at the venue
Search ranking technologies (and discussion), Brandt Dainow
ISOC-NL presentation
Way forward: meeting on next steps, concrete actions for ISOC and chapters
16:00 Day 2 ends
Internet Society UK England – User Trust Webinar
In preparation for the European Chapters meeting (22-23 February 2017) we will hold a 90-minute webinar / conference call on Tuesday 14 February 2017 from 6pm to collect input from participants about the ways in which ISOC UK can and should engage with the theme of User Trust.
In June 2016 ISOC published a working paper, “A policy framework for an open and trusted Internet”, outlining the four interrelated dimensions to be considered when developing policies for the internet. http://www.internetsociety.org/doc/policy-framework-open-and-trusted-internet
The aim of the European Chapters meeting is to build on this and identify specific areas related to User Trust that ISOC should prioritise and focus on when engaging with policy makers to build a trusted Internet.
The specific discussions around User Trust that have been proposed for the meeting are:
- Ethical data handling
- Privacy
- Data breaches
- Examples of collaborative security in action
- Internet of Things – implications for security, privacy, control (who controls which aspects of the device: user vs. service provider), liability in case of problems, and longevity (e.g. devices embedded in infrastructure)
- Digital Literacy – the need for people to understand basic aspects of how the internet, and digital services, work in order to: improve cybersecurity; be able to give informed consent to personal data usage; understand the implications of proposed legislation (e.g. snoopers charter); …
- User generated content moderation – how to approach the issues related to fake news and editorial responsibility
- An overview of the situation in Russia
Other areas of User Trust that might be especially relevant for ISOC UK could be:
- Government surveillance powers (implications and legal challenges to the Investigative Powers Act)
- The impact of nation-first, anti-globalization movement (Brexit)
- Governance of the platform economy (e.g. Uber, Deliveroo), i.e. classification as ‘tech’ company to avoid regulations
Which areas should we prioritise? The chapters meeting is only one and a half days long, so time is limited.
Looking beyond the European Chapters meeting, what kind of follow-up activities should ISOC UK pursue, e.g. digital literacy 101 for parliamentarians?
Topic: Internet Society UK and User Trust – Webinar
Time: Feb 14, 2017 6:00 PM London
1st UnBias Stakeholder workshop
Aims of stakeholder workshops
Our UnBias stakeholder workshops bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on issues of fairness in relation to algorithmic practices and algorithmic design. The workshops are opportunities to share perspectives and seek answers to key project questions such as:
- What constitutes a fair algorithm?
- What kinds of (legal and ethical) responsibilities do internet companies have to ensure their algorithms produce results that are fair and without bias?
- What factors might serve to enhance users’ awareness of, and trust in, the role of algorithms in their online experience?
- How might concepts of fairness be built into algorithmic design?
The workshop discussions will be summarised in written reports and used to inform other activities in the project. This includes the production of policy recommendations and the development of a fairness toolkit consisting of three co-designed tools: 1) a consciousness-raising tool for young internet users to help them understand online environments; 2) an empowerment tool to help users navigate through online environments; 3) an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of (young) internet users.
The case studies
We have prepared four case studies concerning key current debates around algorithmic fairness. These relate to: 1) gaming the system – anti-Semitic autocomplete and search results; 2) news recommendation and fake news; 3) personalisation algorithms; 4) algorithmic transparency.
The case studies will help to frame discussion in the first stakeholder workshop on February 3rd 2017. Participants will be divided into four discussion groups with each group focusing on a particular case study and questions arising from it. There will then be an opportunity for open debate on these issues. You might like to read through the case studies in advance of the workshop and take a little time to reflect on the questions for consideration put forward at the end of each one. If you have a particular preference to discuss a certain case study in the workshop please let us know and we will do our best to assign you to that group.
Definitions:
To aid discussion we also suggest the following definitions for key terms:
Bias – an unjustified and/or unintended deviation in the distribution of algorithm outputs with respect to one or more of its parameter dimensions.
Discrimination (which should relate to legal definitions regarding protected categories) – unequal treatment of persons on the basis of ‘protected characteristics’ such as age, sexual identity or orientation, marital status, pregnancy, disability, race (including colour, nationality, ethnic or national origin), or religion (or lack of religion). This includes situations where a ‘protected characteristic’ is indirectly inferred via proxy categories.
Fairness – a context-dependent evaluation of the algorithm’s processes and/or outcomes against socio-cultural values. Typical examples might include evaluating the disparity between best and worst outcomes, the sum-total of outcomes, or worst-case scenarios (a toy sketch of these three evaluations appears after these definitions).
Transparency – the ability to see into the workings of the algorithm (and the relevant data) in order to know how the algorithm outputs are determined. This does not have to require publication of the source code, but might instead be more effectively achieved by a schematic diagram of the algorithm’s decision steps.
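To make the fairness definition above more concrete, here is a small illustrative sketch (our own, purely for discussion) computing the three example evaluations it mentions, applied to per-person outcome scores:

```python
# Illustrative sketch (our own, for discussion only) of the three example
# fairness evaluations named above, applied to per-person outcome scores.

def best_worst_disparity(outcomes):
    """Egalitarian view: the gap between the best-off and worst-off person."""
    return max(outcomes) - min(outcomes)

def sum_total(outcomes):
    """Utilitarian view: total welfare, ignoring how it is distributed."""
    return sum(outcomes)

def worst_case(outcomes):
    """Rawlsian view: judge an allocation by its worst-off person."""
    return min(outcomes)

allocation_a = [5, 5, 5]   # equal but modest outcomes
allocation_b = [9, 8, 1]   # higher total, larger disparity

for name, outcomes in [("A", allocation_a), ("B", allocation_b)]:
    print(name, best_worst_disparity(outcomes),
          sum_total(outcomes), worst_case(outcomes))
```

Allocation B wins on sum-total while allocation A wins on disparity and worst case, illustrating why fairness is a context-dependent evaluation rather than a single number.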
Workshop schedule:
- 9:45 – 10:00 Welcome/informal networking
- 10:00 – 10:30 Brief introduction to the UnBias project & completion of pre-workshop questionnaire
- 10:30 – 10:45 Coffee break / choosing of case-study discussion group
- 10:45 – 11:30 Case-study discussion
- 11:30 – 11:45 Coffee break
- 11:45 – 13:00 Results from case study groups opened up for plenary discussion
- 13:00 – 13:30 Wrap up, open discussion and networking
Privacy/confidentiality and data protection
All the workshops will be audio recorded and transcribed; this is to facilitate our analysis and ensure that we capture the detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations, as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House Rule, meaning that views expressed can be reported elsewhere but individual names and affiliations cannot.