As part of our ongoing collaboration with the UK England chapter of the Internet Society (ISOC-UK England), UnBias will run a workshop on:
Algorithmic awareness building for User Trust in online platforms
Time: Friday, November 30th 2018, 18:00 to 21:00 (UTC), London
Place: Cloudflare offices, 25 Lavington Street, Southwark, London
OUR FUTURE INTERNET: FROM BIAS TO TRUST
DIGITAL CATAPULT: OCTOBER 1ST 10.30 AM TO 5.00 PM
On October 1st the UnBias project team will be showcasing the outcomes of our work. We are looking forward to welcoming an audience of 70 stakeholders from research, law, policy, education and industry.
In addition to reporting on our major findings we will also highlight key outputs such as policy guidelines and demonstrate our exciting fairness toolkit. This engaging and interactive event will also include presentations from external speakers and opportunities for networking. Furthermore, we will announce plans for our follow-on project, ReEnTrust, which will identify mechanisms to rebuild and enhance trust in algorithmic systems.
On March 5th and 6th UnBias had the pleasure of participating in a workshop organized to mark the launch of the European Commission's Joint Research Centre's HUMAINT (HUman behaviour and MAchine INTelligence) project.
The HUMAINT project is a multidisciplinary research project that aims to understand the potential impact of machine intelligence on human behaviour. A particular focus of the project lies on human cognitive capabilities and decision making. The project recognizes that machine intelligence may provide cognitive help to people, but that algorithms can also affect personal decision making and raise privacy issues.
From 21st to 22nd February the Royal Society and the Royal Netherlands Academy of Arts and Sciences (KNAW) held a UK – Netherlands bilateral international meeting to explore common research interests in the fields of Quantum Physics and Technology, Nanochemistry and Responsible Data Science. UnBias was pleased to participate as part of the Responsible Data Science stream.
HOW DO YOU TAKE CARE ON THE INTERNET?
Members of the UnBias team and the Digital Wildfire project from the Universities of Nottingham and Oxford were delighted to participate in Mozilla Festival (MozFest), which took place over the weekend of 28th-29th October 2017. The festival saw thousands of members of the general public, of all ages and nationalities, pass through the doors of Ravensbourne College to engage in a festival that aimed to promote a healthy internet and a web for all. Issues of digital inclusion, web literacy and privacy and security were some of the key topics that were discussed at the event.
June was a month of conferences and workshops for UnBias. The 3rd UnBias project meeting on June 1st, hosted by our Edinburgh partners this time, was quickly followed by the Ethicomp and EuroDIG conferences which both took place from June 5th to 8th.
AMOIA (Algorithm Mediated Online Information Access) – user trust, transparency, control and responsibility
This Web Science 2017 workshop, delivered by the UnBias project, will be an interactive audience discussion on the role of algorithms in mediating access to information online and the issues of trust, transparency, control and responsibility that this raises.
The workshop will consist of two parts. The first half will feature talks from the UnBias project and related work by invited speakers. The talks by the UnBias team will contrast the concerns and recommendations that were raised by teen-aged ‘digital natives’ in our Youth Juries deliberations and user observation studies with the perspectives and suggestions from our stakeholder engagement discussions with industry, regulators and civil-society organizations. The second half will be an interactive discussion with the workshop participants based on case studies. Key questions and outcomes from this discussion will be put online for WebSci’17 conference participants to refer to and discuss/comment on during the rest of the conference.
The case studies we will focus on:
- Case Study 1: The role of recommender algorithms in hoaxes and fake news on the Web
- Case Study 2: Business models that shape AMOIA – how can web science boost Corporate Social Responsibility / Responsible Research and Innovation?
- Case Study 3: Unintended algorithmic discrimination on the web – routes towards detection and prevention
The UnBias project investigates the user experience of algorithm-driven services and the processes of algorithm design. We focus on the interests of a wide range of stakeholders and carry out activities that 1) support user understanding of algorithm-mediated information environments, 2) raise awareness among providers of ‘smart’ systems about the concerns and rights of users, and 3) generate debate about the ‘fair’ operation of algorithms in modern life. This EPSRC-funded project will provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ that will be co-produced with stakeholders.
The workshop will be a half-day event:
9:00 – 9:10 Introduction
9:10 – 9:30 Observations from the Youth Juries deliberations with young people, by Elvira Perez (University of Nottingham)
9:30 – 9:50 Insights from user observation studies, by Helena Webb (University of Oxford)
9:50 – 10:10 Insights from discussions with industry, regulator and civil-society stakeholders, by Ansgar Koene (University of Nottingham)
10:10 – 10:30 “Platforms: Do we trust them”, by René Arnold
10:30 – 10:50 “IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems”, by John Havens
10:50 – 11:10 Break
11:10 – 11:50 Discussion of case study 1
11:50 – 12:30 Discussion of case study 2
12:30 – 12:50 Break
12:50 – 13:30 Discussion of case study 3
13:30 – 14:00 Summary of outcomes
Workshop registration deadline: 18 June 2017
Workshop date: 25 June 2017
Conference dates: 26-28 June 2017
It is our great pleasure to welcome you to the 2nd UnBias stakeholder workshop on June 19th 2017 at the Wellcome Collection in London, UK.
In this workshop we will build on the outcomes of the previous workshop, moving from the exploration of issues to a focus on solutions.
Aims of stakeholder workshops
Our UnBias stakeholder workshops bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on issues of fairness in relation to algorithmic practices and algorithmic design. The workshops are opportunities to share perspectives and seek answers to key project questions such as:
- What constitutes a fair algorithm?
- What kinds of (legal and ethical) responsibilities do internet companies have to ensure their algorithms produce results that are fair and without bias?
- What factors might serve to enhance users’ awareness of, and trust in, the role of algorithms in their online experience?
- How might concepts of fairness be built into algorithmic design?
The workshop discussions will be summarised in written reports and will be used to inform other activities in the project. This includes the production of policy recommendations and the development of a fairness toolkit consisting of three co-designed tools: 1) a consciousness raising tool for young internet users to help them understand online environments; 2) an empowerment tool to help users navigate through online environments; 3) an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of (young) internet users.
Structure of the 2nd stakeholders workshop
The workshop will consist of two parts.
- In the first part we will present a challenge: choosing which of four possible algorithms is most fair for a limited resource allocation task. We will do this under two transparency conditions: 1. when only observations of the outcomes are known; 2. when the rationale behind the algorithm is known. We will conclude this part with a discussion of the reasoning behind our algorithm choices.
- In the second part, having been primed with some of the challenges of designing fair algorithmic decision systems, participants will explore ideas and frameworks for an ’empathy’ tool to help algorithmic system designers identify possible sources of bias in their system designs.
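To give a flavour of the kind of task involved, the sketch below applies three allocation rules to the same set of requests. These rules are hypothetical illustrations, not the four algorithms actually used in the workshop: each can plausibly be called "fair", yet they divide the same scarce resource very differently.

```python
# Hypothetical allocation rules for dividing a limited resource among
# claimants. Illustrative only; not the workshop's actual algorithms.

def equal_split(requests, supply):
    """Everyone gets the same share, capped at what they asked for."""
    share = supply / len(requests)
    return [min(r, share) for r in requests]

def proportional(requests, supply):
    """Shares proportional to the size of each request."""
    total = sum(requests)
    return [supply * r / total for r in requests]

def first_come_first_served(requests, supply):
    """Serve requests in order until the resource runs out."""
    allocations = []
    for r in requests:
        give = min(r, supply)
        allocations.append(give)
        supply -= give
    return allocations

requests, supply = [10, 40, 50], 60
for rule in (equal_split, proportional, first_come_first_served):
    print(rule.__name__, rule(requests, supply))
```

Note that `equal_split` as written does not redistribute the unused portion of a capped request; whether to redistribute it (as max-min fairness would) is itself a fairness choice of exactly the kind the task asks participants to reason about.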
- 12:00 – 1:00 Lunch/informal networking
- 1:00 – 1:15 Brief introduction with update about the UnBias project & outline of the workshop
- 1:15 – 2:45 Fair resource allocation algorithm selection task
- 2:45 – 3:00 Coffee break
- 3:00 – 4:30 Empathy tool for algorithm design
- 4:30 – 5:00 Wrap up and open discussion
Privacy/confidentiality and data protection
All the workshops will be audio recorded and transcribed. This is to facilitate our analysis and to ensure that we capture the full detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House Rule – meaning that views expressed can be reported elsewhere but that individual names and affiliations cannot.
The 4th Winchester Conference on Trust, Risk, Information and the Law took place at the University of Winchester on Wednesday 3rd May 2017. The overarching theme of the day was “Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones”: offering a chance for multi-stakeholder and interdisciplinary discussion on the risks and opportunities presented by algorithms, machine learning and artificial intelligence.
The 4th Winchester Conference on Trust, Risk, Information and the Law
Our overall theme for this conference will be:
Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones
Programme in brief
- Plenary Address: Prof. Katie Atkinson
‘Arguments, Values and Baseball: AI support for Legal Practice’
- Stream 1A: Automated weapons & automated investigations
- Stream 1B: Smart retail & behavioural advertising
- Stream 2A: Algorithms & criminal justice
- Stream 2B: Data power & its regulation
- Stream 1C: Artificial intelligence, decision-making & the protection of human interests
- Stream 2C: Smart contracts & smart machines
- Plenary Address: John McNamara
‘Protecting trust in a world disrupted by machine learning’
- Stream 3A: Workshop run by the UnBias project: An exploration of trust, transparency and bias in law enforcement and judicial decision support systems
- Stream 3B: Autonomous vehicles
- Stream 3C: Values & machine learning
- Panel Discussion: The Future of A.I., machine learning and
UnBias will be at the conference running the workshop: ‘An exploration of trust, transparency and bias in law enforcement and judicial decision support systems’
This workshop will consist of two parts. In the first twenty minutes we will review some of the outcomes of the UnBias project. Specifically, we will contrast the concerns and recommendations that were raised by teen-aged ‘digital natives’ in our Youth Juries deliberations with the perspectives and suggestions from our stakeholder engagement discussions. We will then spend a couple of minutes introducing our workshop participants to a case study based on the ProPublica report of bias in the COMPAS algorithm for recidivism probability forecasting, and the subsequent studies showing that it is not possible for an algorithm to be equally predictive across groups, without disparities in the harms caused by incorrect predictions, when the two populations have unequal base rates. This case study will form the basis for discussions during the remainder of the session. Some of the questions we will raise include: what are the implications of such findings for trust in law enforcement and judicial rulings? What are the minimum levels of transparency and output auditability that a decision support system must have in order to maintain trust in a fair application of the law? The outcomes of the discussion will be summarized in a short report that will be sent to all participants and will feed into the development of policy recommendations by UnBias.
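A minimal numeric sketch of that impossibility result (the numbers are invented for illustration and are not taken from the COMPAS data): if a risk score is required to have the same positive predictive value (PPV) and the same true positive rate for two groups whose base rates differ, then the false positive rate, i.e. the share of non-reoffenders wrongly flagged as high risk, is forced to differ between the groups.

```python
def fpr_given_ppv(base_rate, tpr, ppv):
    """False positive rate forced by fixing TPR and PPV.

    With TP = base_rate * tpr and PPV = TP / (TP + FP),
    solve for FP and divide by the negative class share.
    """
    tp = base_rate * tpr
    fp = tp * (1 - ppv) / ppv
    return fp / (1 - base_rate)

# Same predictive performance (TPR and PPV) applied to two groups
# that differ only in base rate (hypothetical numbers).
tpr, ppv = 0.6, 0.7
for name, base_rate in [("group A", 0.5), ("group B", 0.3)]:
    print(f"{name}: base rate {base_rate:.0%}, "
          f"FPR {fpr_given_ppv(base_rate, tpr, ppv):.1%}")
# group A: base rate 50%, FPR 25.7%
# group B: base rate 30%, FPR 11.0%
```

Even though the score is equally accurate for both groups in the calibration sense, members of the higher-base-rate group face more than twice the rate of false alarms, which is the kind of disparity in harm the workshop discussion takes as its starting point.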