On October 25th we presented our Science and Technology Options Assessment (STOA) report on “A governance framework for algorithmic accountability and transparency” to Members of the European Parliament and the European Parliamentary Research Service's “Panel for the Future of Science and Technology”.
UnBias Hackathon
On the weekend of June 30th and July 1st, the UnBias team hosted a two-day hackathon at Codebase in Edinburgh, with support from local outfit Product Forge, whose experience organizing such events is unrivalled in Scotland.
The hackathon challenge was formulated as follows:
“Artificial Intelligence shapes digital services that have become central to our everyday lives. Online platforms leverage the power of AI to monetize our attention, with often unethical side-effects: our privacy is routinely breached, our perception of the world is seriously distorted, and we are left with unhealthy addictions to our screens and devices. The deep asymmetry of power between users and service providers, the opacity and unaccountability of the algorithms driving these services, and their exploitation by trolls, bullies and propagandists are serious threats to our well-being in the digital era.”
UnBias Algorithmic Preference Survey
We are conducting a survey on algorithm preferences for solving resource-allocation problems. The survey consists of two case studies, each with five options for the allocation algorithm. Completing the survey should take 10–20 minutes.
UnBias takes part in European Researchers’ Night!
How do you stay safe on the Internet? What are the dangers of online fake news and filter bubbles? What are appropriate punishments for hate speech and trolling?
These are questions we asked members of the public during the Curiosity Carnival at the University of Oxford on September 30th. The Curiosity Carnival formed part of European Researchers’ Night, celebrated in cities across Europe. Oxford ran a city-wide programme of activities across its universities, libraries, gardens and woods to give members of the public a chance to find out about real research projects and meet the people who conduct them.
A Month of Conferences and Workshops
9th International ACM Web Science Conference 2017
The 9th International ACM Web Science Conference 2017 will be held from June 26 to June 28, 2017 in Troy, NY (USA) and is organized by the Rensselaer Web Science Research Center and the Tetherless World Constellation at RPI. The conference series, run by the Web Science Trust, follows earlier events in Athens, Raleigh, Koblenz, Evanston, Paris, Indiana, Oxford and Hannover.
The conference brings together researchers from multiple disciplines, such as computer science, sociology, economics, information science and psychology. Web Science is the emergent study of the people and technologies, applications, processes and practices that shape and are shaped by the World Wide Web. Web Science aims to draw together theories, methods and findings from across academic disciplines, and to collaborate with industry, business, government and civil society, to develop our knowledge and understanding of the Web: the largest socio-technical infrastructure in human history.
AMOIA workshop at ACM Web Science 2017
AMOIA (Algorithm Mediated Online Information Access) – user trust, transparency, control and responsibility
This Web Science 2017 workshop, delivered by the UnBias project, will be an interactive audience discussion on the role of algorithms in mediating access to information online and issues of trust, transparency, control and responsibility this raises.
The workshop will consist of two parts. The first half will feature talks from the UnBias project and related work by invited speakers. The talks by the UnBias team will contrast the concerns and recommendations that were raised by teen-aged ‘digital natives’ in our Youth Juries deliberations and user observation studies with the perspectives and suggestions from our stakeholder engagement discussions with industry, regulators and civil-society organizations. The second half will be an interactive discussion with the workshop participants based on case studies. Key questions and outcomes from this discussion will be put online for WebSci’17 conference participants to refer to and discuss/comment on during the rest of the conference.
The case studies we will focus on:
- Case Study 1: The role of recommender algorithms in hoaxes and fake news on the Web
- Case Study 2: Business models that shape AMOIA – how can Web Science boost Corporate Social Responsibility / Responsible Research and Innovation?
- Case Study 3: Unintended algorithmic discrimination on the web – routes towards detection and prevention
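The detection question raised in Case Study 3 can be made concrete with a simple fairness metric. A minimal sketch, assuming entirely hypothetical decision data and a common proxy measure (demographic parity), not any method used in the workshop:

```python
# Minimal sketch: demographic parity difference, one common proxy
# for unintended algorithmic discrimination. All data is hypothetical.

def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rate
    across groups.

    outcomes: list of 0/1 algorithm decisions (1 = favourable)
    groups:   list of group labels, one per outcome
    """
    labels = sorted(set(groups))
    rates = []
    for g in labels:
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates.append(sum(decisions) / len(decisions))
    return max(rates) - min(rates)

# Hypothetical decisions for two user groups:
# group A receives favourable outcomes 75% of the time, group B 25%.
outcomes = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A gap of zero would indicate parity; larger gaps flag a disparity worth investigating, though such outcome-only metrics cannot by themselves establish that the disparity is unfair.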
The UnBias project investigates the user experience of algorithm driven services and the processes of algorithm design. We focus on the interest of a wide range of stakeholders and carry out activities that 1) support user understanding about algorithm mediated information environments, 2) raise awareness among providers of ‘smart’ systems about the concerns and rights of users, and 3) generate debate about the ‘fair’ operation of algorithms in modern life. This EPSRC funded project will provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ that will be co-produced with stakeholders.
The workshop will be a half-day event.
Programme
9:00 – 9:10 Introduction
9:10 – 9:30 Observations from the Youth Juries deliberations with young people, by Elvira Perez (University of Nottingham)
9:30 – 9:50 Insights from user observation studies, by Helena Webb (University of Oxford)
9:50 – 10:10 Insights from discussions with industry, regulator and civil-society stakeholders, by Ansgar Koene (University of Nottingham)
10:10 – 10:30 “Platforms: Do we trust them?”, by Rene Arnold
10:30 – 10:50 “IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems”, by John Havens
10:50 – 11:10 Break
11:10 – 11:50 Discussion of case study 1
11:50 – 12:30 Discussion of case study 2
12:30 – 12:50 Break
12:50 – 13:30 Discussion of case study 3
13:30 – 14:00 Summary of outcomes
Key dates
Workshop registration deadline: 18 June 2017
Workshop date: 25 June 2017
Conference dates: 26-28 June 2017
2nd UnBias stakeholders workshop
It is our great pleasure to welcome you to the 2nd UnBias stakeholder workshop this June 19th (2017) at the Wellcome Collection in London, UK.
In this workshop we will build on the outcomes of the previous workshop, moving from the exploration of issues to a focus on solutions.
Aims of stakeholder workshops
Our UnBias stakeholder workshops bring together individuals from a range of professional backgrounds who are likely to have differing perspectives on issues of fairness in relation to algorithmic practices and algorithmic design. The workshops are opportunities to share perspectives and seek answers to key project questions such as:
- What constitutes a fair algorithm?
- What kinds of (legal and ethical) responsibilities do internet companies have to ensure their algorithms produce results that are fair and without bias?
- What factors might serve to enhance users’ awareness of, and trust in, the role of algorithms in their online experience?
- How might concepts of fairness be built into algorithmic design?
The workshop discussions will be summarised in written reports and will be used to inform other activities in the project. This includes the production of policy recommendations and the development of a fairness toolkit consisting of three co-designed tools: 1) a consciousness-raising tool for young internet users to help them understand online environments; 2) an empowerment tool to help users navigate through online environments; 3) an empathy tool for online providers and other stakeholders to help them understand the concerns and rights of (young) internet users.
Structure of the 2nd stakeholders workshop
The workshop will consist of two parts.
- In the first part we will present a challenge: choosing which of four possible algorithms is most fair for a limited-resource allocation task. We will do this under two transparency conditions: 1. when only observations of the outcomes are known; 2. when the rationale behind the algorithm is known. We will conclude this part with a discussion of the reasoning behind our algorithm choices.
- In the second part, having been primed with some of the challenges of designing fair algorithmic decision systems, we will explore ideas and frameworks for an ‘empathy’ tool to help algorithmic system designers identify possible sources of bias in their designs.
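To give a flavour of the kind of allocation task in part one, here is a minimal sketch contrasting two simple allocation rules. The rules, claimant names and figures are illustrative assumptions only, not the four algorithms used in the workshop:

```python
import random

# Illustrative sketch: two candidate rules for allocating a limited
# supply among claimants with differing demands. Hypothetical only.

def proportional_allocation(claims, supply):
    """Give each claimant a share proportional to their stated claim."""
    total = sum(claims.values())
    return {name: supply * claim / total for name, claim in claims.items()}

def lottery_allocation(claims, supply, seed=0):
    """Give the entire supply to one claimant chosen at random,
    ignoring claim sizes entirely."""
    rng = random.Random(seed)
    winner = rng.choice(sorted(claims))
    return {name: (supply if name == winner else 0) for name in claims}

# Hypothetical demands totalling 50 units, with only 25 available.
claims = {"alice": 30, "bob": 10, "carol": 10}
print(proportional_allocation(claims, supply=25))
print(lottery_allocation(claims, supply=25))
```

The two transparency conditions map onto this sketch naturally: under the first, participants would see only the resulting allocations; under the second, they would also see each rule's rationale (the docstrings and code), which can change which rule appears fairer.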
Workshop schedule:
- 12:00 – 1:00 Lunch/informal networking
- 1:00 – 1:15 Brief introduction with update about the UnBias project & outline of the workshop
- 1:15 – 2:45 Fair resource allocation algorithm selection task
- 2:45 – 3:00 Coffee break
- 3:00 – 4:30 Empathy tool for algorithm design
- 4:30 – 5:00 Wrap up and open discussion
Privacy/confidentiality and data protection
All the workshops will be audio recorded and transcribed. This is to facilitate our analysis and ensure that we capture the full detail of what is discussed. We will remove or pseudonymise the names of participating individuals and organisations as well as other potentially identifying details. We will not reveal the identities of any participants (except at the workshops themselves) unless we are given explicit permission to do so. We will also ask all participants to observe the Chatham House rule – meaning that views expressed can be reported elsewhere but that individual names and affiliations cannot.
TRILCon 2017
The 4th Winchester Conference on Trust, Risk, Information and the Law
Our overall theme for this conference will be:
Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones
Programme in brief
- Plenary Address: Prof. Katie Atkinson
‘Arguments, Values and Baseball: AI support for Legal Practice’
- Stream 1A: Automated weapons & automated investigations
- Stream 1B: Smart retail & behavioural advertising
- Stream 2A: Algorithms & criminal justice
- Stream 2B: Data power & its regulation
- Stream 1C: Artificial intelligence, decision-making & the protection of human interests
- Stream 2C: Smart contracts & smart machines
- Plenary Address: John McNamara
‘Protecting trust in a world disrupted by machine learning’
- Stream 3A: Workshop run by the UnBias project: An exploration of trust, transparency and bias in law enforcement and judicial decision support systems
- Stream 3B: Autonomous vehicles
- Stream 3C: Values & machine learning
- Panel Discussion: The Future of A.I., machine learning and algorithmic decision-making
Full programme available here
Abstract booklet available here
UnBias will be at the conference running the workshop: ‘An exploration of trust, transparency and bias in law enforcement and judicial decision support systems’
Publication of 1st WP4 workshop report
We are pleased to announce that the report summarizing the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.
The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.