AMOIA (Algorithm Mediated Online Information Access) – user trust, transparency, control and responsibility
This Web Science 2017 workshop, delivered by the UnBias project, will be an interactive audience discussion on the role of algorithms in mediating access to information online and the issues of trust, transparency, control and responsibility that this raises.
The workshop will consist of two parts. The first half will feature talks from the UnBias project and related work by invited speakers. The talks by the UnBias team will contrast the concerns and recommendations raised by teenage ‘digital natives’ in our Youth Juries deliberations and user observation studies with the perspectives and suggestions from our stakeholder engagement discussions with industry, regulators and civil-society organisations. The second half will be an interactive discussion with the workshop participants based on case studies. Key questions and outcomes from this discussion will be put online for WebSci’17 conference participants to refer to, discuss and comment on during the rest of the conference.
The case studies we will focus on are:
- Case Study 1: The role of recommender algorithms in hoaxes and fake news on the Web
- Case Study 2: Business models that shape AMOIA – how can web science boost Corporate Social Responsibility / Responsible Research and Innovation?
- Case Study 3: Unintended algorithmic discrimination on the web – routes towards detection and prevention
The UnBias project investigates the user experience of algorithm-driven services and the processes of algorithm design. We focus on the interests of a wide range of stakeholders and carry out activities that 1) support user understanding of algorithm-mediated information environments, 2) raise awareness among providers of ‘smart’ systems about the concerns and rights of users, and 3) generate debate about the ‘fair’ operation of algorithms in modern life. This EPSRC-funded project will provide policy recommendations, ethical guidelines and a ‘fairness toolkit’ co-produced with stakeholders.
The workshop will be a half-day event.
Programme
9:00 – 9:10 Introduction
9:10 – 9:30 Observations from the Youth Juries deliberations with young people, by Elvira Perez (University of Nottingham)
9:30 – 9:50 Insights from user observation studies, by Helena Webb (University of Oxford)
9:50 – 10:10 Insights from discussions with industry, regulators and civil-society stakeholders, by Ansgar Koene (University of Nottingham)
10:10 – 10:30 “Platforms: Do we trust them?”, by Rene Arnold
10:30 – 10:50 “IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems”, by John Havens
10:50 – 11:10 Break
11:10 – 11:50 Discussion of case study 1
11:50 – 12:30 Discussion of case study 2
12:30 – 12:50 Break
12:50 – 13:30 Discussion of case study 3
13:30 – 14:00 Summary of outcomes
Key dates
Workshop registration deadline: 18 June 2017
Workshop date: 25 June 2017
Conference dates: 26-28 June 2017
The weaponisation of artificial intelligence presents one of the greatest ethical and technological challenges of the 21st century and has been described as the third revolution in warfare, after the inventions of gunpowder and nuclear weapons. Despite the vital importance of this development for modern society, for legal and ethical practice, and for technological research, there has been little systematic study of public opinion on this critical issue. Our interdisciplinary project, sponsored by CHERISH Digital Economy, addresses this gap. Our objective is to analyse which factors determine public attitudes towards the use of fully autonomous weapons.
To do this, we will produce a series of plausible but fictitious scenarios to be presented to young adults (18-25 years old) taking part in focus groups. The scenarios will contain dilemmas designed to stimulate discussion. The aim of these focus groups is not simply to find out what young people think and feel about fully autonomous weapons, but to discover what shapes their thinking: how they come to define certain scenarios as problematic; how they work together to think through solutions to these problems; the extent to which they are prepared to change their minds in response to discussion with peers or exposure to new information; and how they translate their ideas into practical policy recommendations. Our working hypothesis is that society is not equipped by either biological evolution or contemporary human culture to make informed evaluations about the ethical implications of using autonomous agents to fight our wars for us.
This workshop is designed to bring together researchers interested in ethics, virtual/augmented reality, modern warfare, public opinion, artificial intelligence and robotics. Speakers will be asked to share their research in a 20-minute presentation, with the goal of contributing to an edited volume based on the workshop, in which Routledge Publishing and Rowman Books have expressed an interest, and with the aim of developing a proposal for a major funding bid for the next stage of this project.
Bursaries are available to support the travel costs incurred in attending this workshop. Please direct your inquiries to Elvira Perez (Elvira.Perez@nottingham.ac.uk).
Workshop organised by:
Eugene Miakinkov, Lecturer in War and Society, Swansea University
Elvira Perez, Senior Research Fellow, University of Nottingham
Rob Wortham, PhD Researcher, University of Bath
Workshop sponsored by: CHERISH Digital Economy
A recent report from the BBC covers one instance of the ever-growing use of algorithms for social purposes and helps us to illustrate some key ethical concerns underpinning the UnBias project.
This workshop is by application or invitation only. Limited places.
If you are interested in attending, please see information below.
Deadline: 30 October 2016
The Ethics of Using Machine Learning in Professional Practice: Perspectives from Journalism, Law and Medicine.
This workshop aims to bring practitioners from law, journalism and bio-medicine together with social scientists and computer scientists to explore the ethical questions raised by the growing use of machine learning in processes of information discovery, analysis and decision-making.
Recent examples include the deployment of machine learning methods in a proprietary digital tool used to generate risk assessments that inform judges in US parole hearings; the use of bots in international newsrooms to support editors’ and journalists’ selection of stories for publication; and Google DeepMind’s partnership with the NHS to build an app for medical practitioners treating kidney disease. Are such cases indicative of a wider trend towards the delegation of decision-making to autonomous computer systems in areas of activity that were previously the preserve of human experts?
Presentations and discussions at the workshop will explore the implications for ethics and governance of integrating machine learning and other algorithms into wider computational systems and workflows, and how this process relates to evolving social processes of decision-making and accountability in professional practice in law, journalism and bio-medicine.
This workshop is by application or invitation only, and discussions will be conducted under the Chatham House Rule. Researchers or professional practitioners interested in attending should apply by email to Dr Anne Alexander (raa43@cam.ac.uk) before 30 October 2016 with a short statement explaining why they would like to participate in the event. The Ethics of Big Data group also welcomes proposals for short presentations; potential presenters should include an abstract of their proposed contribution.
Part of the Ethics of Big Data Research Group series.
Organised by Ethics of Big Data Research Group in collaboration with The Work Foundation and InformAll.
A half-day legal training seminar on internet law, organised by the Belfast Solicitors’ Association (“BSA”) in cooperation with the Bar of Northern Ireland.
AGENDA:
12.30 – 1.00pm LUNCH
WELCOME NOTE @ 1.00pm (3 mins)
Chairman Bar Council Liam McCollum QC / Chairman BSA Olivia O’Kane (5 mins)
- KEY NOTE (30-45 mins)
Honourable Mr Justice Stephens – INSERT TALK TITLE
- INSERT TALK TITLE (15-20 mins)
Olivia O’Kane, Solicitor
- Defamation Practice – Some Reminiscences and Lessons Learnt (15-20 mins)
David Ringland QC
- 2.40pm REFRESHMENTS
- Responsibility and accountability in algorithm mediated services – a look at regulatory and policy concerns (30 mins)
Dr. Ansgar Koene, Senior Research Fellow: Digital Economy Research Institute, University of Nottingham
- Privacy – A poor man’s defamation law? (15-20 mins)
Ronan Lavery QC
- Computer Prefetch, Shellbags, and Mounted Devices – what information can they glean for you? (30 mins)
Paul Birch, BDO Computer Forensics
4.00pm DRINKS & NETWORKING
24 November 2016, 0900-1700 GMT.
Wellcome Collection, 183 Euston Road, London, NW1 2BE, UK
The SATORI project will organise a one-day mutual-learning workshop on 24 November 2016 at the Wellcome Collection, London. At this workshop, SATORI partners will present the project’s preliminary findings and discuss, particularly with organisations engaged in ethics assessment and related practices (e.g. ethics review, institutional review, corporate social responsibility in relation to R&I), how to move forward. The workshop will address: the institutional landscape for ethics assessment and the challenges research ethics committees face in ethics assessment; and SATORI proposals for ethics assessment procedures and ethical impact assessment, and how to connect these to research and innovation. Participation will be limited to around 20 people to allow for focused discussion.
Who should attend: research ethics committee members.
A lot has been said about algorithms working as gatekeepers and making decisions on our behalf, often without us noticing. I can certainly find an example in my daily life where I do notice it and benefit from it: Spotify’s “Discover Weekly” playlist. By comparing my listening habits to those of other users with similar but not identical tastes, Spotify allows information on the fringes to be shared. The playlist is thus “tailored” to my music taste, and it is incredibly accurate in predicting things I will like. It also lets me discover new music and bands, and on many occasions it takes me back in time with tunes I have probably not listened to for a long time.
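For readers curious how such a playlist can work under the hood, here is a minimal sketch of user-based collaborative filtering, the general technique behind “users with similar tastes also played…” recommendations. All user names and play counts below are invented for illustration; Spotify’s actual pipeline is proprietary and far more sophisticated.

```python
# Minimal sketch of user-based collaborative filtering (illustrative only:
# all users and play counts are invented, and the real Spotify system is
# proprietary and far more sophisticated).
from math import sqrt

# Play counts per user: {user: {track: plays}}
plays = {
    "me":    {"track_a": 12, "track_b": 5, "track_c": 8},
    "user1": {"track_a": 10, "track_b": 4, "track_d": 9},  # similar taste to "me"
    "user2": {"track_e": 7, "track_f": 3},                 # very different taste
}

def cosine(u, v):
    """Cosine similarity between two sparse play-count vectors."""
    dot = sum(u[t] * v[t] for t in set(u) & set(v))
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def discover(target, k=5):
    """Rank tracks the target has never played, weighted by how similar
    each neighbour is and how much that neighbour plays the track."""
    scores = {}
    for user, vec in plays.items():
        if user == target:
            continue
        sim = cosine(plays[target], vec)
        for track, count in vec.items():
            if track not in plays[target]:
                scores[track] = scores.get(track, 0.0) + sim * count
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(discover("me"))  # "track_d" ranks first: it comes from the similar user
```

The “similar but not identical” neighbour is what produces the fringe-sharing effect described above: the tracks that neighbour plays and I have never played are exactly the ones that surface in my recommendations.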