Earlier this year the UnBias team ran its first Ethicon. An Ethicon is a new kind of event developed by members of the Human Centred Computing theme at Oxford. It works as a twist on the traditional hackathon: it is geared towards foregrounding ethical issues alongside design issues in the completion of a task.
In an Ethicon, teams work together on a competitive design task. In addition to thinking about the technical features of their design, they are required to address the social and ethical implications of the particular technology involved. They are challenged to identify novel and creative solutions that embed ethical considerations into their design. Teams are interdisciplinary so that members can share expertise and learn from each other in a fun environment. They are then assessed by a panel of experts who judge the technical quality of their work alongside how well they have worked together to identify and address ethical concerns.
We ran an UnBias Ethicon with Horizon CDT students at the University of Nottingham. We began with an interactive seminar that introduced case studies relating to controversies over the use of algorithmic processes on online platforms – for instance in relation to fake news, excessive personalisation and search engine bias. We also discussed with the group different opportunities to pursue ‘fairness’ in the operation of these kinds of processes – for instance through platform design, policy change, education and user self-governance. The students were then put into teams and set a design challenge: to design a new social networking platform committed to the responsible use of algorithms. They had a week to work on the challenge in their teams before we came together for a second session, in which the students presented their design ideas and highlighted the key features they felt would ensure an ethical approach to algorithms.
Our panel of expert judges selected the winners, who received shopping vouchers as a reward for their work. All the groups gave strong presentations. We were impressed with how well they engaged with ethical issues, such as the potential for bias in the use of personalisation algorithms, and with the creative solutions they identified, such as internal alerts within platforms to raise user awareness and overcome potential bias.