On October 1st the UnBias project team will be showcasing the outcomes of our work. We are looking forward to welcoming an audience of 70 stakeholders from research, law, policy, education and industry.
In addition to reporting on our major findings, we will highlight key outputs such as our policy guidelines and demonstrate our fairness toolkit. This engaging and interactive event will also include presentations from external speakers and opportunities for networking. Furthermore, we will announce plans for our follow-on project, ReEnTrust, which will identify mechanisms to rebuild and enhance trust in algorithmic systems.
Our new video animation explains what algorithms are, how they shape our online browsing and how they can create risks of bias. It also describes how the UnBias project seeks to promote a future Internet that is free and fair for all. Watch it here!
Earlier this year the UnBias team ran its first Ethical Hackathon. These are a new kind of event developed by members of the Human Centred Computing theme at Oxford. They work as a twist on the traditional hackathon: by building on principles of responsible innovation, our ethical hackathons foreground ethical issues alongside design ones in the completion of a task.
In an ethical hackathon, teams work together on a competitive design task. In addition to thinking about the technical features of their design, they are required to address the social and ethical implications of the particular technology involved. They are challenged to identify novel and creative solutions that embed ethical considerations into their design. Teams are interdisciplinary so that members can share expertise and learn from each other in a fun environment. Each team is then assessed by a panel of experts who judge the technical quality of its work alongside how well it has worked together to identify and address ethical concerns.
To what extent can AI/statistical systems support the criminal justice process? Can we rely on algorithmic calculations to help us decide whether an offender should receive a prison sentence? Are sentencing decisions made by statistical systems more or less likely to be flawed than those made by humans? As the use of AI in criminal justice systems around the world continues to grow, these questions become ever more urgent to discuss – and they were the focus of a recent roundtable discussion held at Oxford.