The 4th Winchester Conference on Trust, Risk, Information and the Law took place at the University of Winchester on Wednesday 3rd May 2017. The overarching theme of the day was “Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones”, offering a chance for multi-stakeholder and interdisciplinary discussion of the risks and opportunities presented by algorithms, machine learning and artificial intelligence.
Members of the UnBias team from Oxford and Nottingham attended and played an active role in the conference. We presented a poster outlining the aims of the project and its progress so far. We also hosted a workshop during the afternoon session, in which stakeholders were invited to consider a case study on algorithmic fairness and take part in wider discussion.
The case study focused on controversies around the use of algorithms in the US criminal justice system. Some US courts use algorithmically derived risk assessment scores to decide whether a defendant should be granted bail or how long their sentence should be. Those deemed “low risk” offenders are often granted bail, given shorter sentences, or perhaps kept out of jail entirely. Though risk assessment scores are made available to the defendant’s legal team, there is typically no insight into what criteria are used to calculate them, since the algorithms are proprietary to the companies that develop them. There are different, and often polarised, perspectives on the use of such decision-making tools. Advocates suggest that using algorithms removes human bias from the process and may help reduce overcrowding in prisons. However, the investigative journalism site ProPublica recently found that such algorithms may be inaccurate and racially biased. Among the issues it uncovered: only 20% of those predicted to commit a violent crime went on to do so, and black people were twice as likely as white people to be falsely labelled as being at future risk of offending.
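To make the kind of measurement behind that finding more concrete, here is a minimal Python sketch of how one might compute group-wise false positive rates for a binary “high risk” label. This is our own illustration with synthetic, hypothetical data; it is not ProPublica’s methodology or dataset.

```python
# Illustrative sketch only: group-wise false positive rates for a
# binary "high risk" prediction. All records below are synthetic.
from collections import defaultdict

# Each record: (group, predicted_high_risk, actually_reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False),
    ("B", True, False), ("B", False, False), ("B", True, True),
]

counts = defaultdict(lambda: {"fp": 0, "negatives": 0})
for group, predicted, reoffended in records:
    if not reoffended:                  # true outcome was negative
        counts[group]["negatives"] += 1
        if predicted:                   # but the tool flagged them anyway
            counts[group]["fp"] += 1

for group, c in sorted(counts.items()):
    fpr = c["fp"] / c["negatives"] if c["negatives"] else float("nan")
    print(f"group {group}: false positive rate = {fpr:.2f}")
```

A large gap in this rate between groups is one signal, among several competing fairness criteria, that a tool’s errors fall unevenly across those groups.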
This case study stimulated considerable discussion amongst our workshop participants over whether such algorithms should, or even could, be used in the criminal justice system. The presence of stakeholders from law enforcement agencies made for a lively and insightful session. Many participants were critical of development companies’ ‘right’ to keep the assessment criteria used to determine risk scores entirely private. Some also raised concerns over the data sets used to train the algorithms in the first place, and noted that although algorithms may be perceived as entirely impartial, this is not in fact possible, given the values held by those who develop them.
The workshop, other talks during the day, and a panel session to close the conference revealed the complexities involved in reaching solutions that engender fairness in the context of “artificial and de-personalised decision-making”. Indeed, they also raised the question of whether it is actually possible to embed a true sense of impartiality into these innovations. It was a wonderful day, only enhanced at the end when UnBias was awarded the prize for best poster.