To what extent can AI/statistical systems support the criminal justice process? Can we rely on algorithmic calculations to help us make decisions about whether an offender should receive a prison sentence? Are sentencing decisions made by statistical systems more or less likely to be flawed than those made by humans? As the use of AI in criminal justice systems around the world continues to grow, these questions become ever more urgent to discuss – and they were the focus of a recent roundtable discussion held at Oxford.
The Roundtable on Uses of Artificial Intelligence in the Criminal Justice System was organised as part of the AI and Law in New Zealand project at the University of Otago, with support from the Uehiro Centre for Practical Ethics at Oxford University. The event was held on November 23rd and 24th 2017 at St Anne’s College, Oxford. The project team also held a second roundtable on the same theme at the University of Otago on December 11th and 12th 2017. The aim of the event was to facilitate discussion on AI and criminal justice from multiple perspectives, and it was attended by, amongst others, lawyers, policy researchers, AI technologists, statisticians, ethicists and police officers. The event was structured around five issues, as set out by the organisers:
• Accuracy: How reliable are the system’s predictions or judgements? How can it be tested for accuracy? Should the results of such evaluations be made public?
• Bias: Is the system discriminatory towards particular social groups? Is bias ever acceptable, if it leads to higher accuracy? Are there ways of removing bias without compromising accuracy?
• Control: How do human decision-makers interact with the system? How can we most productively combine human decisions with the system’s processes? What control should humans have over the system’s outputs?
• Transparency: Should there be a requirement that the system’s outputs be ‘explainable’? If so, how can/should explanations be provided? Can these explanations be provided without infringing on individuals’ privacy, or (for commercial systems) disclosing proprietary code?
• Oversight and Regulation: What ethical or legal frameworks should be established to ensure good practice on all of the above issues?
UnBias was represented by project team member Helena Webb, who contributed to the discussions by giving a presentation on the social and normative dimensions of algorithmic bias. She described the ways in which some contemporary uses of algorithms in the criminal justice system (and other contexts) have been criticised for disadvantaging, or even discriminating against, certain societal groups. She highlighted that, given the newness of AI and algorithmic decision-making, much remains unknown about the potential positive and negative impacts of these innovations. She then drew on findings from our UnBias stakeholder workshops to outline the challenges that lie ahead in working through these issues, and the importance of soliciting multiple perspectives in order to identify effective ways in which bias might be managed or avoided.