To what extent can AI and statistical systems support the criminal justice process? Can we rely on algorithmic calculations to help us decide whether an offender should receive a prison sentence? Are sentencing decisions made by statistical systems more or less likely to be flawed than those made by humans? As the use of AI in criminal justice systems around the world continues to grow, these questions become more and more urgent to discuss – and they were the focus of a recent roundtable discussion held at Oxford.
Some of us attended a joint conference of the ECREA (European Communication Research and Education Association) Communication and Media Industries section, held on 10th-11th November in Stockholm. About 100 people took part, mainly academics, researchers from NGOs and media consultants from Europe and the US.
Members of the UnBias team and the Digital Wildfire project from the Universities of Nottingham and Oxford were delighted to participate in Mozilla Festival (MozFest), which took place over the weekend of 28th-29th October 2017. Thousands of members of the general public, of all ages and nationalities, passed through the doors of Ravensbourne College to engage in a festival aiming to promote a healthy internet and a web for all. Digital inclusion, web literacy, and privacy and security were among the key topics discussed at the event.
How do you stay safe on the Internet? What are the dangers of online fake news and filter bubbles? What are appropriate punishments for hate speech and trolling?
These are questions we asked members of the public during the Curiosity Carnival at the University of Oxford on September 30th. The Curiosity Carnival formed part of European Researchers’ Night, celebrated in cities across Europe. Oxford ran a city-wide programme of activities across its universities, libraries, gardens and woods to give members of the public a chance to find out about real research projects and meet the people who conduct them.
On September 7th the Guardian published an article drawing attention to a study from Stanford University which had applied Deep Neural Networks (a form of machine learning AI) to test whether they could distinguish people’s sexual orientation from facial images. After reading both the original study and the Guardian’s report about it, I found so many problematic aspects of the study that I immediately had to write a response, which was published in the Conversation on September 13th under the title “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”.
Prior to the June 8th snap election there were two Commons Select Committee inquiries that both touched directly on our work at UnBias and for which we submitted written evidence: one on “Algorithms in decision-making” and another on “Fake News”.
In the current BBC series Secrets of Silicon Valley Jamie Bartlett (technology writer and Director of the Centre for Social Media Analysis at Demos) explores the ‘dark reality behind Silicon Valley’s glittering promise to build a better world.’ Episode 2, The Persuasion Machine, shines a spotlight on several of the issues we are investigating in UnBias.
As part of our work to contribute to the development of the IEEE P7003 Standard for Algorithm Bias Considerations, we are reaching out to the community of stakeholders to ask for use cases highlighting real-world instances of unjustified and/or inappropriate bias in algorithmic decisions.
June was a month of conferences and workshops for UnBias. The 3rd UnBias project meeting on June 1st, hosted by our Edinburgh partners this time, was quickly followed by the Ethicomp and EuroDIG conferences which both took place from June 5th to 8th.