For the past two years our UnBias project has been exploring issues around ‘unfairness’ in algorithmic systems. Our research has identified concerns across different societal and professional groups about the contemporary prevalence of algorithm-driven online platforms. At the same time, we have also identified amongst these groups a desire for change to improve the user experience of platforms. Our analysis highlights several opportunities for positive change – in particular in relation to education, societal engagement and policy.
As the UnBias project draws to a close, we can report a range of substantial findings. A summary can be downloaded here or watched as a short narrated slideshow below.
You can also find more details about our key findings and outputs here.
From December 2018 our project team will be working on a new project to follow on from UnBias. ReEnTrust will identify mechanisms to foster trust between users, algorithms and platforms.
On October 25th we presented our Science and Technology Options Assessment (STOA) report on “a governance framework for algorithmic accountability and transparency” to the Members of the European Parliament and the European Parliament Research Services “Panel for the Future of Science and Technology”.
For two years the UnBias project has been examining the user experience of algorithm-driven internet platforms and seeking answers to important questions such as:
Are algorithms ever neutral?
How might algorithmic systems produce unexpected outcomes that systematically disadvantage individuals, groups or communities?
How can we make sure that algorithmic processes operate in our best interests?
On October 1st 2018 we held our UnBias project showcase event. Seventy attendees from the fields of research, policy, law, industry and education came together at the Digital Catapult in London to hear and discuss our key project findings. We also highlighted our practical outputs such as policy guidelines and our exciting fairness toolkit.
The event also included a lively panel debate and engaging presentations from external speakers about the social consequences of algorithmic biases and how they might be addressed. We ended the day by announcing plans for our follow-on project, ReEnTrust, which will identify mechanisms to rebuild and enhance trust in algorithmic systems.
The Nottingham UnBias team have finished running our Youth Juries and we are delighted to announce the launch of a report – ‘Youth Juries: what we learned from you’ – for the children and young people who participated. If you were one of these people, thank you for taking part! We hope that you enjoy reading the report. We would also like to thank our Youth Advisory Group for their thoughtful and constructive insights that helped to design and shape the report.
For a shorter summary of what we did in the Youth Juries and what the team discovered, please keep reading…
On October 1st the UnBias project team will be showcasing the outcomes of our work. We are looking forward to welcoming an audience of 70 stakeholders from research, law, policy, education and industry.
In addition to reporting on our major findings we will also highlight key outputs such as policy guidelines and demonstrate our exciting fairness toolkit. This engaging and interactive event will also include presentations from external speakers and opportunities for networking. Furthermore, we will announce plans for our follow-on project, ReEnTrust, which will identify mechanisms to rebuild and enhance trust in algorithmic systems.
Over the last two years UnBias has engaged with a wide range of stakeholders to explore the issue of bias in algorithmic decision-making systems. In this post I wanted to share some personal thoughts on this issue, especially in relation to the introduction of “AI” systems.