Our Key Findings!

For the past two years, our UnBias project has been exploring issues around ‘unfairness’ in algorithmic systems. Our research has identified concerns across different societal and professional groups about the contemporary prevalence of algorithm-driven online platforms. At the same time, we have identified amongst these groups a desire for change to improve the user experience of platforms. Our analysis highlights several opportunities for positive change, in particular in relation to education, societal engagement and policy.

As the UnBias project draws to a close, we can report a range of substantial findings. A summary can be downloaded here or watched as a short narrated slideshow below.

You can also find more details about our key findings and outputs here.

From December 2018, our project team will be working on a new project to follow on from UnBias. ReEnTrust will identify mechanisms to foster trust between users, algorithms and platforms.

Presentation of “a governance framework for algorithmic accountability and transparency” at the European Parliament

On October 25th we presented our Science Technology Options Assessment (STOA) report on “a governance framework for algorithmic accountability and transparency” to the Members of the European Parliament and the European Parliament Research Services “Panel for the Future of Science and Technology”.

