For the past two years our UnBias project has been exploring issues around ‘unfairness’ in algorithmic systems. Our research has identified concerns across different societal and professional groups about the contemporary prevalence of algorithm-driven online platforms. At the same time, we have also identified amongst these groups a desire for change to improve the user experience of platforms. Our analysis highlights several opportunities for positive change – in particular in relation to education, societal engagement and policy.
As the UnBias project draws to a close, we can report a range of substantial findings. A summary can be downloaded here or watched as a short narrated slideshow below.
You can also find more details about our key findings and outputs here.
From December 2018 our project team will be working on a new project to follow on from UnBias. ReEnTrust will identify mechanisms to foster trust between users, algorithms and platforms.
With Paris playing host to the Paris Peace Forum from 11 to 13 November, the GovTech summit on November 12th, the Internet Governance Forum (IGF2018) from 12 to 14 November, and concluding with a UNESCO/ISOC/Mozilla symposium on November 15th (and the ITU Plenipotentiary Conference 2018 running simultaneously in Dubai from 29 October to 16 November), the start of November saw a lot of activity relating to Internet (and associated) Governance. For those of us based in the UK, this series of conferences continued with the UK IGF 2018 on November 22nd.
Reporting on our work towards developing policy recommendations, industry standards and educational resources, UnBias participated in the IGF2018, the UNESCO/ISOC/Mozilla symposium, and UKIGF2018, and also gave an informal presentation at the CNIL.
Continue reading UnBias participation in multi-stakeholder debates on Internet Governance and AI Ethics
As part of our ongoing collaboration with the UK England chapter of the Internet Society (ISOC-UK England), UnBias will run a workshop on:
Algorithmic awareness building for User Trust in online platforms
Time: Friday, November 30th 2018, 18:00 to 21:00 (UTC), London
Place: Cloudflare offices, 25 Lavington Street, Southwark, London (link to Google Map)
Follow this link to register.
Continue reading Workshop on Algorithmic awareness building for User Trust in online platforms
On October 25th we presented our Science Technology Options Assessment (STOA) report on “a governance framework for algorithmic accountability and transparency” to the Members of the European Parliament and the European Parliament Research Services “Panel for the Future of Science and Technology”.
Continue reading Presentation of “a governance framework for algorithmic accountability and transparency” at the European Parliament
For two years the UnBias project has been examining the user experience of algorithm-driven internet platforms and seeking answers to important questions such as:
- Are algorithms ever neutral?
- How might algorithmic systems produce unexpected outcomes that systematically disadvantage individuals, groups or communities?
- How can we make sure that algorithmic processes operate in our best interests?
On October 1st 2018 we held our UnBias project showcase event. Seventy attendees from the fields of research, policy, law, industry and education came together at the Digital Catapult in London to hear and discuss our key project findings. We also highlighted our practical outputs such as policy guidelines and our exciting fairness toolkit.
The event also included a lively panel debate and engaging presentations from external speakers about the social consequences of algorithmic biases and how they might be addressed. We ended the day by announcing plans for our follow-on project, ReEnTrust, which will identify mechanisms to rebuild and enhance trust in algorithmic systems.
Watch an overview of the day here!
The Nottingham UnBias team have finished running our Youth Juries and we are delighted to announce the launch of a report – ‘Youth Juries: what we learned from you’ – for the children and young people that participated. If you were one of these people – thank you for taking part! We hope that you enjoy reading the report. We would also like to thank our Youth Advisory Group for their thoughtful and constructive insights that helped to design and shape the report.
For a shorter summary of what we did in the Youth Juries and what the team discovered, please keep reading…
Continue reading Exciting News for our Youth Jurors!
OUR FUTURE INTERNET: FROM BIAS TO TRUST
DIGITAL CATAPULT: OCTOBER 1ST 10.30 AM TO 5.00 PM
On October 1st the UnBias project team will be showcasing the outcomes of our work. We are looking forward to welcoming an audience of 70 stakeholders from research, law, policy, education and industry.
In addition to reporting on our major findings we will also highlight key outputs such as policy guidelines and demonstrate our exciting fairness toolkit. This engaging and interactive event will also include presentations from external speakers and opportunities for networking. Furthermore, we will announce plans for our follow-on project, ReEnTrust, which will identify mechanisms to rebuild and enhance trust in algorithmic systems.
Continue reading Looking forward to the UnBias Showcase! October 1st 2018, London
Over the last two years UnBias has engaged with a wide range of stakeholders to explore the issue of bias in algorithmic decision making systems. In this post I want to share some personal thoughts on this issue, especially in relation to the introduction of “AI” systems.
Continue reading Why personalization systems discriminate
On the weekend of June 30th and July 1st, the UnBias team hosted a two-day hackathon at Codebase in Edinburgh, with support from local outfit Product Forge, whose experience organizing such events is unrivalled in Scotland.
The hackathon challenge was formulated as follows:
“Artificial Intelligence shapes digital services that have become central to our everyday lives. Online platforms leverage the power of AI to monetize our attention, with often unethical side-effects: our privacy is routinely breached, our perception of the world is seriously distorted, and we are left with unhealthy addictions to our screens and devices. The deep asymmetry of power between users and service providers, the opacity and unaccountability of the algorithms driving these services, and their exploitation by trolls, bullies and propagandists are serious threats to our well-being in the digital era.
Continue reading Unbias Hackathon
In response to the growing importance of algorithmic products in international trade, regional and international trade negotiations at the WTO and elsewhere are currently seeking to set down new rules regarding issues such as Intellectual Property and algorithmic transparency.
To help avoid trade negotiation outcomes that inadvertently block algorithmic accountability, Ansgar is supporting Sanya Reid Smith of Third World Network in her efforts to brief trade negotiators on the causes and consequences of algorithmic bias and the current status of regulatory and standards initiatives to address these issues.
Continue reading Briefing to RCEP eCommerce trade negotiators