Interested stakeholders are invited to attend a one-day workshop on October 1st where we will be showcasing the outcomes of the UnBias (Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy) project.
In addition to reporting on our major findings, we will highlight key outputs such as policy guidelines and demonstrate our exciting fairness toolkit. This engaging and interactive event will be of relevance to members of industry as well as those from the research, policy, law and education communities. The day will also include presentations from external speakers and opportunities for networking. Furthermore, we will announce plans for our follow-on project, ReEnTrust, which will identify mechanisms to rebuild and enhance trust in algorithmic systems.
HOW TO BOOK YOUR PLACE
The workshop is free to attend but registration is necessary.
If you would like to attend, please contact email@example.com
The Fairness Toolkit has been developed for UnBias by Giles Lane and his team at Proboscis, with input from young people and stakeholders. It is one of our project outputs, aiming to promote awareness, stimulate public civic dialogue about how algorithms shape online experiences, and encourage reflection on possible changes to address issues of online unfairness. The tools are not just for critical thinking, but for civic thinking – supporting a more collective approach to imagining the future, as a contrast to the individual atomising effect that such technologies often cause.
In response to the growing importance of algorithmic products in international trade, negotiators in regional and international trade talks at the WTO and elsewhere are currently seeking to set down new rules on issues such as intellectual property and algorithmic transparency.
To help avoid trade negotiation outcomes that inadvertently block algorithmic accountability, Ansgar is supporting Sanya Reid Smith of Third World Network in her efforts to brief trade negotiators on the causes and consequences of algorithmic bias and on the current status of regulatory and standards initiatives to address these issues.
As of May 25th 2018 the Data Protection Act 2018 (DPA2018) has taken effect in the UK, supporting and supplementing the implementation of the EU General Data Protection Regulation (GDPR).
An important requirement in the DPA2018, going beyond the GDPR, is the inclusion of an Age Appropriate Design Code (section 123 of DPA2018) to provide guidance on the design standards that the Information Commissioner’s Office (ICO) will expect providers of online ‘Information Society Services’ (ISS) that are likely to be accessed by children to meet.
We are pleased to announce that UnBias won one of the three 2017 RCUK Digital Economy Theme ‘Telling Tales of Engagement’ awards. The evaluation process for this award considered both the impact of our previous work and a proposed new activity to “tell the story” of our research.
Our submission was titled “building and engaging with multi-stakeholder panels for developing policy recommendations”, highlighting the importance to our research of engaging with our stakeholder panel and with organizations that are shaping the policy and governance space for algorithmic systems.
On 16th April the House of Lords Select Committee on Artificial Intelligence published a report called ‘AI in the UK: ready, willing and able?’. The report is based on an inquiry conducted to consider the economic, ethical and social implications of advances in artificial intelligence. UnBias team member Ansgar Koene submitted written evidence based on the combined work of the UnBias investigations and our involvement with the development of the IEEE P7003 Standard for Algorithmic Bias Considerations.
In the wake of recent events surrounding the revelations about Cambridge Analytica and the breaches of trust regarding Facebook and personal data, ISOC UK and the Horizon Digital Economy Research institute held a panel discussion on “Multi Sided Trust for Multi Sided Platforms”. The panel brought together representatives from different sectors to discuss the topic of trust on the Internet, focusing on consumer-to-business trust: how users trust the online services that are offered to them. Such services include, but are not limited to, online shopping, social media, online banking and search engines.
On March 5th and 6th UnBias had the pleasure of participating in a workshop that was organized to signal the launch of the European Commission’s Joint Research Centre’s HUMAINT (HUman behaviour and MAchine INTelligence) project.
The HUMAINT project is a multidisciplinary research project that aims to understand the potential impact of machine intelligence on human behaviour. A particular focus of the project lies on human cognitive capabilities and decision making. The project recognizes that machine intelligence may provide cognitive help to people, but that algorithms can also affect personal decision making and raise privacy issues.
Some of us attended a joint conference of the ECREA (European Communication Research and Education Association) Communication and Media Industries section, held on 10th–11th November in Stockholm. About 100 people took part, mainly academics, researchers from NGOs and media consultants from Europe and the US.