June was a month of conferences and workshops for UnBias. The 3rd UnBias project meeting on June 1st, hosted by our Edinburgh partners this time, was quickly followed by the Ethicomp and EuroDIG conferences which both took place from June 5th to 8th.
The 4th Winchester Conference on Trust, Risk, Information and the Law took place at the University of Winchester on Wednesday 3rd May 2017. The overarching theme of the day was “Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones”, offering a chance for multi-stakeholder and interdisciplinary discussion on the risks and opportunities presented by algorithms, machine learning and artificial intelligence.
We are pleased to announce that the report summarising the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.
The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.
As part of our stakeholder engagement work towards the development of algorithm design and regulation recommendations, UnBias is engaging with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems to develop an IEEE Standard for Algorithm Bias Considerations, designated P7003. The P7003 working group is chaired by Ansgar Koene and will hold its first web meeting on May 5th 2017.
Many multi-user scenarios are combinatorial in nature: an algorithm can take meaningful decisions for the users only if all of their requirements and preferences are considered simultaneously when selecting a solution from a huge space of possible system decisions. Examples include sharing-economy applications, where users aim to find peers to form teams with in order to accomplish a task, and situations in which a limited number of potentially different resources, e.g. hotel rooms, must be distributed among users who have preferences over them.
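To make the combinatorial nature concrete, here is a minimal sketch (not taken from the project's own work) of the hotel-room example: with made-up preference scores, an exhaustive search over all one-to-one assignments finds the allocation that maximises total user satisfaction. The data, names and scoring are purely illustrative.

```python
from itertools import permutations

# Hypothetical preference scores (higher = more preferred):
# each user rates each of the available rooms.
preferences = {
    "alice": {"room1": 3, "room2": 1, "room3": 2},
    "bob":   {"room1": 3, "room2": 2, "room3": 1},
    "carol": {"room1": 1, "room2": 3, "room3": 2},
}

def best_assignment(preferences):
    """Exhaustively search every one-to-one assignment of rooms to
    users and return the one maximising total satisfaction."""
    users = list(preferences)
    rooms = list(preferences[users[0]])
    best, best_score = None, float("-inf")
    for perm in permutations(rooms):
        score = sum(preferences[u][r] for u, r in zip(users, perm))
        if score > best_score:
            best, best_score = dict(zip(users, perm)), score
    return best, best_score

assignment, score = best_assignment(preferences)
```

Even in this toy case, no user can be served in isolation: alice and bob both prefer room1, so the best overall solution gives it to bob. The search space grows factorially with the number of users, which is why such scenarios are computationally hard at scale.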
On 21st March the House of Lords Select Committee on Communications published a report called ‘Growing up with the internet’. The report is based on an inquiry conducted by the Committee into children and the internet. UnBias team member Professor Marina Jirotka served as a specialist adviser to the inquiry, and team member Professor Derek McAuley gave oral evidence to it, elaborating on the written evidence submitted by Perez, Koene and McAuley.
What are algorithms and how are they designed? Why are they used in commercial practice and what kinds of benefits can they bring? What are the potential harmful impacts of using algorithms and how can they be prevented?
On Wednesday 15th February 2017 some UnBias consortium members had the pleasure of attending an Algorithm Workshop hosted by the Law School, University of Strathclyde. During the workshop, we had the opportunity to consider, discuss and begin to address key issues and concerns surrounding the contemporary prevalence of algorithms. The workshop was also attended by students from the host university and an interdisciplinary group of experts from areas including Law, Computer Science and the Social Sciences. This mix of expertise made for a stimulating afternoon of talks and discussions on the design, development and use of algorithms from various disciplinary perspectives.
On February 10th and 11th, UnBias participated in the 2017 Explorers Fair Expo at the Nottingham Broadway cinema to engage with parents, children and citizens of all ages in discussing the ways in which algorithms affect our lives.
On February 3rd a group of twenty-five stakeholders joined us at the Digital Catapult in London for our first discussion workshop.
The User Engagement workpackage of the project focuses on gathering together professionals from industry, academia, education, NGOs and research institutes in order to discuss societal and ethical issues surrounding the design, development and use of algorithms on the internet. We aim to create a space where these stakeholders can come together and discuss their various concerns and perspectives. This includes surfacing differences of opinion. For example, participants from industry often view algorithms as proprietary and commercially sensitive, whereas those from NGOs frequently call for greater transparency in algorithmic design. It is important for us to draw out these kinds of varying perspectives and understand in detail the reasoning that lies behind them. Then, combined with the outcomes of the other project workpackages, we can identify points of resolution and produce outputs that seek to advance responsibility on algorithm-driven internet platforms.
An important topic considered this year at the International Conference on Neural Information Processing Systems (NIPS), one of the prime outlets for machine learning and Artificial Intelligence research in the world, is the connection between machine learning, law and ethics. In particular, a paper presented by Moritz Hardt, Eric Price, and Nathan Srebro focused on Equality of Opportunity in Supervised Learning.