In The Conversation: “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”

On September 7th the Guardian published an article drawing attention to a study from Stanford University that had applied deep neural networks (a form of machine-learning AI) to test whether they could distinguish people’s sexual orientation from facial images. After reading both the original study and the Guardian’s report on it, I found so many problematic aspects to the study that I immediately had to write a response, which was published in The Conversation on September 13th under the title “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”.

Continue reading In The Conversation: “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”

UnBias submissions to UK Parliamentary inquiries on “Fake News” and “Algorithms in decision-making”

Prior to the June 8th snap election, there were two Commons Select Committee inquiries that both touched directly on our work at UnBias and for which we submitted written evidence: one on “Algorithms in decision-making” and another on “Fake News”.

Continue reading UnBias submissions to UK Parliamentary inquiries on “Fake News” and “Algorithms in decision-making”

Algorithms and the persuasion machine

In the current BBC series Secrets of Silicon Valley, Jamie Bartlett (technology writer and Director of the Centre for the Analysis of Social Media at Demos) explores the ‘dark reality behind Silicon Valley’s glittering promise to build a better world.’ Episode 2, The Persuasion Machine, shines a spotlight on several of the issues we are investigating in UnBias.

Continue reading Algorithms and the persuasion machine

A Month of Conferences and Workshops

June was a month of conferences and workshops for UnBias. The 3rd UnBias project meeting on June 1st, hosted this time by our Edinburgh partners, was quickly followed by the Ethicomp and EuroDIG conferences, both of which took place from June 5th to 8th.

Continue reading A Month of Conferences and Workshops

UnBias project contribution to the 4th Winchester Conference on Trust, Risk, Information and the Law

The 4th Winchester Conference on Trust, Risk, Information and the Law took place at the University of Winchester on Wednesday 3rd May 2017. The overarching theme of the day was “Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones”, offering a chance for multi-stakeholder and interdisciplinary discussion of the risks and opportunities presented by algorithms, machine learning and artificial intelligence.

Continue reading UnBias project contribution to the 4th Winchester Conference on Trust, Risk, Information and the Law

Publication of 1st WP4 workshop report

We are pleased to announce that the report summarising the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.

The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.

Continue reading Publication of 1st WP4 workshop report

IEEE Standard for Algorithm Bias Considerations

As part of our stakeholder engagement work towards the development of algorithm design and regulation recommendations, UnBias is engaging with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems to develop an IEEE Standard for Algorithm Bias Considerations, designated P7003. The P7003 working group is chaired by Ansgar Koene and will hold its first web meeting on May 5th 2017.

Continue reading IEEE Standard for Algorithm Bias Considerations

How hard is it to be fair in multi-user combinatorial scenarios?

Many multi-user scenarios are characterised by a combinatorial nature: an algorithm can take meaningful decisions for its users only if all of their requirements and preferences are considered at the same time, to select a solution from a huge space of possible system decisions. Examples of such scenarios include sharing-economy applications, where users aim to find peers to form teams with in order to accomplish a task, and situations in which a limited number of potentially different resources, e.g. hotel rooms, must be distributed among users who have preferences over them.
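As a rough illustration of this combinatorial explosion, here is a minimal, hypothetical Python sketch (not taken from the project; all names and preference scores are invented) that brute-forces an egalitarian assignment of hotel rooms to users, i.e. one maximising the utility of the worst-off user:

```python
# A toy sketch of fair allocation in a multi-user combinatorial scenario.
# Hypothetical data: each user rates each room from 0 (worst) to 10 (best).
from itertools import permutations

preferences = {
    "alice": {"room1": 9, "room2": 4, "room3": 1},
    "bob":   {"room1": 8, "room2": 7, "room3": 2},
    "carol": {"room1": 8, "room2": 6, "room3": 5},
}
users = list(preferences)
rooms = ["room1", "room2", "room3"]

best_assignment, best_min_utility = None, -1
# Brute force: try every one-to-one assignment of rooms to users.
# With n users and n rooms there are n! candidate assignments, which is
# why exact fair allocation quickly becomes intractable at scale.
for ordering in permutations(rooms):
    assignment = dict(zip(users, ordering))
    # Egalitarian criterion: the quality of an assignment is the
    # utility of whichever user is worst off under it.
    min_utility = min(preferences[u][r] for u, r in assignment.items())
    if min_utility > best_min_utility:
        best_assignment, best_min_utility = assignment, min_utility

print(best_assignment, "worst-off utility:", best_min_utility)
```

Even this toy example enumerates 3! = 6 assignments; with 20 users the same search space already exceeds 10^18 options, so practical systems must trade exactness for heuristics or approximation.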

Continue reading How hard is it to be fair in multi-user combinatorial scenarios?

Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy