In light of recent events surrounding the Cambridge Analytica revelations and the breaches of trust regarding Facebook and personal data, ISOC UK and the Horizon Digital Economy Research Institute held a panel discussion on “Multi Sided Trust for Multi Sided Platforms”. The panel brought together representatives from different sectors to discuss trust on the Internet, focusing on consumer-to-business trust: how users come to trust the online services offered to them. Such services include, but are not limited to, online shopping, social media, online banking and search engines.
On September 14th the US ACM organized a panel on Algorithmic Transparency and Accountability in Washington, DC, to discuss the importance of the Statement on Algorithmic Transparency and Accountability and opportunities for cooperation between academia, government and industry around these principles. Ansgar also took part in this panel, representing the IEEE Global Initiative on Ethical Considerations for Artificial Intelligence and Autonomous Systems, its P7000 series of standards activities, and UnBias.
On September 7th the Guardian published an article drawing attention to a study from Stanford University that had applied deep neural networks (a form of machine learning AI) to test whether they could distinguish people’s sexual orientation from facial images. After reading both the original study and the Guardian’s report on it, I found so many problematic aspects of the study that I immediately had to write a response, which was published in The Conversation on September 13th under the title “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”.
For algorithm-based systems, as for many other topics, 2016 turned out to be an eventful year. As we close the year and look back, the course of 2016 brought many of the issues we intend to address in the UnBias project to the attention of people and organizations who perhaps had not considered them before.
A recent report from the BBC covers one instance of the ever-growing use of algorithms for social purposes and helps us to illustrate some key ethical concerns underpinning the UnBias project.