On September 7th the Guardian published an article drawing attention to a Stanford University study that had applied deep neural networks (a form of machine learning AI) to test whether they could infer people’s sexual orientation from facial images. After reading both the original study and the Guardian’s report on it, I found the study so problematic that I immediately wrote a response, which was published in The Conversation on September 13th under the title “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”.
Continue reading In the Conversation: “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”
Prior to the June 8th snap election there were two Commons Select Committee inquiries that both touched directly on our work at UnBias, and to which we submitted written evidence: one on “Algorithms in decision-making” and another on “Fake News”.
Continue reading UnBias submissions to UK Parliamentary inquiries on “Fake News” and “Algorithms in decision-making”
In the current BBC series Secrets of Silicon Valley Jamie Bartlett (technology writer and Director of the Centre for Social Media Analysis at Demos) explores the ‘dark reality behind Silicon Valley’s glittering promise to build a better world.’ Episode 2, The Persuasion Machine, shines a spotlight on several of the issues we are investigating in UnBias.
Continue reading Algorithms and the persuasion machine
As part of our work to contribute to the development of the IEEE P7003 Standard for Algorithm Bias Considerations we are reaching out to the community of stakeholders to ask for use cases highlighting real-world instances of unjustified and/or inappropriate bias in algorithmic decisions.
Continue reading A call for use case examples
June was a month of conferences and workshops for UnBias. The 3rd UnBias project meeting on June 1st, hosted by our Edinburgh partners this time, was quickly followed by the Ethicomp and EuroDIG conferences which both took place from June 5th to 8th.
Continue reading A Month of Conferences and Workshops
The 4th Winchester Conference on Trust, Risk, Information and the Law took place at the University of Winchester on Wednesday 3rd May 2017. The overarching theme of the day was “Artificial and De-Personalised Decision-Making: Machine-Learning, A.I. and Drones”, offering a chance for multi-stakeholder and interdisciplinary discussion on the risks and opportunities presented by algorithms, machine learning and artificial intelligence.
Continue reading UnBias project contribution to the 4th Winchester Conference on Trust, Risk, Information and the Law
We are pleased to announce that the report summarising the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.
The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.
Continue reading Publication of 1st WP4 workshop report
As part of our stakeholder engagement work towards the development of algorithm design and regulation recommendations UnBias is engaging with the IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems to develop an IEEE Standard for Algorithm Bias Considerations, designated P7003. The P7003 working group is chaired by Ansgar Koene and will have its first web-meeting on May 5th 2017.
Continue reading IEEE Standard for Algorithm Bias Considerations
Many multi-user scenarios are combinatorial in nature: an algorithm can make meaningful decisions for the users only if all of their requirements and preferences are considered at the same time when selecting a solution from a huge space of possible system decisions. Examples of such scenarios include sharing-economy applications, where users aim to find peers to form teams with in order to accomplish a task, and situations in which a limited number of potentially different resources, e.g. hotel rooms, must be distributed among users who have preferences over them.
Continue reading How hard is it to be fair in multi-user combinatorial scenarios?
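To make the combinatorial difficulty concrete, here is a minimal sketch (not from the UnBias project itself) of the hotel-room example: an exhaustive search for the allocation that maximises the worst-off user’s utility, one common notion of fairness. All user names and preference values are invented for illustration, and the brute-force search shows why this gets hard: the number of candidate allocations grows factorially with the number of resources.

```python
# Illustrative sketch: fairly assigning a limited set of rooms to users
# who each have preferences over them. Names and utilities are invented.
from itertools import permutations

# Each user's utility for each room (higher = more preferred).
preferences = {
    "alice": {"room1": 3, "room2": 1, "room3": 2},
    "bob":   {"room1": 3, "room2": 2, "room3": 1},
    "carol": {"room1": 1, "room2": 3, "room3": 2},
}

def fairest_assignment(prefs):
    """Exhaustively search all one-to-one room assignments and return the
    one that maximises the minimum utility across users (an egalitarian
    fairness criterion). Runtime grows factorially with the number of
    rooms, which is why real multi-user scenarios are hard."""
    users = sorted(prefs)
    rooms = sorted(next(iter(prefs.values())))
    best, best_min = None, -1
    for perm in permutations(rooms):
        utilities = [prefs[u][r] for u, r in zip(users, perm)]
        if min(utilities) > best_min:
            best_min = min(utilities)
            best = dict(zip(users, perm))
    return best, best_min

assignment, worst = fairest_assignment(preferences)
```

In this toy instance no allocation can give everyone their favourite room (alice and bob both rank room1 first), so the fairest outcome guarantees each user a utility of at least 2 out of 3.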
On 21st March the House of Lords Select Committee on Communications published a report called ‘Growing up with the internet’. The report is based on an inquiry conducted by the House of Lords into Children and the Internet. UnBias team member Professor Marina Jirotka served as a specialist adviser to the inquiry, and team member Professor Derek McAuley gave oral evidence to it, elaborating on the written evidence submitted by Perez, Koene and McAuley.
Continue reading “Growing up Digital” UnBias team members contribute to House of Lords report