All posts by Ansgar Koene

European Commission initiatives to explore regulatory requirements for AI

On March 5th and 6th UnBias had the pleasure of participating in a workshop that was organized to signal the launch of the European Commission’s Joint Research Centre’s HUMAINT (HUman behaviour and MAchine INTelligence) project.

The HUMAINT project is a multidisciplinary research project that aims to understand the potential impact of machine intelligence on human behaviour. A particular focus of the project lies on human cognitive capabilities and decision making. The project recognizes that machine intelligence may provide cognitive help to people, but that algorithms can also affect personal decision making and raise privacy issues.

Continue reading European Commission initiatives to explore regulatory requirements for AI

Responsible Data Science at Royal Society Bilateral UK-NL workshop

From 21st to 22nd February the Royal Society and the Royal Netherlands Academy of Arts and Sciences (KNAW) held a UK – Netherlands bilateral international meeting to explore common research interests in the fields of Quantum Physics and Technology, Nanochemistry and Responsible Data Science. UnBias was pleased to participate as part of the Responsible Data Science stream.

Continue reading Responsible Data Science at Royal Society Bilateral UK-NL workshop

In the Conversation: “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”

On September 7th the Guardian published an article drawing attention to a study from Stanford University which had applied Deep Neural Networks (a form of machine learning AI) to test whether they could distinguish people’s sexual orientation from facial images. After reading both the original study and the Guardian’s report about it, I found the study so problematic in so many respects that I immediately had to write a response, which was published in the Conversation on September 13th under the title “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”.

Continue reading In the Conversation: “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”

UnBias submissions to UK Parliamentary inquiries on “Fake News” and “Algorithms in decision-making”

Prior to the June 8th snap election there were two Commons Select Committee inquiries that both touched directly on our work at UnBias and for which we submitted written evidence: one on “Algorithms in decision-making” and another on “Fake News”.

Continue reading UnBias submissions to UK Parliamentary inquiries on “Fake News” and “Algorithms in decision-making”

A Month of Conferences and Workshops

June was a month of conferences and workshops for UnBias. The 3rd UnBias project meeting on June 1st, hosted this time by our Edinburgh partners, was quickly followed by the Ethicomp and EuroDIG conferences, which both took place from June 5th to 8th.

Continue reading A Month of Conferences and Workshops

Publication of 1st WP4 workshop report

We are pleased to announce that the report summarizing the outcomes of the first UnBias project stakeholder engagement workshop is now available for public dissemination.

The workshop took place on February 3rd 2017 at the Digital Catapult centre in London, UK. It brought together participants from academia, education, NGOs and enterprises to discuss fairness in relation to algorithmic practice and design. At the heart of the discussion were four case studies highlighting fake news, personalisation, gaming the system, and transparency.

Continue reading Publication of 1st WP4 workshop report