All posts by Ansgar Koene

On 16th April the House of Lords Select Committee on Artificial Intelligence published a report called “AI in the UK: ready, willing and able?”. The report is based on an inquiry conducted by the Committee to consider the economic, ethical and social implications of advances in artificial intelligence. UnBias team member Ansgar Koene submitted written evidence based on the combined work of the UnBias investigations and our involvement in the development of the IEEE P7003 Standard for Algorithmic Bias Considerations.


European Commission initiatives to explore regulatory requirements for AI

On March 5th and 6th UnBias had the pleasure of participating in a workshop organized to mark the launch of the HUMAINT (HUman behaviour and MAchine INTelligence) project of the European Commission’s Joint Research Centre.

The HUMAINT project is a multidisciplinary research project that aims to understand the potential impact of machine intelligence on human behaviour, with a particular focus on human cognitive capabilities and decision making. The project recognizes that machine intelligence may provide cognitive help to people, but also that algorithms can affect personal decision making and raise privacy issues.


Responsible Data Science at Royal Society Bilateral UK-NL workshop

From 21st to 22nd February the Royal Society and the Royal Netherlands Academy of Arts and Sciences (KNAW) held a bilateral UK–Netherlands meeting to explore common research interests in the fields of Quantum Physics and Technology, Nanochemistry and Responsible Data Science. UnBias was pleased to participate in the Responsible Data Science stream.


In the Conversation: “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”

On September 7th the Guardian published an article drawing attention to a Stanford University study that had applied Deep Neural Networks (a form of machine learning AI) to test whether they could identify people’s sexual orientation from facial images. After reading both the original study and the Guardian’s report about it, I found so many problematic aspects of the study that I immediately had to write a response, which was published in the Conversation on September 13th under the title “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”.


UnBias submissions to UK Parliamentary inquiries on “Fake News” and “Algorithms in decision-making”

Prior to the June 8th snap election there were two Commons Select Committee inquiries that touched directly on our work at UnBias and for which we submitted written evidence: one on "Algorithms in decision-making" and another on "Fake News".
