Tag Archives: Ansgar

On 16th April the House of Lords Select Committee on Artificial Intelligence published a report called “AI in the UK: ready, willing and able?”. The report is based on an inquiry conducted by the Committee to consider the economic, ethical and social implications of advances in artificial intelligence. UnBias team member Ansgar Koene submitted written evidence based on the combined work of the UnBias investigations and our involvement with the development of the IEEE P7003 Standard for Algorithmic Bias Considerations.


European Commission initiatives to explore regulatory requirements for AI

On March 5th and 6th, UnBias had the pleasure of participating in a workshop organized to mark the launch of the European Commission Joint Research Centre’s HUMAINT (HUman behaviour and MAchine INTelligence) project.

The HUMAINT project is a multidisciplinary research project that aims to understand the potential impact of machine intelligence on human behaviour, with a particular focus on human cognitive capabilities and decision making. The project recognizes that machine intelligence may provide cognitive help to people, but also that algorithms can affect personal decision making and raise privacy issues.


Digital Democracy: Critical Perspectives in the Age of Big Data

Some of us attended a joint conference of the ECREA (European Communication Research and Education Association) Communication and Media Industries section, held on 10th–11th November in Stockholm. About 100 people took part, mainly academics, researchers from NGOs and media consultants from Europe and the US.


In the Conversation: “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”

On September 7th the Guardian published an article drawing attention to a study from Stanford University which had applied Deep Neural Networks (a form of machine learning AI) to test whether they could distinguish people’s sexual orientation from facial images. After reading both the original study and the Guardian’s report about it, I found so many problematic aspects of the study that I immediately had to write a response, which was published in the Conversation on September 13th under the title “Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of”.


USACM Panel on Algorithmic Transparency and Accountability

USACM, the ACM U.S. Public Policy Council, will be hosting a panel event on “Algorithmic Transparency and Accountability.” The event will provide a forum for a discussion between stakeholders and leading computer scientists about the growing impact of algorithmic decision-making on our society and the technical underpinnings of algorithmic models.

Panelists will discuss the importance of the Statement on Algorithmic Transparency and Accountability and the opportunities for cooperation between academia, government and industry around these principles.

UnBias submissions to UK Parliamentary inquiries on “Fake News” and “Algorithms in decision-making”

Prior to the June 8th snap election, there were two Commons Select Committee inquiries that both touched directly on our work at UnBias and for which we submitted written evidence: one on “Algorithms in decision-making” and another on “Fake News”.
