On September 7th the Guardian published an article drawing attention to a study from Stanford University that had applied Deep Neural Networks (a form of machine learning AI) to test whether they could infer people's sexual orientation from facial images. After reading both the original study and the Guardian's report about it, I found so many problematic aspects of the study that I immediately had to write a response, which was published in the Conversation on September 13th under the title "Machine gaydar: AI is reinforcing stereotypes that liberal societies are trying to get rid of".
News, algorithmic bias and editorial responsibility
In an almost suspiciously conspiracy-like fashion, the official launch of UnBias at the start of September was immediately accompanied by a series of news articles providing examples of problems with algorithms that make recommendations or control the flow of information. Cases included the unintentional racial bias in a machine-learning-based beauty contest algorithm that was meant to remove the bias of human judges; a series of embarrassing news recommendations on the Facebook trending topics feed, the result of an attempt to avoid (the appearance of) bias by getting rid of human editors; and the controversy over Facebook's automated editorial decision to remove the Pulitzer prize-winning "napalm girl" photograph because the image was identified as containing nudity. My view of these events? "Facebook's algorithms give it more editorial responsibility – not less" (published today in the Conversation).