In an almost suspiciously conspiracy-like fashion, the official launch of UnBias at the start of September was immediately accompanied by a series of news articles highlighting problems with algorithms that make recommendations or control the flow of information. Examples include the unintentional racial bias in a machine-learning-based beauty contest algorithm that was meant to remove the bias of human judges; a series of embarrassing news recommendations on the Facebook trending topics feed, the result of an attempt to avoid (the appearance of) bias by getting rid of human editors; and the controversy over Facebook’s automated editorial decision to remove the Pulitzer prize-winning “napalm girl” photograph because the image was identified as containing nudity. My view of these events? “Facebook’s algorithms give it more editorial responsibility – not less“ (published today in The Conversation).
The issue of Facebook’s editorial responsibility was raised again following the US presidential election, in which the spread of fake news via social media was seen as a major factor in polarizing public debate.
https://www.theguardian.com/technology/2016/nov/10/facebook-fake-news-election-conspiracy-theories
See especially the second half of the article.
See also: http://nymag.com/selectall/2016/11/donald-trump-won-because-of-facebook.html