In an almost suspiciously conspiracy-like fashion, the official launch of UnBias at the start of September was immediately accompanied by a series of news articles giving examples of problems with algorithms that make recommendations or control the flow of information. The cases included unintentional racial bias in a machine-learning-based beauty contest algorithm that was meant to remove the bias of human judges; a series of embarrassing news recommendations on the Facebook trending topics feed, the result of an attempt to avoid (the appearance of) bias by getting rid of human editors; and controversy over Facebook’s automated editorial decision to remove the Pulitzer prize-winning “napalm girl” photograph because the image was identified as containing nudity. My view of these events? “Facebook’s algorithms give it more editorial responsibility – not less“ (published today in The Conversation).