Algorithmic discrimination: are you IN or OUT?

A lot has been said about algorithms working as gatekeepers and making decisions on our behalf, often without our noticing. I can certainly find an example in my daily life where I do notice it and benefit from it: Spotify’s “Discover Weekly” playlist. By comparing my listening habits to those of other users with similar but not identical tastes, Spotify lets information on the fringes be shared. The playlist is thus “tailored” to my music taste, and it is remarkably accurate in predicting things I will like. It also lets me discover new music and bands, and on many occasions it takes me back in time with tunes I have probably not listened to in years.
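For readers curious about the mechanics, the core idea behind this kind of recommendation is collaborative filtering. The sketch below is a deliberately minimal illustration with invented play counts; Spotify’s actual system is far more sophisticated, and none of the data or track indices here come from it.

```python
import numpy as np

# Toy listening matrix: rows are users, columns are tracks, values are
# play counts. All numbers here are invented for illustration.
plays = np.array([
    [5, 3, 0, 1],   # me
    [4, 2, 1, 0],   # a user with similar taste
    [0, 0, 4, 5],   # a user with different taste
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two users' play-count vectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

me = plays[0]
neighbours = plays[1:]

# Weight each other user's listening by how closely it matches mine,
# then score the tracks I have not played yet: this is how "fringe"
# information from similar-but-not-identical users reaches me.
weights = np.array([cosine_similarity(me, other) for other in neighbours])
scores = weights @ neighbours   # weighted sum of neighbours' play counts
scores[me > 0] = 0              # exclude tracks I already listen to
print("recommended track index:", int(np.argmax(scores)))
```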

However, one of the things that worries me when thinking about algorithms and the way they can shape our lives is discrimination. Some examples I have come across in the press illustrate this very well: i) the use of algorithms by the U.S. criminal justice system to predict recidivism, where the system showed bias correlated with race; ii) gender discrimination in advertising, where Google’s ads showed listings for high-paying jobs to men more often than to women; and iii) hiring algorithms. The case of Kyle Behm, a university student from Tennessee who was “red-lighted” by a personality test when applying for jobs, drew my attention. Surely there are many Kyles out there who have never suspected this may have happened to them too. In short, personal information should not be used to exclude applicants. Moreover, if an employer is not allowed to ask personal questions in a face-to-face interview, how can mental-health-related questionnaires be permitted online? How can corporations get away with things online that are illegal offline?

Computerised algorithms are often viewed as neutral. However, since they are designed by humans, they depend on which elements were taken into account when the algorithm was programmed. Perhaps because I enjoy good food, I like Cathy O’Neil’s explanation of algorithmic models as resembling the process of cooking a meal: the key choices that define an algorithm are what data goes into the model and how success is measured. It is at this stage that human bias inevitably gets incorporated. As Zeynep Tufekci has recently noted, often the computational systems themselves are not biased, but the outcomes, and what we do with them, are.
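To make the cooking analogy concrete, here is a small, entirely invented sketch of how those two “recipe” choices (which data goes in and how success is measured) can bake bias into a model that never sees a protected attribute directly. The postcode groups, figures and hiring labels are all hypothetical.

```python
# An entirely invented illustration: no protected attribute appears in
# the data, yet the "recipe" still encodes bias through a proxy.
applicants = [
    # (postcode_group, years_experience, hired_in_the_past)
    ("A", 5, 1), ("A", 2, 1), ("A", 1, 0),
    ("B", 5, 0), ("B", 4, 0), ("B", 2, 0),
]

# Recipe choice 1 (what data goes in): postcode is included, and in
# this hypothetical it correlates with a protected group.
# Recipe choice 2 (how success is measured): success is defined as
# reproducing historical hiring decisions, themselves already biased.
def historical_hire_rate(group):
    outcomes = [hired for g, _, hired in applicants if g == group]
    return sum(outcomes) / len(outcomes)

print("group A hire rate:", round(historical_hire_rate("A"), 2))  # 0.67
print("group B hire rate:", round(historical_hire_rate("B"), 2))  # 0.0

# A model trained to match these labels will score group B applicants
# lower regardless of experience: the bias enters through the choice of
# ingredients and the definition of success, not through the code itself.
```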

As suggested by Keith Kirkpatrick, “one solution for handling discrimination is to monitor algorithms to determine fairness”. However, agreeing on a common definition of fairness can be quite challenging.
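As a small illustration of why a common definition is hard to agree on, the sketch below computes two standard fairness measures, demographic parity (equal selection rates) and equal opportunity (equal true-positive rates), on the same invented set of decisions; one definition is satisfied while the other is violated.

```python
# Invented toy data: (group, actually_qualified, algorithm_said_yes)
decisions = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 0), ("B", 0, 1), ("B", 0, 0),
]

def selection_rate(group):
    """Demographic parity compares raw acceptance rates per group."""
    accepted = [yes for g, _, yes in decisions if g == group]
    return sum(accepted) / len(accepted)

def true_positive_rate(group):
    """Equal opportunity compares acceptance among the qualified only."""
    accepted = [yes for g, q, yes in decisions if g == group and q == 1]
    return sum(accepted) / len(accepted)

for g in ("A", "B"):
    print(g, "selection rate:", selection_rate(g),
          "true positive rate:", true_positive_rate(g))

# Both groups have a selection rate of 0.5, so demographic parity holds;
# yet qualified A applicants are accepted at rate 1.0 versus 0.5 for
# qualified B applicants, so equal opportunity is violated. The same
# decisions look fair under one definition and unfair under the other,
# which is why "monitoring for fairness" first requires agreeing on
# what fairness means.
```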

Things are not normally black or white, but how can we trust “black boxes” if we don’t understand how they work? Perhaps we could follow Tim O’Reilly’s rules for evaluating whether we can trust an algorithm:

  1. “Its creators have made clear what outcome they are seeking, and it is possible for external observers to verify that outcome”. Quite often this is not the case. Besides, where would you find this information? Can it be decoded from the “reader-friendly” terms and conditions?
  2. “Success is measurable”. However, the definition of success can itself be biased, and the fact that something is measurable does not mean the outcome always benefits the user.
  3. “The goals of the algorithm’s creators are aligned with the goals of the algorithm’s consumers”. My question here is: who are the main consumers? I don’t think it is us, but rather the corporations and organisations that use automated systems for their commercial benefit and interest, plus the advertisers that profit from us.
  4. “Does the algorithm lead its creators and its users to make better longer-term decisions?” I think this is linked to rule 3, and besides, in most of the cases I can think of, automated algorithms make decisions on our behalf. Is this what we want? And if so, in which contexts?

I think O’Reilly’s rules are corporate-oriented, and society does not seem to be part of the equation.

When we talk about algorithms and personal data, I would like to see a consistent ethical framework in place. I believe appropriate regulation and ethical oversight of the design and use of automated systems should be a priority. This topic has recently attracted a great deal of attention. The U.S. government has become very interested in AI technology: on 13th October 2016 President Obama hosted the White House Frontiers Conference, highlighting science and technology frontiers, and he guest-edited the November 2016 issue of Wired magazine. There is no doubt that public policy approaches to the regulation of algorithms are moving forward. The big challenge is not only to persuade government institutions and profit-driven corporations that ethical standards and legal safeguards are needed, but also to put them into practice.

Computer systems should help to facilitate processes for us rather than making decisions for us or discriminating against us. Within the UnBias project we would like to raise awareness of these issues, produce educational materials for young people and a fairness “toolkit”, and make recommendations to promote algorithmic accountability, auditing and transparency.
