In algorithms we trust: The illusion of objectivity and the dangerous reality of implicit bias

This post was written by a psychology student in the context of the Honours Students Workshop, “Blogging in Science”

The other day, my best friend sent me a message saying “Google knows” with a picture attached. I opened the message and saw a screenshot depicting a group of girls. They were posing in front of a Holocaust memorial. Included was a quite shocking caption that I will not repeat here.

My friend’s phone suggested that this was a picture of me, but the resemblance wasn’t obvious. The girl in the picture was only algorithmically similar: even though a human would never mistake us, the computer had made a match.

Obviously, the facial recognition systems on our phones are not refined enough to be 100% accurate. They're offered as a convenience, to make it easier to find ourselves and our friends in our photos. Yet there are consequences when these systems are trusted unthinkingly. So let's consider facial recognition systems used in the criminal justice system: an application that is far more serious than my friend's personal camera roll.

Closed-circuit television (CCTV) cameras are used for security purposes around the world, often in combination with facial recognition systems, and these allow the police to track down a wanted person more efficiently. However, there is an issue with their accuracy, and especially with the conditions under which they are inaccurate. This becomes a particular problem when these systems are accepted as sources of objective testimony, as evidence.

Briefly, the problem is this: the data fed into the algorithm during its creation only include samples of specific types of people. Often the data consist mostly of the majority group in the place where the algorithm is being developed.

So what does that mean? Although I am definitely no expert in the complexity of algorithms, the case of the facial recognition systems used in public spaces in the UK is very well explained by Big Brother Watch (Big Brother Watch UK, 2020; Cavazos et al., 2021; Institut Montaigne, 2020; Lum & Isaac, 2016; Mittelstadt et al., 2016).

In short: the majority of these systems are developed through a semi-automated process called "machine learning". The programmer provides the sample data: pictures of people's faces, which are pre-categorised. The algorithm is trained on this dataset, taught to tell different faces apart and to match new ones to their pre-established categories. Then the algorithm is let loose, in the wild, and expected to keep learning from new input from the street: to match faces as well as to categorise them.
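
For the curious, here is a toy sketch in Python of that "train, then match" idea. It is nothing like a real production system (those use deep neural networks trained on millions of images); the people, feature vectors and threshold below are invented purely for illustration.

```python
# A minimal, toy sketch of the "train, then match" idea: each "face" is just
# a short feature vector, and matching is nearest-neighbour distance.
# Everything here is made up for illustration; real systems are far more complex.
import numpy as np

rng = np.random.default_rng(0)

# 1. "Training": the programmer supplies pre-categorised example faces.
#    Each known person is summarised by the average of their examples.
known_people = {
    "person_A": rng.normal(loc=0.0, scale=1.0, size=(20, 64)),
    "person_B": rng.normal(loc=3.0, scale=1.0, size=(20, 64)),
}
templates = {name: faces.mean(axis=0) for name, faces in known_people.items()}

# 2. "In the wild": a new face arrives from a camera feed.
new_face = rng.normal(loc=0.1, scale=1.0, size=64)

# 3. Matching: pick whichever stored template is closest, if it is close enough.
def match(face, templates, threshold=10.0):
    name, template = min(templates.items(),
                         key=lambda item: np.linalg.norm(face - item[1]))
    if np.linalg.norm(face - template) < threshold:
        return name
    return "no match"

print(match(new_face, templates))  # with these toy numbers, likely "person_A"
```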

Most of the time, however, these sample data are biased. Of course, the bias isn't added on purpose; it simply reflects the creator's environment.

In the case of policing data, for example, the data are produced as a by-product of normal police work. So the categories against which a new face might be matched are overwhelmingly criminal, and that bias carries forward to taint the rest of the process: “because the computer said you robbed the bank, it’s on you to prove that you didn’t”. (A reversal of the usual assumption of innocence until proven guilty.)

In other words: the data output can only be as reliable as the data input, and the results are sometimes unjust. The catch is that the introduced bias is also difficult to reverse. Even if we start "feeding" the algorithm with more diverse data, the initial bias tends to persist. It becomes entrenched. And as they say: garbage in, garbage out.

 

A child looks up at a CCTV camera (Photo by Danny Lines)

 

It’s not hard to imagine an innocent child, walking home from school, being stopped and searched by the police solely because of how they look. Then take that a step further: how are the police likely to react when the algorithm has already said that child was a possible match for a shoplifter?

This is more likely to happen to members of ethnic minorities, because they are typically underrepresented in the original data sample that trained the software (even before the criminal database is added), which means that matches for their faces are typically less precise; fuzzier.
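
If you want to see why fewer training examples means fuzzier matches, here is a deliberately crude simulation in the same toy spirit as above, with no real face data: each "person" is just a random vector, and the only thing that differs between the two groups is how many training images each person contributes to the system. The exact numbers are invented; what matters is the direction of the effect.

```python
# Toy illustration: with fewer training examples per person, each stored
# template is noisier, so distinct people get confused with one another
# more often. No real data; all quantities are invented.
import numpy as np

rng = np.random.default_rng(1)
DIM = 64  # length of each toy "face" vector

def misidentification_rate(examples_per_person, n_people=50, n_trials=2000):
    identities = rng.normal(size=(n_people, DIM))               # each person's "true" face
    noise = rng.normal(size=(n_people, examples_per_person, DIM))
    templates = (identities[:, None, :] + noise).mean(axis=1)   # averaged training images

    errors = 0
    for _ in range(n_trials):
        person = rng.integers(n_people)
        probe = identities[person] + rng.normal(size=DIM)       # new image from the street
        guess = np.argmin(np.linalg.norm(templates - probe, axis=1))
        errors += guess != person
    return errors / n_trials

# The under-represented group contributes far fewer images per person,
# and its misidentification rate comes out noticeably higher.
print("well represented :", misidentification_rate(examples_per_person=50))
print("under-represented:", misidentification_rate(examples_per_person=2))
```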

These problems are not restricted to facial recognition algorithms, nor to the UK. Other algorithms contribute to risk analyses (assessing the likelihood of reoffending, for example), and geographic crime prediction systems help policy makers decide where police officers should be distributed on patrol, and thus also where someone is more likely to be fuzzy-matched.

These systems are even used to try to predict crime before it happens, so that interventions can be made prior to the actual offence. People end up being stopped, and detained, on the basis of the algorithm's often-faulty judgement.
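
And to see how that kind of feedback can feed on itself, here is one more deliberately simple simulation, loosely inspired by the concern raised by Lum & Isaac (2016): two districts have identical true crime, and the only difference is a small skew in the historical records, which decides where the patrols (and therefore the future records) go. Every number is made up for illustration.

```python
# Toy feedback loop: patrols follow recorded crime, and recorded crime follows
# patrols, so an initial skew in the records grows even though the underlying
# crime in the two districts is identical. All figures are invented.
true_crime = {"district_A": 100, "district_B": 100}   # identical in reality
recorded = {"district_A": 60, "district_B": 40}       # slightly skewed history

for year in range(1, 6):
    # The "prediction": send most patrols to the district with more records.
    hotspot = max(recorded, key=recorded.get)
    patrols = {d: (8 if d == hotspot else 2) for d in recorded}

    # Only crime that a patrol is around to witness ends up recorded.
    for d in recorded:
        recorded[d] += true_crime[d] * 0.05 * patrols[d]

    share_a = recorded["district_A"] / sum(recorded.values())
    print(f"year {year}: district_A now holds {share_a:.0%} of recorded crime")
```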

While most of you are probably concerned about the impact such biases have on our society, particularly regarding systemic discrimination within the criminal justice system, some of you might not feel personally affected. So here's the bad news: we are all affected by the biases of algorithms, regardless of our ethnic background. Why? Because most search engines, web shops and so on use our data to target our interests and present us with "relevant" content. However nice that sounds, it is dangerous, as it leads us to confirm our own beliefs: the "echo chamber", which has arguably broken modern politics, is the result of algorithmic bias too.

Self-confirmation biases are inherently human; they are automatic processes that affect us all. In a world that is becoming ever more complex, and with publicly accessible platforms allowing everyone to share their opinions and thoughts with the whole world, this can be problematic. This is not to say that we should all shut up unless we can be unbiased and objective (which is impossible anyway), but we should indeed be critical. We shouldn't blindly trust programmes that give us no insight into their decision-making processes.

I don’t mean to say that we should demonise these innovations. However, if we simply use these programmes for important processes in our lives without any critical thinking, we are taking uncalculated risks that are impossible to assess until, well, after it’s too late (Mittelstadt et al., 2016).

Of course, it is understandable why we would like to trust the algorithms. We strive for the “objective truth”, and we all do it: whether you are trying to find out why your friend’s date ghosted them, or why you are hungover although “you did not drink that much” (or “I had lots of water!”). Yet despite this aspiration to truth, we are also lazy and, where possible, we prefer mental shortcuts. Algorithms, with their illusion of objectivity, seem like the perfect solution. Nevertheless, algorithms are man-made and, just like humans, can contain the biases we carry over into them.

What I am trying to say is that while my being wrongly identified in a controversial and inappropriate picture might be “funny”, these mistakes do not stop with your phone. We rely on algorithms and similar technologies in all areas of our individual and collective lives, and we introduce them into the world without completely understanding them or the consequences they bring with them. Although we should not stop innovation, we should also never stop being critical of the limits of our own capabilities.

 

References

Big Brother Watch UK (2020). Briefing on facial recognition surveillance. https://bigbrotherwatch.org.uk/wp-content/uploads/2020/06/Big-Brother-Watch-briefing-on-Facial-recognition-surveillance-June-2020.pdf

Bogert, E., Schecter, A., & Watson, R. T. (2021). Humans rely more on algorithms than social influence as a task becomes more difficult. Scientific Reports, 11(1), 1–9.

Cavazos, J. G., Phillips, P. J., Castillo, C. D., & O’Toole, A. J. (2021). Accuracy comparison across face recognition algorithms: Where are we on measuring race bias? IEEE Transactions on Biometrics, Behavior, and Identity Science, 3(1), 101–111. https://doi.org/10.1109/TBIOM.2020.3027269

Chen, S., Duckworth, K., & Chaiken, S. (1999). Motivated heuristic and systematic processing. Psychological Inquiry, 10(1), 44–49. https://doi.org/10.1207/s15327965pli1001_6

Institut Montaigne (2020). Algorithms: Please mind the bias! https://www.institutmontaigne.org/en/publications/algorithms-please-mind-bias

Lum, K., & Isaac, W. (2016). To predict and serve? Significance, 13(5), 14–19.

Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679

Sun, W., Nasraoui, O., & Shafto, P. (2020). Evolution and impact of bias in human and machine learning algorithm interaction. PLOS ONE, 15(8), 1–21. https://doi.org/10.1371/journal.pone.0235502

Feature photo by Eugene Lim

Lea is a Psychology student in the final year of her Bachelor’s, primarily interested in Social Psychology – mainly group dynamics, discrimination, and culture – while also taking an interest in current political and social matters. Outside the more serious matters in life, she enjoys time spent in good company, travelling, cooking, and music.

