by Ali Murphy
AI is alarming proof of our blind-spots, revealing just how we see.
In around 530 BCE, Pythagoras founded what is thought to be the first school of mathematics, its dictum being that 'all is number'.
In this issue of Radical Art Review, we're looking towards the future, so an ancient mathematician might seem an odd place to begin. But was Pythagoras really so far from the truth? Whilst we might think our cultural and ethical practices rest on something other than number (though no less concrete), in this article I'll explore how numbers and ethics intersect.

Digital spies
I am talking about code. More specifically, I'm talking about artificial intelligence. Is AI - the fruit of the human mind - ever deserving of the accolade 'brain 2.0'? Does it really possess intelligence without the perils of human bias?
We encounter AI code every day. The most familiar example is the screen of our smartphone, which unlocks using facial recognition software. This seems a fairly obvious marketing tool: a convenient function that makes it feel as though my iPhone really knows me and is personalised to work optimally for me. But have you ever walked along talking to a friend about a specific product or travel destination with your iPhone in your pocket or hand, and then received targeted Facebook and Instagram adverts you might not even realise are targeted? And whilst this Big-Brother-esque use of AI might seem merely intrusive, the same family of technologies is also trusted with far weightier decisions, like identifying suspects and informing how long people are detained after conviction.
Progressive technology?
Our use of code is increasingly broad, and we now rely on AI to make vitally important decisions. But how fair is AI? Are numbers as impartial as they seem on the surface?

Artificial intelligence requires the input of data by computer scientists and data analysts, and this is not simply a question of feeding large volumes of data into a computer system. As Joy Buolamwini explores in her ground-breaking Master's thesis Gender Shades, progressive technology does not guarantee a progressive approach to its use.
AI is generally taken to reflect the limitlessness of human creation, the prodigy of the entire computer science community. We have moved on from the biblical creative process, in which we are made in the image of that which we aspire to be. In manipulating code to produce AI, we create in the image of that which we can never be: completely impartial and emotionless. Is this even possible? Can humans build a machine that exceeds our own capabilities by offering complete neutrality?
Sharp white background data
When Buolamwini began working with facial recognition software as an undergraduate student of computer science, she noticed that the software she was testing could not detect her face. It registered her features only when she covered them with an expressionless, texture-less white mask. The technology quite literally could not identify her as a subject.
In her best-known book, Citizen: An American Lyric, Claudia Rankine incorporates Glenn Ligon's artwork (itself quoting Zora Neale Hurston), which states that 'he feels most coloured when thrown against a sharp white background'. Buolamwini's experience shows how AI both distorts and conforms to this sentiment. The inability of artificial intelligence to recognise a black face is evidence of the inequality in the data and coding that operate in the background of AI software.
Machines - whether we like it or not - are made from the data we supply them with. If our algorithms are benchmarked against a "generic" – read white, male – data set, then our machines will be incapable of overriding our social inequalities, as the sketch below illustrates. The technology that we birth reflects our own prejudices: a contemporary Frankenstein's monster.
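To make the mechanism concrete, here is a minimal, purely illustrative Python sketch (not Buolamwini's code): a classifier trained on a benchmark dominated by one group performs markedly worse on an under-represented group. Every number, group label and data point here is synthetic and hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two classes per group; `shift` moves the whole group's features,
    # standing in for systematic differences between groups.
    X = np.vstack([rng.normal(shift, 1.0, (n, 2)),
                   rng.normal(shift + 2.0, 1.0, (n, 2))])
    y = np.array([0] * n + [1] * n)
    return X, y

# A benchmark skewed towards group A, the assumed "generic" default.
Xa, ya = make_group(500, shift=0.0)  # well-represented group A
Xb, yb = make_group(25, shift=4.0)   # under-represented group B
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluated on balanced held-out sets, the accuracy gap mirrors the skew.
for name, shift in [("A", 0.0), ("B", 4.0)]:
    Xt, yt = make_group(200, shift)
    print(f"group {name} accuracy: {model.score(Xt, yt):.2f}")
```

In this toy setup the model simply learns the majority group's pattern and misreads a large share of group B: the machine does exactly what its data taught it to do.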
Out of sync
Buolamwini found that black women were 32 times more likely to be misgendered by facial recognition technology than white men. Not only this, but many of the AI systems she studied translated 'male' and 'female' directly into 'men' and 'women' subjects in a rigidly binary, normative manner. Artificial intelligence, Buolamwini shows, is not an unbiased, definitive human advancement: it is alarming proof of our blind-spots, revealing just how we see.
The "generic" or standard human is still, in testing and technology, defined as a white male. Our socio-cultural progression and our technological advancement are visibly out of sync. If AI is used in proceedings as crucial as identifying suspects and determining how long people are detained after conviction, the stakes are exceedingly high. Buolamwini shows that whilst number in itself may be impartial, our manipulation of it is not so straightforward; the shape of her method is sketched below.
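At its core, this kind of audit is disaggregated evaluation: measuring error rates per demographic subgroup rather than quoting one overall figure. A toy Python sketch, using invented placeholder records rather than Buolamwini's actual benchmark:

```python
from collections import defaultdict

# (subgroup, predicted gender, actual gender) -- invented placeholder rows,
# not data from the Gender Shades benchmark.
records = [
    ("darker-skinned female", "male", "female"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "male", "female"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]

errors = defaultdict(int)
totals = defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    errors[group] += predicted != actual  # True counts as 1

for group, total in totals.items():
    print(f"{group}: {errors[group] / total:.0%} misgendered "
          f"({errors[group]}/{total})")
```

A single aggregate score over these six rows would read as 67% accurate and hide the disparity entirely; only breaking the errors down by subgroup exposes who the system fails.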
AI has the potential to dramatically change our future, but if this immense potential is to be realised, equality must be at the core of our approach to code. AI reflects our socio-cultural blind-spots in what it cannot comprehend. If we are to secure an AI future, as Buolamwini concludes, 'we must increase transparency in our approach to coding'.
Joy Buolamwini is the Founder of the Algorithmic Justice League.