In the late 1970s, British immigration rules stipulated that a woman coming to Britain to marry her fiancé within three months did not require a visa for entry. However, if an immigration officer suspected that a woman was already married and only pretending to be engaged in order to gain entry, she could be taken for further examination in the form of a “virginity test”. This degrading and humiliating procedure rested on the stereotypical assumption that South Asian women would be “submissive, weak and tradition bound”, and it sometimes targeted women whose paperwork was entirely in order. [1]
Women complied with the procedure to avoid being stranded in their country of origin, away from their spouses abroad. This episode of invasive scrutiny is an example of how the circumstances of racialized immigrants and travellers can force them to consent to violations of their basic rights. What we know as “security measures” is often based on biased presumptions that equate stereotypical somatic markers with criminality. For racialized travellers entering western countries, anxiety and fear are an all too familiar experience. As Qazi Mustabeen Noor writes, the airport is a “site of power relations that operate above the rule of law”. Even when racialized travellers possess appropriate paperwork and a valid purpose of visit, they are placed at the mercy of customs and immigration officers who decide who looks “suspicious”, often on racist and discriminatory grounds. Noor further notes that airport authorities possess the power to detain whomever they see fit, and that the resulting layer of scrutiny creates an environment of anxiety and hostility for the marginalized traveller. [2] With the incorporation of state-of-the-art security systems, biometrics and AI-based threat assessment, this scrutiny of racialized travellers becomes even more intrusive, exclusionary and dehumanizing. In this paper, I argue that what those Indian women faced at the immigration line can only be amplified by technology, giving rise to fear and apprehension during international travel.
In the Canadian context, Gidaris writes that after the events of 9/11 the western airport became an institution of anxiety and stuckedness. Canadian Muslims describe an uncanny anxiety at the airport that non-Muslim Canadians do not experience; Gidaris argues that Muslims are assumed to be guilty of the ideologies that led to 9/11 until they prove themselves to be a “good Muslim”. [3] Muslim travellers are subject to biometric systems whose racism is ingrained in their colonial origins and which, despite claims of being race-neutral, routinely flag those with “Muslim-looking” features. Gidaris states that over the 2000s and 2010s, the responsibility for scrutinizing racialized travellers shifted from human beings to automated systems that utilize AI.
In this paper I analyze literature that contradicts the widely held belief that artificial intelligence (AI) is unbiased. I argue that systems utilizing AI can inherit biases from human beings while at the same time introducing biases of their own. I also analyze the biometric systems in use at airports, their colonial origins and the biases they operate on. Despite the widespread trust that governments and airport authorities place in these systems, I attempt to show how they perpetuate feelings of anxiety and stuckedness among racialized travellers.
The farce of neutrality
AI has seen substantial growth in recent years, and new AI milestones make headlines in established news outlets almost daily. One such headline reports that Toronto Pearson Airport is to implement an AI-powered system to detect concealed weapons on passengers. [4] The use of AI is not new in the airport space: facial recognition systems such as FaceIt by Visionics Corporation have been used for airport surveillance since the 9/11 attacks. The use of algorithms has often been defended as unbiased and apolitical; however, recent years have seen considerable discourse on the biases inherent in AI systems.
Famous examples of AI bias include the “unprofessional hair” incident [5], where a Google search for “unprofessional hair” would return almost exclusively images of black women with natural hair. Another example is predictive policing technology in the US, which predicts the locations where crimes are supposedly more likely to occur; the system’s results were heavily skewed towards predominantly black neighborhoods. [6] AI bias can manifest in several ways, one of which is that AI can inherit real-world biases from data. In Rage Inside The Machine, Robert Elliott Smith explains that AI systems are complicated functions that do not possess “intuition”, at least not in the way human intelligence does. Rather, AI uses statistics to predict sets of outcomes and takes action based on the result. For instance, Google DeepMind’s famous AlphaGo “plays” the game of Go by using statistical analysis derived from its long history of games played against itself to determine which move is most likely to win. The primary limitation of this sort of statistical analysis is that historical biases in the training data can easily be baked into a system, leading to discriminatory outcomes. [7]
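To make that mechanism concrete, the following is a minimal sketch with invented data and a deliberately simplified “model”: it shows how a system can learn a historical double standard from its labels even though group membership is never used as a feature. The scores, the postcode proxy and the cutoff-fitting procedure are all hypothetical stand-ins, not any real system.

```python
# Hypothetical sketch: a "group-blind" model inherits bias from historical labels.
import random

random.seed(0)

def historical_decision(score, group):
    # Past human decisions demanded a higher score from group "B".
    return score >= (50 if group == "A" else 70)

def make_applicant():
    group = random.choice("AB")
    score = random.uniform(0, 100)
    # A proxy feature correlated with group (e.g. a segregated postcode).
    postcode = 1 if (group == "B") == (random.random() < 0.9) else 0
    return score, postcode, group

train = [make_applicant() for _ in range(5000)]
labels = [historical_decision(score, group) for score, _, group in train]

def best_cutoff(rows):
    # Stand-in for model fitting: pick the score cutoff that best matches the labels.
    return max(range(101), key=lambda c: sum((s >= c) == y for s, y in rows))

cutoffs = {
    p: best_cutoff([(s, y) for (s, pc, _), y in zip(train, labels) if pc == p])
    for p in (0, 1)
}
print("Learned score cutoffs by postcode:", cutoffs)
# Postcode 1 (mostly group "B") ends up with a markedly higher cutoff:
# the historical double standard survives in a model that never sees "group".
```

Nothing in the sketch mentions the group directly, yet the learned behaviour reproduces the discriminatory pattern in the data, which is exactly the dynamic Smith describes.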
A recent example that affected real lives came in 2020, when the coronavirus pandemic meant that A-level examinations could not be held. The UK government’s solution was to assign grades algorithmically. The results were controversial, as countless students and parents complained that the assigned grades were lower than those predicted by their schools and teachers. The algorithm used institutions’ historic results as its reference, which biased outcomes against schools that were under-resourced and underfunded, and turned high grades into a “postcode lottery”. [8] This significantly stalled the prospects of brilliant students from historically underprivileged schools who had otherwise been predicted strong grades. Though industry experts could easily have anticipated such a pitfall, the UK government was more than willing to use this technology, illustrating governments’ readiness to throw technology they do not understand at problems they cannot solve.
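The following is a simplified sketch of the idea described above, not the actual procedure used by the exams regulator: it imposes a school’s historical grade distribution on this year’s cohort via the teachers’ rank order, so even the strongest current student cannot exceed the grades the school produced in the past. The student names and distributions are invented.

```python
# Simplified, hypothetical sketch: grades follow the school's past distribution,
# applied to the teachers' rank order, regardless of individual predictions.
def assign_grades(ranked_students, historical_distribution):
    """ranked_students: names ordered best-to-worst by their teachers.
    historical_distribution: share of each grade the school achieved in
    previous years, e.g. {"A": 0.1, "B": 0.2, "C": 0.4, "D": 0.3}."""
    n = len(ranked_students)
    grades, taken = {}, 0
    for grade, share in historical_distribution.items():
        quota = round(share * n)
        for student in ranked_students[taken:taken + quota]:
            grades[student] = grade
        taken += quota
    for student in ranked_students[taken:]:  # assign any rounding leftovers
        grades[student] = list(historical_distribution)[-1]
    return grades

# A school that historically produced no A grades caps even its strongest
# current student at a B, whatever their teachers predicted.
print(assign_grades(["Asha", "Ben", "Chen", "Dina", "Ede"],
                    {"A": 0.0, "B": 0.2, "C": 0.4, "D": 0.4}))
```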
Smith argues that while the motivations of algorithms are assumed to be at worst ambivalent, most algorithms have one primary motivation: profit [7:1]. It is best to remember that algorithms are created by software companies with a profit motive in mind. Examples include the algorithms that determine worker schedules in delivery apps and the recommendation algorithms on social media sites, which are designed to keep users on the platform at those users’ expense. The profit motive coded into these AI systems supersedes any possible harm the systems may cause, while shielding their creators from responsibility for that harm. Algorithms in delivery apps are designed to extract the most from their “riders” through mechanisms which, according to Smith, mimic the industrial-age factories of eighteenth-century Britain. Simplistic recommendation systems on YouTube surface content that is more likely to be clicked on. In the political sphere this usually means videos aligned with the user’s pre-existing beliefs, while contrary content is hidden. Iterated over time, this leads to increasing radicalization among users and the formation of information bubbles.
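As a rough illustration of the feedback loop just described, the sketch below shows a recommender whose only objective is the observed click rate. The topics and probabilities are invented and real systems are vastly more sophisticated, but the narrowing dynamic is the same.

```python
# Hypothetical sketch: a greedy click-maximizing feed narrows what a user sees.
import random

random.seed(1)

topics = ["left politics", "right politics", "sports", "science"]
# The user's (hidden) tendency to click each topic.
true_click_prob = {"left politics": 0.7, "right politics": 0.1,
                   "sports": 0.1, "science": 0.1}

clicks = {t: 1 for t in topics}  # observed clicks per topic
shows = {t: 2 for t in topics}   # times each topic has been shown

for _ in range(2000):
    # Engagement objective: always show the topic with the best click rate so far.
    chosen = max(topics, key=lambda t: clicks[t] / shows[t])
    shows[chosen] += 1
    if random.random() < true_click_prob[chosen]:
        clicks[chosen] += 1

total = sum(shows.values())
print({t: round(shows[t] / total, 3) for t in topics})
# A mild preference snowballs into a feed dominated by a single topic,
# while everything else is effectively hidden.
```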
A lack of diversity in the teams that develop algorithms can also lead to skewed results. Software created in Western countries is more likely to be built by teams dominated by white men, and this under-representation often leads to biased results that a more diverse team might have spotted. An example is computer vision, where widely used facial analysis classifiers from Microsoft, IBM and other companies were found to have the lowest error rates on lighter-skinned male faces and the highest error rates on darker-skinned female faces. [9] Noble reports a far more egregious instance from 2015, when Google’s image systems associated pictures of black men and women with the label “gorilla”. [10] Perhaps the most alarming aspect of this incident was that Google’s short-term solution was to remove “gorilla” from its image labels entirely; unlike ordinary software bugs of this magnitude, which are typically hotfixed within days, the underlying problem could not be fixed so easily. As Smith argues, AI algorithms develop a complexity that quickly becomes incomprehensible even to their creators, leading some to compare AI to modern-day alchemy. [7:2]
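The kind of audit that surfaced this disparity can be pictured as a disaggregated evaluation: rather than a single aggregate accuracy figure, error rates are computed per intersectional subgroup. The sketch below uses a handful of invented records purely to show the bookkeeping, not any real benchmark.

```python
# Hypothetical sketch of a disaggregated audit: error rates per subgroup.
from collections import defaultdict

# Each record: (skin_type, gender, classifier_was_correct) -- invented data.
results = [
    ("lighter", "male", True), ("lighter", "male", True),
    ("lighter", "female", True), ("lighter", "female", False),
    ("darker", "male", True), ("darker", "male", False),
    ("darker", "female", False), ("darker", "female", False),
]

totals, errors = defaultdict(int), defaultdict(int)
for skin, gender, correct in results:
    group = (skin, gender)
    totals[group] += 1
    errors[group] += 0 if correct else 1

for group in sorted(totals):
    print(f"{group}: error rate {errors[group] / totals[group]:.0%}")
# The single aggregate error rate here is 50%, which hides a range from
# 0% for lighter-skinned men to 100% for darker-skinned women.
```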
While the examples above showcase biases that come from historical data, this might give the impression that the algorithms themselves are unbiased. As Catherine Stinson discusses, AI developers commonly believe that all forms of AI bias originate from bias in data; however, several forms of bias are inherent to well-known algorithms. As a case study, Stinson analyzes biases in the collaborative filtering algorithms used in recommendation systems. These include selection bias, where the system chooses which items the user initially rates and then iterates on that choice; cold start bias, where new items are less likely to be recommended than existing ones; popularity bias, where popular items get recommended regardless of what the user prefers; and the homogenization effect, where the variance in what the algorithm recommends narrows over time. All of these biases occur independently of data bias and, if left uncorrected, can lead to discriminatory results. [11] While some of these manifest merely as annoying recommendations, there are far more harmful instances. In Algorithms of Oppression, Noble discusses how, between 2009 and 2016, searching Google for “black girls” would return pornographic results that objectified and sexualized black bodies. At the same time, businesses owned by African Americans struggled to be recommended on Yelp, as searching for “black owned business” would return results for businesses run by white owners. [10:1]
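Popularity bias in particular is easy to see in even the simplest item-based collaborative filtering: candidates are scored by how often they co-occur with what the user liked, so an item that appears in most histories outranks a closer niche match. The sketch below uses invented listening histories to show the effect; it is a toy illustration, not Stinson’s analysis or any deployed recommender.

```python
# Hypothetical sketch of popularity bias in naive item-based recommendation.
from collections import Counter

# Past "baskets" of items that users interacted with (invented data).
histories = [
    {"hit_song", "podcast"}, {"hit_song", "indie_song"},
    {"hit_song", "jazz_album"}, {"hit_song", "jazz_album", "podcast"},
    {"indie_song", "jazz_album"}, {"hit_song"},
]

def recommend(liked_item, k=2):
    # Score each candidate by how often it co-occurs with the liked item.
    scores = Counter()
    for basket in histories:
        if liked_item in basket:
            for other in basket - {liked_item}:
                scores[other] += 1
    return [item for item, _ in scores.most_common(k)]

# "hit_song" tops the list for a jazz listener simply because it appears in
# most baskets -- the popular item crowds out the closer match.
print(recommend("jazz_album"))
```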
In either case, it is a fair assessment that bias can be inherent to AI systems, whether it originates externally in the data or internally in the algorithm. While AI can undoubtedly be a useful tool in the right circumstances, one must be aware of its dangers before deploying it. As we shall see, however, companies and governments not only fail to fully understand AI systems but are more than willing to throw them at use cases that actively harm already marginalized people.
Algorithms as Border Security
According to Constantine Gidaris, the history of using algorithmic techniques for surveillance predates the modern airport and can be traced to colonialism and the transatlantic slave trade. The British were early adopters of mass fingerprint-based civil surveillance in colonial India after the Sepoy Mutiny of 1857. Gidaris writes that this was intended as a method of control over the “unruly colonies” and rested on the supposed inferiority of the colonized, whose possible criminality was written on their bodies. This assumption was mired in racism and built on the anthropometric knowledge of the time, which held that criminality was ethnic and hereditary [3:1]. Transatlantic slave traders, meanwhile, used branding to mark enslaved people, employing physical markers to detach them from ownership of their own bodies.
Smith writes that early research on algorithms was deeply linked to anthropometric theories of evolution in the late 1800s and early 1900s. With “survival of the fittest” as the tagline emerging from Darwin’s research, many at the time believed that the “fittest” could also refer to factors such as race and gender. One of the most popular discoveries of that era was the normal distribution, or bell curve, a statistical model that can describe quantities such as the distribution of height or weight in a population. The problem lay in the assumption that such distributions were “normal” or “natural”, and that they also applied to intelligence. This led to the creation of the I.Q. test as a means to “reliably” measure intelligence, despite opposition from other academics even at the time. However, Smith writes that this was a period in which much of the US public held the racist view that Irish immigrants were “an inferior race and mentally defective”. In 1882 the US Congress passed a law preventing the entry of “feeble-minded” people, in the belief that they would, “through intermarriage and reproduction with Americans, lower the quality of the white, Anglo-American majority”.
While such a law was initially difficult to implement, as there was no way to measure intellect, in 1913 the I.Q. test exploded in popularity as a means of measuring incoming immigrants’ intelligence, forming an early basis of eugenics, which would go on to inspire much of Nazi ideology. [7:3] While the academic pursuit of eugenics was no longer acceptable after World War II, elements of it survived, their pseudoscience and biases intact, in biometric and algorithmic systems of surveillance. The legacy of colonial biometric surveillance continues in the modern systems in use at airports, which remain systemically racist and discriminatory.
The aforementioned facial recognition systems, optimized for lighter-skinned users, are less able to decipher detail in darker-skinned faces. Faces that cannot be read are often interpreted by humans as angry or suspicious and attract higher levels of scrutiny from authorities. As Barrett, Adolphs and Marsella write, research into criminal cases suggests “that defendants who are perceived as untrustworthy receive harsher sentences than they otherwise would”. [12] This is because juries and judges have historically used their reading of facial emotion as a guide to assessing a defendant’s character; as a result, defendants whose facial features match what is commonly believed to be the appearance of “anger” are judged more harshly than others. Barrett, Adolphs and Marsella found that these human biases are now coded into widely used facial analysis algorithms, which claim to be able to read people’s emotions but are far less accurate at detecting anger than at detecting happiness.
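A single headline accuracy figure can mask exactly this asymmetry. The sketch below, using an invented confusion matrix rather than any published results, shows how per-emotion recall exposes a system that is reliable on happiness but not on anger, the emotion most likely to invite extra scrutiny.

```python
# Hypothetical confusion matrix: per-emotion recall versus overall accuracy.
confusion = {
    # true emotion -> counts of each predicted label (invented numbers)
    "happiness": {"happiness": 90, "neutral": 8, "anger": 2},
    "anger":     {"anger": 35, "neutral": 40, "happiness": 25},
}

correct = total = 0
for true_label, preds in confusion.items():
    recall = preds.get(true_label, 0) / sum(preds.values())
    print(f"recall for {true_label}: {recall:.0%}")
    correct += preds.get(true_label, 0)
    total += sum(preds.values())

print(f"overall accuracy: {correct / total:.1%}")
# A respectable-sounding overall figure hides that "anger" is recognized
# far less reliably than "happiness".
```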
The impact of facial recognition on Muslim travellers was particularly exacerbated after the events of 9/11, which created far greater demand for airport security as a frontline defense against terrorism. This demand was exploited by companies such as Visionics Corporation, whose technology promised to solve the problem of airport security. As Kelly A. Gates points out, the only way such systems could operate was for international agencies to compile databases of known terrorists. [13] Such databases were compiled, as agencies across western countries sought to boost frontline security against terrorism, and they were used to train a generation of facial recognition algorithms. The biases of the time became ingrained in these systems, sparking a new automated war against the brown, bearded “Muslim enemy”.
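At its core, such a system performs one-to-many watchlist matching: a traveller’s face is encoded as a numeric vector and compared against every database entry, and anything above a similarity threshold triggers secondary screening. The sketch below is a toy illustration with invented three-dimensional “embeddings”; real systems use learned embeddings with hundreds of dimensions, but the arithmetic, and the way a lower threshold multiplies false matches across millions of comparisons, is the same in spirit.

```python
# Toy sketch of 1:N watchlist matching with a similarity threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

THRESHOLD = 0.9  # operational choice: lower it and "hits" (and false matches) rise

def screen(traveller_embedding, watchlist):
    """Return every watchlist entry whose similarity clears the threshold."""
    return [(name, round(cosine_similarity(traveller_embedding, emb), 3))
            for name, emb in watchlist.items()
            if cosine_similarity(traveller_embedding, emb) >= THRESHOLD]

# Invented embeddings -- real systems use hundreds of learned dimensions.
watchlist = {"entry_001": [0.9, 0.1, 0.4], "entry_002": [0.2, 0.8, 0.5]}
traveller = [0.88, 0.15, 0.42]  # an unrelated person who happens to look "close"

print(screen(traveller, watchlist))  # flags entry_001 and sends them to secondary
```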
As Smith points out, these facial recognition algorithms are not unbiased, because they are driven by a profit motive. Corporations such as Visionics sought to maximize sales of their products, and they did so by exploiting the geopolitical climate at the expense of Muslim populations around the world. These algorithms operate on the fundamentally incorrect assumption that terrorist intent is somehow linked to appearance. The “face of terror” used as a point of reference was overwhelmingly a Muslim-looking face, and algorithms fundamentally designed to flag Muslim-looking travellers saddled Muslims with no connection to extremist ideologies with the burden of proving their innocence. Yet despite everything we know, algorithms that rely on biometrics are still widely trusted by government officials as a reliable mode of defense. The Canadian government stated in 2018 that “biometric screening has proven effective in protecting the safety and security of Canadians and the integrity of the immigration system”. [14]
Conclusion
While AI will undoubtedly find extremely helpful applications, there is growing awareness of the problems inherent to it and of the risks of implementing algorithmic solutions without fully understanding their dangers. Combining AI with biometrics and aiming the reticle at human beings in order to solve complex problems such as airport security and immigration is bound to harm groups of people who are historically oppressed. Yet despite these findings, and despite governments and policymakers not fully understanding AI systems, airports all over the world are rapidly adopting AI-powered biometric systems. Gidaris writes that airports in Buenos Aires, Amsterdam and Australia have implemented or are in the process of implementing automated screening systems. [3:2] This trend is likely to impose a form of control on international travel that puts one’s racial identity and appearance at the forefront. The oppressive nature of such governmentality can be concealed behind the layer of abstraction these automated systems provide, shifting the burden of responsibility from humans to a system that is cold and faceless. For marginalized groups such as Muslims, the future at the airport is likely to be an exercise in fear and anxiety.
Qureshi, H., 2011. Passport, Visa, Virginity? A Mother's Tale Of Immigration In The 1970s. [online] The Guardian. Available at: Link [Accessed 11 January 2021]. ↩︎
Noor, Q. M., 2020. Under white scrutiny: The airport as a site of apprehension and oppression for the racialized traveller. [Unpublished manuscript] CULTRST 721: Writing, Land, and Place, McMaster University. ↩︎
Gidaris, C., 2020. The Carceral Airport: Managing Race as Risk through Biometric Systems and Technologies. Public: Art, Culture, Ideas, vol. 30, no. 60, pp. 76-91. ↩︎ ↩︎ ↩︎
McQuigge, M., 2019. Toronto Pearson Airport To Use AI-Powered Technology To Detect Weapons. [online] Global News. Available at: Link [Accessed 11 January 2021]. ↩︎
Alexander, L., 2016. Do Google's 'Unprofessional Hair' Results Show It Is Racist? [online] The Guardian. Available at: Link [Accessed 11 January 2021]. ↩︎
Cumming-Bruce, N., 2020. U.N. Panel: Technology In Policing Can Reinforce Racial Bias. [online] Nytimes.com. Available at: Link [Accessed 11 January 2021]. ↩︎
Smith, R. E., 2019. Rage Inside The Machine. The Prejudice Of Algorithms, And How To Stop The Internet Making Bigots Of Us All. London: Bloomsbury Publishing Ltd. ↩︎ ↩︎ ↩︎ ↩︎
Osborne, C., 2020. When algorithms define kids by postcode: UK exam results chaos reveal too much reliance on data analytics. [online] ZDNet.com. Available at: Link [Accessed 11 January 2021]. ↩︎
Buolamwini, J., and Gebru, T., 2018. Gender shades: Intersectional accuracy disparities in commercial gender classification. Conference on fairness, accountability and transparency, 77–91. ↩︎
Noble, S. U., 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: NYU Press. ↩︎ ↩︎
Stinson, C., 2020. Algorithms are not neutral: Bias in collaborative filtering (forthcoming). In Proceedings of the 6th World Humanities Forum. ↩︎
Barrett, L., Adolphs, R., Marsella, S., Martinez, A. and Pollak, S., 2019. Emotional Expressions Reconsidered: Challenges to Inferring Emotion From Human Facial Movements. Psychological Science in the Public Interest, 20(1), pp.1-68. ↩︎
Gates, K. A., 2011. Our biometric future: facial recognition technology and the culture of surveillance. Choice Reviews Online, 48(12) ↩︎
Immigration, Refugees and Citizenship Canada, 2018. Canada Expands Its Biometrics Screening Program. [online] Canada.ca. Available at: Link [Accessed 11 January 2021]. ↩︎