
The overwhelming 'Whiteness' of artificial intelligence, from stock images and cinematic robots to the dialects of virtual assistants, removes people of colour from humanity's visions of its high-tech future.
If the developer demographic does not diversify, AI stands to exacerbate racial inequality
Kanta Dihal
This is according to experts at the University of Cambridge, who suggest that current portrayals and stereotypes about AI risk creating a "racially homogenous" workforce of aspiring technologists, building machines with bias baked into their algorithms.
They say that cultural depictions of AI as White need to be challenged, as they do not offer a "post-racial" future but rather one from which people of colour are simply erased.
The researchers, from Cambridge's Leverhulme Centre for the Future of Intelligence (CFI), say that AI, like other science fiction tropes, has always reflected the racial thinking in our society.
They argue that there is a long tradition of crude racial stereotypes when it comes to extraterrestrials, from the "orientalised" alien Ming the Merciless to the Caribbean caricature of Jar Jar Binks.
But artificial intelligence is portrayed as White because, unlike species from other planets, AI has attributes used to "justify colonialism and segregation" in the past: superior intelligence, professionalism and power.
"Given that society has, for centuries, promoted the association of intelligence with White Europeans, it is to be expected that when this culture is asked to imagine an intelligent machine it imagines a White machine," said Dr Kanta Dihal, who leads CFI's 'Decolonising AI' initiative.
"People trust AI to make decisions. Cultural depictions foster the idea that AI is less fallible than humans. In cases where these systems are racialised as White that could have dangerous consequences for humans that are not," she said.
Together with her colleague Dr Stephen Cave, Dihal is the author of a new paper on the case for decolonising AI, published today in the journal Philosophy & Technology.
The paper brings together recent research from a range of fields, including Human-Computer Interaction and Critical Race Theory, to demonstrate that machines can be racialised, and that this perpetuates "real world" racial biases.
This includes work on how robots are seen to have distinct racial identities, with Black robots receiving more online abuse, and a study showing that people feel closer to virtual agents when they perceive shared racial identity.
"One of the most common interactions with AI technology is through virtual assistants in devices such as smartphones, which talk in standard White middle-class English," said Dihal. "Ideas of adding Black dialects have been dismissed as too controversial or outside the target market."
The researchers conducted their own investigation into search engines, and found that all non-abstract results for AI had either Caucasian features or were literally the colour white.
A typical example of AI imagery adorning book covers and mainstream media articles is Sophia: the hyper-Caucasian humanoid declared an "innovation champion" by the UN development programme. But this is just a recent iteration, say the researchers.
"Stock imagery for AI distills the visualizations of intelligent machines in western popular culture as it has developed over decades," said Cave, Executive Director of CFI.
"From Terminator to Blade Runner, Metropolis to Ex Machina, all are played by White actors or are visibly White onscreen. Androids of metal or plastic are given white features, such as in I, Robot. Even disembodied AI, from HAL-9000 to Samantha in Her, have White voices. Only very recently have a few TV shows, such as Westworld, used AI characters with a mix of skin tones."
Cave and Dihal point out that even works clearly based on slave rebellion, such as Blade Runner, depict their AIs as White. "AI is often depicted as outsmarting and surpassing humanity," said Dihal. "White culture can't imagine being taken over by superior beings resembling races it has historically framed as inferior."
"Images of AI are not generic representations of human-like machines: their Whiteness is a proxy for their status and potential," added Dihal.
"Portrayals of AI as White situate machines in a power hierarchy above currently marginalized groups, and relegate people of colour to positions below that of machines. As machines become increasingly central to automated decision-making in areas such as employment and criminal justice, this could be highly consequential."
"The perceived Whiteness of AI will make it more difficult for people of colour to advance in the field. If the developer demographic does not diversify, AI stands to exacerbate racial inequality."