In the film Weird Science, teenage geeks Gary and Wyatt discover a way to use their computer hacking skills to create a fantasy woman who grants all their wishes. It’s a sad reality of life for teenage geeks (I know, because I was one myself) that fantasy women are a high priority for their attention. But for geeks who struggle to understand emotions, and enjoy spending time with machines (once again, I speak from experience), it can be a challenge even to meet women, let alone start a relationship with one.
The experiences and fantasies of straight teenage boys and immature young men have played a depressingly large part in the development of computer technology, to the extent that the 1985 fantasy of Weird Science has, over the last 40 years, nearly been achieved in a worldwide enterprise of Weird Computer Science. In the first flush of academic research into location technologies and mobile computing, the kinds of new social media products being pitched at research conferences and to business investors were all too often summarised in the catchphrase “girlfriends for geeks”. Facebook would never have existed if it hadn’t been for a few young men at Harvard who wanted a catalogue of women they might be able to ask on dates. The Facebook founders got lucky (in more ways than one), but it’s more common for male computer science students to find greater empathy and success with machines than with women. Many computer science projects today aim to re-code social networks and emotions as an objective and quantifiable science, hoping to tidy up the messiness of actual human relationships.
Computer science is definitely weird as a result of its domination by immature men. But this is only a small part of an even bigger problem, which is that computer science really is WEIRD. The acronym WEIRD was created by researchers in psychology[1], drawing attention to the scandal that almost all published research in academic psychology had until then been based (and still largely is) on studies of people in countries that are Western, Educated, Industrialised, Rich and Democratic - in other words, WEIRD.
The obvious conclusion was that, if psychology is supposed to be a universal theory of human cognition[2], rather than just a theory about a WEIRD minority, it would be necessary to blow open the doors of the research community and its methods, finding ways for psychology researchers to study people beyond those (largely their own students) who not-very-coincidentally happened to be very much like the researchers themselves.
Computer science does use a lot of research methods from psychology, in the field of human-computer interaction (HCI). My own PhD, after an early career in engineering and computer science, was in a psychology department for that reason. The objective of HCI at that time was to observe and learn how ordinary people thought about computers, in order to help make the computers easier to use (more “intuitive”), and in the hope of designing user interfaces that fit better with how people naturally think and work. Many of the innovative visualisations reported in chapter 8 follow a line of research from my own PhD, building on the work of applied psychologists like my supervisor Thomas Green.
However, there have been limits to the success of such research. People are complex and contradictory, and design is difficult because there are usually conflicting requirements and trade-offs, not easily recognisable opportunities or solutions. This is already disappointing to computer scientists who would prefer a simple answer - a recipe that would guarantee a successful product if the engineer followed a predictable set of steps based on scientific evidence.
A further problem is that classical HCI researchers, including myself, had not understood the wider issue: we were doing our own kind of WEIRD science, blinded by biased expectations, and testing our experimental designs with people who, although they might not be computer experts, were actually fairly similar to the researchers themselves. This way of doing research might be acceptable for a psychology student (although hardly recommended), but it’s no way to make a globally successful product or business: ignoring the majority of potential customers, and marketing only to a WEIRD minority who have the same needs and interests as the developer.
In order to avoid that problem, modern HCI researchers working for a multinational company are far more likely to use research methods from anthropology, rather than cognitive psychology. Anthropologists have the opposite priorities from the WEIRD discipline of psychology. The business of anthropologists is precisely to study people who are not like them, their friends and families, or perhaps anyone they have ever met. If a software company aspires to a product that will be popular all over the world, they will probably do better to listen to anthropologists rather than psychologists. There are also some promising adjustments to the traditional cognitive approaches, for example Microsoft Research Cambridge working in partnership with sister laboratories in India and Africa to recruit a more diverse sample of participants for new studies of end-user programming in Excel[3].
I want to step back at this point, to discuss some of the history that has led us to this position. The intellectual foundations of computer science move surprisingly slowly, despite the continual bluster of technology promoters and showmen celebrating the latest AI algorithm. Many of the technical and mathematical concepts in everyday software engineering and AI come from the 19th century, rather than the 20th. Science at the peak of the industrial revolution was determined by the logic of fossil fuel extraction, colonialism, and slavery - all great for business, and all terrible for humanity, as we have come to understand. (Even after 200 years, those things do have some defenders - but not usually among scientific researchers, apart from a few prominently WEIRD technologists.)
This story goes back to 19th century London, around the time that Charles Babbage applied his insights from mathematical economics to realise that routine aspects of intellectual labour could be profitably mechanised by his Difference Engine, and as economist Karl Marx was realising how AI would work - that humans would become “conscious linkages” in complex systems that appear intelligent only because of the human labour hidden inside[4].
In the second half of that century, another young man in London was becoming a celebrity for building a mathematical logic of exploitation based on the work of his cousin Charles Darwin. Francis Galton, after studying mathematics, had taken a job in southern Africa working as a surveyor to define colonial boundaries that persist today. South African historian Keith Breckenridge, in his book Biometric State[5], describes Galton’s travels with an anthropologist whose role was to document local populations in the areas being surveyed. Galton’s own view was that many people he met were happiest when told what to do by more advanced white people. He also amused himself with a hobby that could only come from an immature geek - using his surveying instruments to measure the bodies of the African women he saw.
Galton had no particular reputation when working in southern Africa, but after returning to England, he combined his interest in scientific instruments with his mathematical training, and with public interest in the discoveries of his cousin Darwin. Galton hoped to demonstrate racial superiority, justifying (economic) survival of the (white) fittest, by measuring people and making statistical comparisons between groups. Charles Darwin’s son Horace had established the very first of the “Cambridge Phenomenon” technology companies, Cambridge Scientific Instruments[6], and Galton commissioned a set of tools from Horace Darwin for use in measuring grip strength, head circumference, hair colour, and all kinds of other factors that were considered evidence of superiority. These anthropometric instruments were a great success at London exhibitions, where people queued to have their own superiority confirmed through measurement. The instruments also became the basis of the long-running Anthropometric Project in my own university, which measured Cambridge undergraduates in order to correlate their head measurements with academic results.
We might laugh at the unsuccessful scientific experiments of 100 years ago, but today the Anthropometric Project seems worse than ridiculous. We have seen the progression from scientific curiosity (whatever its questionable motivations in Galton’s earlier life) to the eugenics projects of the Nazis and other racists, who continue to suggest that it is possible “scientifically” to improve the human race by excluding or even eliminating those less fit by some supposedly objective measure. The relationship to British colonialism and slavery is less often commented on (especially in British histories), and Galton and his students are still widely celebrated for their invention of modern statistical methods - in fact, the very same statistical methods that were used in my own psychology PhD. The idea of measuring WEIRD people as a basis for “intuitive” software design now seems a worryingly racialised project.
Unfortunately, many modern machine learning algorithms build directly on mathematical principles that were invented as part of the racist legacy of Galton. For example, the word “regression” was first used as a way of predicting the supposed regression to inferior racial traits across successive human generations, who might appear briefly to be advancing, but whose children would always revert to type because they had descended from inferior genetic stock[7]. The word regression is used every day by computer scientists who have no idea that it was originally created to support a racist theory (although the word itself was always there as a clue, for anyone sufficiently curious to ask).
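For readers curious how the statistical and the eugenic senses of the word coincide, here is a minimal sketch of the relation Galton called “regression towards mediocrity”, written in modern notation rather than his own: the predicted deviation of a child’s height from the population mean is only a fraction of the parents’ deviation,

\[ \hat{y} - \bar{y} = r\,(x - \bar{x}), \qquad 0 < r < 1, \]

where \(x\) is the mid-parent height, \(\hat{y}\) is the predicted child height, and \(r\) is the regression coefficient (Galton estimated it at roughly two thirds for stature). Because \(r\) is less than one, exceptional parents tend to have less exceptional children - the “reversion to type” that Galton interpreted in hereditary and racial terms, and the same linear relationship that sits underneath the regression models fitted by machine learning software today.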
I’m not suggesting that a mathematical algorithm is necessarily racist, even if it was originally invented for racist purposes, but we do need to be concerned that the word “intelligence” is also part of the racist project of eugenics. It soon became clear from the anthropometric projects in Cambridge and elsewhere that measurements of a person’s head or brain size had no correlation with their academic results, or indeed with any other performance measurement of any interest (apart from, presumably, the useful ability to put one’s head into small holes when looking for things).
The importance of intellectual labour, as already analysed by Babbage and Marx, meant that there was great demand for a way to measure intellectual productivity and aptitude. Ideas of scientific government demanded scientific evidence in support of the claims that WEIRD people (usually white people in the Global North) were qualified to tell Black people in the South how to run their lives. Intelligence testing was developed right out in the open as a way of justifying racial colonialism, but it has continued into the late 20th century and even the 21st, long after the other projects of eugenics were condemned. Philosopher of AI Stephen Cave has documented how the word “intelligence” itself was not used to refer to a measurable aptitude until this period of global racism[8]. Cave draws attention to the worrying fact that, since the very idea of mechanically quantifiable intelligence is a racist project, the scientific ambition to replicate that mechanically quantifiable phenomenon in a computer - artificial intelligence - is nothing more or less than artificial racism.
When I realised this myself, a lot of the problems of AI suddenly became clear. It is well known that AI systems are routinely biased, making decisions in ways that are racist and sexist. Most AI researchers believe that this is an accident, and can be fixed through better (unbiased) training data, or even mathematical methods to identify racist bias as a statistical deviation to be corrected. However, scholars of technology who understand the history of race, in particular Ruha Benjamin[9], have fought against those naive assumptions, demonstrating the many ways in which these systems are racist by design, not by accident. Indeed, racism itself is a kind of technology, invented to defend the industrial processes of slavery.
My own relatively limited research into 19th century intellectual history, in particular the origins of machine learning statistics in Cambridge and London, confirms the insights of Ruha Benjamin and Stephen Cave: it is not only the technology designers who intentionally build racist systems to suit their business models. Even academic researchers who invent new AI technologies are continuing the racist 19th century programmes of Galton and others. As Ruha Benjamin says, it is no accident that these systems are racist. They were designed that way, based on science that made them that way[10]. There are still a few people who defend intelligence testing, including in recent scandals where statistical arguments were used to claim that Black people are less intelligent than white people. Perhaps this is consistent with an ongoing colonial logic. If you believe that all good things in the world are created by WEIRD white people, and that the world’s problems will be solved by WEIRD white people continuing to tell poor Black people what to do, then I guess you might think that AI research is going in the right direction and will solve the world’s problems.
On the other hand, if you have started to notice that some of our engineering advances have made the world worse instead of better, you might wonder whether more of the same is such a good idea. As an engineer myself, trained in the continuous tradition of mathematics and computing that has passed pretty seamlessly from the 19th century to the 20th and now to the 21st, I’m certainly thinking that we might need to work differently in future.
I’m a WEIRD person myself. Not only have I had a comfortable life with my white skin, my middle class upbringing, and casual exploitation of the beautiful natural resources in my home country of Aotearoa New Zealand, but I was also lucky enough to grow up in an era when geeky teenage boys obsessed with maths and machines got to be rich celebrities rather than sad outcasts. We have made some progress, in our WEIRD computer science departments, in at least ‘letting some women into the clubhouse’, as Jane Margolis and Allan Fisher put it[11]. At the time I am writing, this too often requires women to put up with casual sexism and to be patient with the questionable fascinations of many AI researchers who want to create machines as their social partners (in the classic Turing test) rather than have meaningful relationships with actual women.
All of this is directly relevant to the problem of Moral Codes. So much of AI is created on foundations of global inequality, racism, and environmental devastation, as chillingly documented by technology scholar Kate Crawford in her Atlas of AI[12]. But the alternatives to AI that I’ve been describing in this book, including computers that are more instructable by regular people, could easily continue to perpetuate the same problems - especially in the hands of people who have no interest in justice or moral codes.
There is a wealth of expertise in HCI and human-centered AI, with research tools and design methods that directly tackle these inequalities. Nicola Bidwell has spent many years working with local and indigenous communities in Africa and Australia, developing design approaches that integrate technology more meaningfully into their lives[13]. Margaret Burnett and her colleagues created the GenderMag(nifier)[14] as a design tool helping engineers to recognise and avoid the gendered assumptions in abstract interfaces like those I described in chapters 8 and 9. Excellent critical scholars who draw attention to the legacies that still constrain such design work include Rachel Adams[15], Abeba Birhane[16] and Shaowen Bardzell[17]. It has been a privilege for me to learn from such friends and mentors, and I refer constantly to their work. Many others also understand the importance of escaping the WEIRD lenses of computer science, perhaps most famously Timnit Gebru, one of the lead authors of the seminal Stochastic Parrots paper, who subsequently established the Distributed AI Research (DAIR) Institute to “Mitigate/disrupt/eliminate/slow down harms caused by AI technology, and cultivate spaces to accelerate imagination and creation of new technologies and tools to build a better future.”[18]
Readers will not be surprised if I advocate new approaches that are less WEIRD. This will require new research methods, probably different from the work done by the teams of innovators mentioned in earlier chapters of this book. Although there have been substantial advances from the WEIRD universities and corporate research laboratories that could be used to make future computer technologies more controllable and explainable, many of those advances have come from old white men like me who had access to the first generation of personal computers in their wealthy schools and universities. I’m thrilled that some of the most recent and important advances, especially those lifting the lid on the worst problems, are now coming from people with Black and brown skins, from women, and from people with queer identities. Although the voices of these people have been essential in helping us to step outside the deplorable mindset of the movie Weird Science and the tech companies that still behave the same way, these more diverse innovators still get pigeonholed as a minority within the WEIRD world of technology, to be assimilated as evidence of statistical inclusion and diversity, rather than recognised as only the first hints of a truly global and equitable knowledge infrastructure.
The next necessary steps are fairly obvious, though this book is not the best place for them, and I am certainly not the right person to be defining them. In my own work, I now do everything I can to collaborate directly with people who live in the countries of the Global South, not only the indigenous Māori of my home country or the indigenous populations of other wealthy white-ruled colonies, but the (usually Black and brown-skinned) people of low-income countries. In recent years, I have particularly appreciated the opportunity to work with computer science researchers in several countries of sub-Saharan Africa, asking what AI would look like if it were invented in Africa. It’s easy to assume that AI would be no different, but after listening to the real priorities of African people, I wonder how it could possibly be the same[19].
This has to go beyond giving African computer scientists permission (and resources) to use the same algorithms and server farms as Facebook, Google or Amazon. Computer science research, including AI, could be asking different questions and setting different priorities. But at present, there is very little encouragement for computer scientists in Africa to do that. If an African scientist makes a proposal that contradicts any standard assumption of WEIRD computer science, it is not popular with the WEIRD peer review community that manages all the conferences and journals. Sadly, the easiest way for a computer science researcher in Africa to have a successful career is to attend a WEIRD university, get employed by a WEIRD company, and work on the same problems that all the WEIRD people do.
I love learning how mathematical reasoning is done differently by local communities with traditional indigenous knowledge in different parts of Africa, and in other countries around the world[20]. There should be ways to program that knowledge, bringing real human benefits of the kinds that I’ve already described in other chapters. But there is no reason to expect that these kinds of knowledge technology should work the same way as the legacy of colonialists and slavers. Indigenous ways of thinking will gain more power if they become the basis of indigenous programming languages that build on concepts of local culture rather than sexism and racism. There is a wonderful international community of indigenous researchers asking the same questions, although mostly among the First Nations populations of WEIRD countries rather than in the Global South.
Nevertheless, they show that there is a future for computer science that follows the agenda described by leading Māori academic Linda Tuhiwai Smith. As she makes clear in her book Decolonizing Methodologies, these would not be tools for WEIRD researchers to extract even more value from the traditional knowledge systems of ex-colonies[21]. It would be genuinely exciting to have creative new ways to express algorithms, growing from more diverse knowledge traditions, for example in the experimental programming language by Jon Corbett that draws on the language and geometry of the Cree Nation[22]. The vast number of different ways that programming could be done, the potential for languages in themselves to support new modes of expression, and the political implications of doing so have been explored by my friends Geoff Cox and Alex McLean in their book for this same series at MIT Press, Speaking Code.[23] I hope we all see a future when the liberating agenda of Speaking Code can be recognised in computer science research programmes arising from the Global South.
Although I am excited by the idea of indigenous programming languages (and perhaps even indigenous AI, though this has to address the problem of how to simulate intelligence, if the original definition of intelligence comes from Western racism), and will do whatever I can to support and advocate for Southern and Black leadership of that enterprise, I will not offer prescriptions in this book for how it ought to be done. I am proud to be associated with the work already being done by my collaborators from the South, to continue to learn from friends and mentors whose work I have mentioned here, and to work with students from those countries, but the most effective leadership and best ideas will come from them, not from what I write here. I’m certainly inspired by projects like the work of Jon Corbett, and by ideas developed by participants in the Indigenous Protocols project convened by Jason Lewis in Hawaiʻi.[24] I hope that readers of this book will find many other examples to draw on in future.
[1] Joseph Henrich, Steven J. Heine, and Ara Norenzayan, "The weirdest people in the world?" Behavioral and brain sciences 33, no. 2-3 (2010): 61-83.
[2] For a survey of the reasons we might doubt this assumption, see Geoffrey Lloyd, Cognitive variations: Reflections on the unity and diversity of the human mind. (Oxford, UK: Clarendon Press, 2007).
[3] Sebastian Linxen, Christian Sturm, Florian Brühlmann, Vincent Cassau, Klaus Opwis, and Katharina Reinecke, "How WEIRD is CHI?" in Proceedings of the 2021 ACM CHI Conference on Human Factors in Computing Systems (2021), 1-14. See also recent work by Advait Sarkar and colleagues at Microsoft Research, "End-user programming is WEIRD: how, why and what to do about it".
[4] Karl Marx, “The Chapter on Capital (Fragment on Machines),” Grundrisse, trans. Martin Nicolaus (London: Penguin, 1973), 690-695, 699-711.
[5] Keith Breckenridge, Biometric state. (Cambridge, UK: Cambridge University Press, 2014).
[6] Michael J.G. Cattermole and Arthur F. Wolfe, Horace Darwin's Shop, A History of the Cambridge Scientific Instrument Company 1878-1968. (Bristol, UK: Adam Hilger, 1987).
[7] Francis Galton, "Regression towards mediocrity in hereditary stature." The Journal of the Anthropological Institute of Great Britain and Ireland 15 (1886): 246-263. For a more extended discussion of statistical terminology see Alan F. Blackwell, "Objective Functions: (In)humanity and Inequity in Artificial Intelligence," HAU: Journal of Ethnographic Theory 9, no. 1 (2019): 137-146.
[8] Cave, "The problem with intelligence."
[9] Ruha Benjamin, Race after technology: Abolitionist tools for the new jim code. (Cambridge, UK: Polity Press, 2019).
[10] Angela Saini, Superior: The Return of Race Science. (London: 4th Estate, 2019).
[11] Jane Margolis and Allan Fisher, Unlocking the clubhouse: Women in computing. (Cambridge, MA: MIT press, 2002).
[12] Crawford, Atlas of AI.
[13] Nicola J. Bidwell, "Moving the centre to design social media in rural Africa." AI and Society 31 (2016): 51-77; Heike Winschiers-Theophilus and Nicola J. Bidwell, "Toward an Afro-Centric indigenous HCI paradigm." International Journal of Human-Computer Interaction 29, no. 4 (2013): 243-255; Nicola J. Bidwell, Tigist Sherwaga Hussan, Satinder Gill, Kagonya Awori, and Silvia Lindtner, "Decolonising Technology Design," in Proceedings of the First African Conference on Human Computer Interaction (AfriCHI'16) (2016), 256-259. https://doi.org/10.1145/2998581.2998616
[14] Margaret Burnett, Simone Stumpf, Jamie Macbeth, Stephann Makri, Laura Beckwith, Irwin Kwan, Anicia Peters, and William Jernigan, "GenderMag: A method for evaluating software's gender inclusiveness," Interacting with Computers 28, no. 6 (2016): 760-787.
[15] Rachel Adams, "Can artificial intelligence be decolonized?" Interdisciplinary Science Reviews 46, no. 1-2 (2021): 176-197; Rachel Adams, "Helen A'Loy and other tales of female automata: a gendered reading of the narratives of hopes and fears of intelligent machines and artificial intelligence," AI and Society 35, no. 3 (2020): 569-579.
[16] Abeba Birhane, "Algorithmic injustice: a relational ethics approach," Patterns 2, no. 2 (2021): 100205; Abeba Birhane, Elayne Ruane, Thomas Laurent, Matthew S. Brown, Johnathan Flowers, Anthony Ventresque, and Christopher L. Dancy, "The forgotten margins of AI ethics," in Proceedings of the 2022 ACM Conference on Fairness, Accountability, and Transparency (FAccT) (2022), 948-958.
[17] Shaowen Bardzell, "Feminist HCI: taking stock and outlining an agenda for design," in Proceedings of the SIGCHI conference on human factors in computing systems (2010), 1301-1310.
[18] https://www.dair-institute.org
[19] Alan F. Blackwell, Addisu Damena, and Tesfa Tegegne, "Inventing artificial intelligence in Ethiopia," Interdisciplinary Science Reviews 46, no. 3 (2021): 363-385.
[20] e.g. Helen Verran, Science and an African logic. (Chicago, IL: University of Chicago Press, 2001).
[21] Linda Tuhiwai Smith, Decolonizing methodologies: Research and indigenous peoples. (London: Bloomsbury, 2021).
[22] Jon M. R. Corbett. “Indigenizing computer programming for cultural maintenance”. In Conference Companion of the 2nd International Conference on Art, Science, and Engineering of Programming (2018), 243–244. https://doi.org/10.1145/3191697.3213802
[23] Cox and McLean, Speaking Code.
[24] Jason Edward Lewis, ed., Indigenous Protocol and Artificial Intelligence Position Paper. (Honolulu, Hawaiʻi: The Initiative for Indigenous Futures and the Canadian Institute for Advanced Research (CIFAR), 2020).
DOI: 10.11573/spectrum.library.concordia.ca.00986506. See also Jason Edward Lewis, Noelani Arista, Archer Pechawis, and Suzanne Kite. "Making kin with the machines." Journal of Design and Science 3, no. 5 (2018). https://doi.org/10.21428/bfafd97b