Chapter 1: Are You Paying Attention?
There are two ways to win the Turing Test. The hard way is to build computers that are more and more intelligent, until we can’t tell them apart from humans. The easy way is to make humans more stupid, until we can’t tell them apart from computers. The purpose of this book is to help us avoid that second path.
Sadly, the temptation for technology researchers and businesses is to take the easy road. Anyone who uses computers a lot will recognise that this has become a basic design principle of many systems we use. I’m writing as a technologist, rather than a philosopher, based on my practical experience of working as an artificial intelligence (AI) engineer since 1985 – nearly 40 years ago. AI has changed a lot since 1985, but humans have not, meaning that this choice between the hard and easy routes has been with us a long time.
Although I have a lot of relevant technical knowledge, dating back to before leading AI experts of today were even born, this book is intended for a non-technical audience. I’m going to explain fundamental ideas in an accessible way, emphasising the aspects that are important from a human perspective. These explanations will not always agree with the mainstream views of the AI research community, and will almost certainly be different to the promotional messages (and the warnings) that come from large AI companies. This should not be a surprise, because the success of so many business models depends on customers being made to appear more stupid, and on both customers and workers not realising what is being done to them.
Human bodies may be specified by the code of DNA, but our conscious lives are increasingly specified in software source code. This makes software design a moral problem. The self-described “anarchist anthropologist” David Graeber, in his books Bullshit Jobs and The Utopia of Rules, draws our attention to how many supposed advances of information society have actually made people’s lives worse. Graeber’s work inspired many young people to take direct action, not least through movements like Occupy Wall Street and Extinction Rebellion, and his writing has helped a new generation to understand the underlying reasons why economic and political systems turn out the way they do. However, one thing that Graeber did not pay so much attention to is the software engineers who design and build IT systems. Sometimes those people are just following orders from elsewhere. But sometimes the logic of software development seems to lead to new business models, inequalities, bureaucracies, and dysfunctional societies, even worse than the ones we started with. As the critic of sociotechnical infrastructure Geoffrey Bowker puts it, “bureaucracy makes us act like machines, algorithms seek to make us into machines”.
Decades ago, we were promised that robots and computers would take over all the boring jobs and drudgery, leaving humans to a life of leisure. That hasn’t happened. Even worse, while humans keep doing bullshit jobs, AI researchers work to build computers that are creative, self-aware, emotional, and given authority to manage human affairs. Sometimes it seems like AI researchers want computers to do the things humans were supposed to enjoy, while humans do the jobs that we thought we were building the robots for.
How did we get into this situation? In this book, I’m going to argue that it’s the result of a fundamental flaw in the research agenda of AI. AI researchers have confused philosophy and engineering. As a result, what started out as a fairly innocent philosophical question has turned into a global industry. The bad guys in this story are AI engineers and research programmers. I know this, because I spent years working as one of them. This book is an attempt to fix the problem, by explaining how we can make software better. I don’t mean faster, cheaper, more reliable, efficient or profitable. The software engineering industry is already pretty good at all those things. All are important, but there are many other books explaining how to design efficient and profitable software. This book is about how to design software that is better – better for society, better for people, better for all people – even if the result might be slightly less efficient overall, or less profitable for some.
Why are we making software we don’t need?
Software developers and computer scientists like me are fascinated by what computers can do. Our imagination and enthusiasm have resulted in IT systems, personal devices and social media that have changed the world in many ways. Unfortunately, imagination and enthusiasm, even if well-intentioned, can have unintended consequences. As software has come to rule more and more of the world, a disturbing number of our social problems seem to be caused by software, and by the things we software engineers have made.
Perhaps the original sin, in the story of how computer software has evolved from an intellectual challenge to a social nightmare, is Alan Turing’s “Imitation Game”, now famous in computer science, AI research, and popular media as the Turing Test. Turing proposed the test in his 1950 paper “Computing Machinery and Intelligence”, which starts by defining very clearly what problem he was trying to solve:
I PROPOSE to consider the question, ‘Can machines think?’ This should begin with definitions of the meaning of the terms ‘machine’ and ‘think’. The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous. If the meaning of the words ‘machine’ and ‘think’ are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, ‘Can machines think?’ is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words.
The new form of the problem can be described in terms of a game which we call the ‘imitation game’. It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart from the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman.
The intriguing thing about Turing’s introduction to this famous paper is the way that, despite Turing’s own training as a mathematician, he grapples here with problems of metaphysics (the nature of mind), the social shaping of science (the absurdity of popular opinion advancing scientific debate), and questions of gender (a haunting augur of Turing's persecution by the British government).
Turing’s paper is a remarkable piece of work, and still an excellent introduction to many central questions in AI, including some that I will return to in this book. But it is also a work of imagination, in a very long tradition of stories that raise intriguing questions about what it means to be human. We are fascinated by stories of animals or trees that talk, human children who grow up with no language, or dolls that move but have no soul, because all of these cross the boundaries of what we consider distinctively human. And the dream that we might create a machine or statue that behaves like one of us is also an ancient one. Building a talking robot in our own likeness is not only a path to self-understanding (assuming we understand what we have built), but a fantasy of power, potentially resulting in the perfect slave or the perfect lover. The fantasy has recurred over centuries: the Golem, Frankenstein, the clockwork automata of the 18th century, the original Pygmalion whose statue comes to life, not to mention Rossum’s Universal Robots, the robot Maria in Fritz Lang’s Metropolis, and any number of famous robots, sexy and otherwise, in movies like Ex Machina, 2001 or Terminator. A good definition of AI is the branch of computer science dedicated to making computers work the way they do in the movies.
Those kinds of story don’t usually end well, although for reasons that relate to the logic of storytelling rather than engineering (we should never forget that these supposed “inventions” are, before anything else, fictional devices). To follow storytelling logic, someone has to be punished for hubris, or the anxiety of the audience must be resolved with reassurance that they are uniquely human after all. The logic of plot structure, while possibly offering philosophical inspiration, has no practical relevance to engineering questions.
Unfortunately, when engineers are building real systems, the moral problem of the Turing Test is more fundamental, and not so easy to recognise and resolve as in fictional narratives. If we did succeed in making a system that was indistinguishable from a human, even in a small way, what value would this bring? The typical answers in public debate have come from business perspectives. Some people regret that a worker has been displaced by a machine, while others celebrate how the business has benefited from cheaper work. In practice, robots are often more expensive than humans, not cheaper, so the real benefit may be that they feel no pain, are less likely to make mistakes, or perhaps that they are not eligible to join unions. The management of factory automation is an economic, social and ethical problem, with the precise details of each generation of technology somewhat irrelevant, meaning that I won’t discuss these particular questions very much further.
Of course Turing was not a manufacturer or a management consultant. His test was a thought experiment, not a business plan. Thought experiments are valuable tools in science and philosophy, but it’s foolish to confuse them with engineering. Einstein famously imagined an elevator being pulled through space, but Elon Musk is not (yet) suggesting that we should replace rocketry with interstellar elevators.
The two kinds of AI
There is an important technical distinction that needs to be clear at the start of this book, and that I will return to in later chapters. The term AI has become very familiar, associated with all kinds of medical and scientific advances, whose implications are debated almost daily in newspapers and by policy makers. Although there are many kinds of AI system, addressing all kinds of problem, journalistic and political discussions tend to assume that they are all fundamentally the same. I want to make clear that there are two fundamentally different kinds of AI, when we look at these problems and systems from a human perspective.
The first kind of AI used to be described as “cybernetics”, or “control systems” when I did my own undergraduate major in that subject. These are automated systems that use sensors to observe or measure the physical world, then control some kind of motor in response to what they observe. A familiar example is the home heating thermostat, which measures the temperature in your house, and automatically turns on a heating pump or boiler if it is too cold. Another is automobile cruise control, which measures the speed of the car, and automatically steps on the gas if it is too slow. One or two hundred years ago, such mechanisms seemed like magic, making decisions as if they were intelligent. Mechanical clocks and steam engines were the earliest machines to use closed-loop feedback regulators to maintain their speed, and our forebears talked about clockwork and steam like we talk about AI today, as if machines were coming to life or becoming human. Connecting observation to action, and making “decisions” from what the machine has “remembered” from its “instructions” or “training”, all seem like signs of intelligence. But when we think realistically about a familiar object like a thermostat, it’s clear that all this human-like terminology – of learning, remembering, deciding and so on – is only a poetic analogy for devices that are really rather simple compared to humans.
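To see just how simple such a device really is, here is a minimal sketch, in Python, of the kind of feedback rule a thermostat applies. The function name, setpoint and hysteresis values are invented for illustration, not taken from any real product:

```python
# A minimal sketch (not any real product's code) of the feedback rule
# described above: measure the temperature, compare it to a setpoint,
# and switch the heater accordingly.

def thermostat_step(measured_temp, heater_on, setpoint=20.0, hysteresis=0.5):
    """Return True to run the heater, False to stop it.

    The hysteresis band stops the heater flickering on and off
    every time the temperature crosses the setpoint.
    """
    if measured_temp < setpoint - hysteresis:
        return True               # too cold: heat
    if measured_temp > setpoint + hysteresis:
        return False              # warm enough: stop
    return heater_on              # inside the band: leave it as it is
```

The whole “intelligence” of the device amounts to a comparison and a switch; describing it as “deciding” is already a poetic stretch.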
The second kind of AI is concerned, not with achieving practical automated tasks in the physical world, but with imitating human behaviour for its own sake. The first kind of AI, while often impressive, is clever engineering, where objective measurements of physical behaviour provide all necessary information about what the system can do, and applying mentalistic terms like “learning” and “deciding” is poetic but misleading. The second kind of AI is concerned with human subjective experience, rather than the objective world. This is the kind of AI that Turing proposed as a philosophical experiment into the nature of what it means to be human, inspired by so many literary fantasies. The goal is to imitate human behaviour, not for some practical purpose like making a machine run more efficiently, but to explore the subjective nature of human experience, including our experience of relating to other humans.
Many public discussions of AI do not acknowledge this distinction between the objectively useful and (sometimes) straightforward engineering of practical automated machinery, and the subjective philosophical enterprise of imitating the way that humans interact with each other. This is in part because a few mathematical approaches have recently turned out to be useful in both kinds of problem. These are known generically as “machine learning” algorithms, and I will go on to discuss them in more detail, though it must be remembered that “learning” is only a poetic analogy to human learning, even when carried out by “neural networks” which, despite their poetic name, in reality bear very little resemblance to the anatomy of the human brain. Machine learning algorithms are useful statistical methods, increasingly included as a component in all kinds of software, including both control systems and imitation of humans (especially imitation of human language in chatbots, as I will be discussing in great detail). However, the failure to distinguish between objective and subjective AI is only partly explained by the fact that some of the algorithms being used might be similar. I suspect that a large part of the confusion between the two kinds of AI is intentional, muddying the waters between the first kind, which results in useful automation, and the second, whose usefulness is more doubtful, or even harmful in many cases.
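Since “learning” here means statistical estimation, a toy example may help demystify the word. The sketch below, my own illustration rather than any standard library method, “learns” a straight-line relationship from example data by ordinary least squares – no neurons, no understanding, just sums:

```python
# "Learning", in the machine learning sense, is statistical estimation.
# This sketch "learns" the line y = slope * x + intercept from example
# data by ordinary least squares.

def learn_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data that happens to follow y = 2x + 1 exactly:
slope, intercept = learn_line([0, 1, 2, 3], [1, 3, 5, 7])
```

Modern machine learning systems fit millions of parameters rather than two, but the activity is the same in kind: adjusting numbers to match past data.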
The motivation for intentionally confusing the two kinds of AI often comes from the profit opportunities associated with the more harmful cases, as documented comprehensively by Harvard business professor Shoshana Zuboff in her book on Surveillance Capitalism, and by Nick Couldry and Ulises Mejias in The Costs of Connection, their explanation of How Data is Colonizing Human Life and Appropriating it for Capitalism. Confusing useful AI with harmful AI, or even claiming that they are essentially the same thing, is an effective way to avoid questions from regulators who might otherwise insist on technology that would be less harmful, but may also be less profitable. A popular way to make the argument that these two kinds of AI are really the same (or that they could become the same in future even if different today) is to invoke the speculative brand of “artificial general intelligence” (AGI). This idea interprets the Turing Test as an engineering prediction, arguing that the machine “learning” algorithms of today will naturally evolve as they increase in power to think subjectively like humans, including emotion, social skills, consciousness and so on. The claims that increasing computer power will eventually result in fundamental change are hard to justify on technical grounds, and some say this is like arguing that if we make aeroplanes fly fast enough, eventually one will lay an egg.
I am writing at a time of exciting technical advance, when Large Language Models (or LLMs) such as ChatGPT are turning out to be useful for many things, and are increasingly able to output predictive text that makes them seem surprisingly like an intelligent human. I am following (and participating in) those debates right now, but there is not much to be said at this stage. A succinct summary of the real power of LLMs comes from eminent AI professor Rodney Brooks, who says “It just makes up stuff that sounds good”. A more technical explanation of how LLMs “make up stuff,” by imitating human language using predictive text, and how the things they do are different from intelligence, is provided by AI researcher and philosopher Murray Shanahan. I’ll be returning to those questions in future chapters, so this note is just to let you know that I’m aware of the debates, and don’t see them invalidating the main point of the book (which is not really about AI at all).
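The point about predictive text can be made concrete with a toy model. The sketch below is my own illustration, not how any real LLM is built: it counts which word follows which in a sample sentence (invented for this example), then generates by always choosing the most frequent successor, with no check against truth or meaning:

```python
# A toy version of "predictive text". Real LLMs are vastly larger and
# subtler, but they too predict likely next words from statistics of
# past text, rather than consulting reality.

from collections import Counter, defaultdict

def train(words):
    # Count, for each word, which words follow it and how often.
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length):
    word, output = start, [start]
    for _ in range(length - 1):
        if word not in follows:
            break                                  # dead end: stop early
        word = follows[word].most_common(1)[0][0]  # most likely next word
        output.append(word)
    return " ".join(output)

model = train("the cat sat on the mat and the cat slept".split())
```

Everything this model produces “sounds like” its training text, whether or not it is true – which is Brooks’s point in miniature.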
My own view of such research programmes is that, as with other ways to win the Turing Test, the only way to succeed would be by making human emotions and social skills more stupid – and we might ask whether this has already happened in AI-driven social media platforms and the substitution of chatbot sessions for human conversation. As science and technology scholar Harry Collins puts it in the subtitle of his book Artifictional Intelligence, this is the problem of humanity’s surrender to computers, and technology critic Jaron Lanier put his finger on the problem of designing products this way in his manifesto affirming that You Are Not a Gadget.
The moral imperative for code
These seem like important questions to me, and I hope also to you. However, they are rather different to the academic concerns of the rapidly expanding specialist community dedicated to AI Ethics. I won’t be saying very much about how to deploy information systems that are fair, transparent, accountable and so on, although I hope that some of the technical approaches I describe later in the book might have those benefits as a by-product. I’m also not going to say very much about policy and regulation, or codes of governance and law, which are far better attended to by politicians, perhaps guided by philosophers and legal scholars, but probably not acting on instructions from AI companies and researchers.
Software companies are already obliged to be fair, transparent, and accountable to some extent via legal codes – the laws of the countries where they operate. Of course, not all software companies follow the laws of all countries. As legal scholar Lawrence Lessig says, in practice, Code is Law. The actual operation of a software system is ultimately determined by its source code, whether or not legislation says the same thing. Computer programs are no more or less than a system of rules about how to behave, and any questions about the moral basis of what a program does must look first at those rules. As criminologist Per-Olof Wikström says, “Moral action is any action that is guided by (moral) rules about what it is right or wrong to do, or not do, in particular circumstances.” Source code – the rules about what a computer does in particular circumstances – becomes a moral code, when it articulates questions of good and bad.
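To make this concrete, here is an invented fragment of the kind of rule that quietly embodies a value judgement. Nothing here comes from any real system – the function, thresholds and policy are all hypothetical – but rules of exactly this shape decide real outcomes every day:

```python
# An invented example of source code as moral code. The thresholds and
# policy below are hypothetical, but rules like these decide real outcomes.

def approve_refund(amount, hours_spent_on_hold):
    # Rule 1: small refunds are granted automatically.
    if amount < 10:
        return True
    # Rule 2: larger refunds go only to customers persistent enough to
    # wait on hold - a judgement about whose time matters, written as code.
    return hours_spent_on_hold >= 2
```

Whoever wrote those two rules made a moral decision about how customers deserve to be treated, whether or not they thought of it that way.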
In the fictional universe of AI robots, the most famous speculation about the problem of encoding ethical constraints has been Isaac Asimov’s Three Laws of Robotics: that a robot must not harm a human being, that it must obey orders if not in conflict with the first law, and otherwise must protect itself. The potential conflicts between these priorities, and narrative plots that turn on resolving them, were a feature of many of Asimov’s stories, including the hit movie I, Robot based on them. Among today’s philosophers and computer scientists, stories about the robots of the future often focus on the “alignment problem” – how could we know whether an AGI has the same values that we do, and what equivalent of Asimov’s Three Laws could be built in to protect us from evil?
The fictional problem of far-future robot “alignment”, as an alternative to today’s pressing but technically mundane problems of business ethics and government regulation, might actually be a convenient distraction for the large companies that are building and investing in AI. The robot characters in Asimov’s books take responsibility for their own actions when resolving the implications of the Three Laws in a tricky plot twist. Similarly, much talk about alignment is based on the premise that an independent AGI actor should in future be able to take moral responsibility for its own decisions and actions, rather than placing responsibility with the company which made it, or the moral decisions made by the designers and engineers who work there.
In reality, the software that controls digital systems and operates technology corporations is made using code. Many software systems are used in unethical ways, are designed for unethical purposes, or perpetuate structural injustices including racism and misogyny. But in tracing the problems of software ethics, we should look at the tool users, rather than the tool. We don’t ask if a screwdriver is unethical, and there are no conferences on carpentry ethics. There are clearly machines and buildings that have been designed unethically and for unethical purposes, but attempting to blame the tools for being unethical would be an obvious misdirection. I am pleased that there are now large communities of scholars interested in the problems of designing software in more ethical ways. But these scholars should attend to how software is built and used today, clearly attributing responsibility to the tool-user rather than the tool, and probably not spending too much time discussing the plots of science fiction novels and their robot characters.
Attending to programming
This is not a book about AI ethics, software industry regulation, or responsible system design. In all of those fields, tools may have some relevance to the discussions, but the tools are secondary to what they are used for. In contrast, my alternative to AI does involve paying closer attention to the tools used to make software, and in particular to the opportunities and ways of thinking that tools provide. Tools can certainly be dangerous, if used in the wrong ways and by the wrong people. But tools also offer a space for participation, creativity and debate. If a company starts selling machines that are dangerous or harmful, there is little point in regulating screwdrivers. A more effective response is to ensure that more people know how these tools could be used, so that alternative actions are clear. Furthermore, when these are knowledge tools rather than physical ones, then the tools themselves could be designed in ways that offer opportunities for open participation and debate about what is being created.
The imagined problems of “alignment” are really questions about knowledge-tools. What knowledge-tools could we use, to describe digital systems that are more controllable, and that help us explain what we want the software to do? Legal codes might be used to prosecute the consequences of bad software, but the actual behaviour of the software is defined in its source code, not the legal code. Therefore, the kinds of tool that I think we need to pay closer attention to are the programming languages in which source code is written. The ability to instruct computer systems, rather than surrender agency to the companies that make them, is the moral purpose that underpins better systems of AI ethics and regulation. As Geoff Cox and Winnie Soon say in their instructional guide to creative coding as political resistance: “program or be programmed”.
I’ve explained why attending to the act of programming might be a moral priority, placing a focus on who uses the tools and why. With appropriate tools, programmers can express and act on systems of values that shape all of our actions. But we also need to understand why and how being programmed is the opposite of moral agency. As an alternative to humans becoming machines, what does it mean for a person (not a machine) to be conscious, to be awake, to be aware, to attend to your surroundings, to be attentive to those around you, attentive to a child, a teacher, colleagues, your cousins or grandparents, to be alert and embodied, attending to and attending in your body? These things are the essence of being human - being attending and attentive beings, paying attention. Conscious attention is the moral centre of human life - we live to enjoy the alert ownership of bodily sensation, having the freedom to attend to ourselves and those we love. If we get programmed, we lose that freedom of attention. If our consciousness is confined by attending to mechanical tasks, we are no longer free to give the gift of attention, or to grow and develop through more fully knowing ourselves.
The attention economy of the information age takes this central moral aspect of being human – our own conscious attention – and turns it into a commodity to be traded. As Nobel prize-winning economist Herbert Simon famously wrote: “In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention.”
Every one of us has a finite number of conscious hours in our life for meaningful attending and attention. Whenever a person is made into a conscious component of a machine, their capacity for meaningful attention gets reduced to the mechanical role they play, whether a call-centre worker, delivery driver, shipment packer, or an online ghost worker checking social media posts for offensive content. Social media platforms even turn their users/customers into mechanical components, reducing the richness of meaningful human attention to the simplest mechanical form, whether the few words allowed in a tweet, a pictorial meme to be seen at a glance, or the trivialised robotic responses of a “like,” a “follow,” or an emoji.
In contrast, when humans act as moral agents, they are responsible to, and conscious of, themselves. A moral agent acts intentionally, through deliberation. Each of us becomes and grows as a person through conscious attention and reflection on the life-story that we tell. As Per-Olof Wikström says, human free will involves conscious choice between the opportunities for action in any situation, within the moral rules of conduct that make up human society. Different human societies have different codes of law and ethics, but every society is a community of moral actors, and moral action depends on conscious attention.
What does an economist like Herbert Simon mean by “wealth of information” and “poverty of attention”? Tradable commodities deliver profits in proportion to numbers: quantity, not quality. The companies in the attention economy trade directly in the number of hours out of your life that you spend attending to their screens. Their profit is maximised by the hours you watch. The company also loses money if its staff spend hours of their own valuable time listening to you. That’s why digital technologies so often make their customers more stupid by design, because the company doesn’t want you to say anything sufficiently intelligent that it would require a human to interpret. They would rather you spend as many hours as possible attending to an automated system – whether a social media feed or an AI chatbot – instead of having a meaningful conversation with one of their employees.
That’s why the title of this chapter asked “are you paying attention?” Moral code is about paying attention to what you say, investing attention in your own needs, and giving attention to those around you. A focus on moral codes is about being a conscious human, who is able to make moral choices, and not about trying to make a conscious computer. Unfortunately, the dynamics of the software industry work the other way, winning the Turing Test by making people less conscious, in a steady tide of mechanised trivialisation. Each wave of that tide promises convenience, simplicity and usability, while actually making us compliant consumers of the business logic, government logic, or just plain logics of racism, sexism, ableism, classism that seem to get systemically embedded, even if starting from good intentions, in every bureaucracy.
Could super-intelligent AI code itself?
I’ve already observed that one of the worst distractions is the attempt to avoid the urgent problems of today, by instead debating the problems of the far future. Getting people to attend to the future rather than the problems of today can only be done if they are persuaded that the future is more urgent, for some reason. A common theme in the discussion of AI, and one that seems to be especially popular among AI researchers and the companies that employ them, is to imagine that artificial general intelligence will turn out to be an existential risk to the human race by becoming more intelligent than humans – “superintelligent” – applying its unlimited mental abilities to design itself to become even more intelligent, evolving beyond our control, and wiping us out.
The problem with this supposed existential risk is that it comes from a fictional premise. If I imagine a super-AI that, like Superman, can do anything at all, then of course my imaginary software can use its superpowers to do anything in my imaginary world, including writing the code to create itself. Like a fairy story character whose final wish to the genie is for another three wishes, the logic is appealing in stories, but impossible in real life. The circular argument of superintelligence creating itself is only possible by avoiding a clear definition of what intelligence is, and also avoiding the question of what kind of code might be needed to write down that definition. It’s reminiscent of St. Anselm’s proof of the existence of God, which defines God as both unknowable and perfect, concluding that God couldn’t be perfect if He didn’t exist.
The magical powers of genies and deities are not precisely defined in stories, because clear definitions would limit the power of the narrative device. If anyone did commit to a technical definition of AGI, the contradictions would become clear, because the definition itself has to be coded in some way. Of course the answer can’t be a circular one – you can’t say that the necessary code definitions will be written by the AGI itself! Intelligence that could manifest itself in any way, without the inconvenience of a clear definition, is a plot device like the shape-changing T-1000 of Terminator 2, which might be described as an “artificial general body” in the way it is unlimited by any definition of shape, and thus able, in the imaginary universe of the movie, to become anything at all.
Alan Turing anticipated the philosophical dangers of speculating about undefined capabilities, which is precisely why his thought experiment proposed to test intelligence only by comparison to a human. If the best performance in the Imitation Game is to behave exactly like a human mind, then how could super-AI be even more like a human than that? Many discussions of this problem for popular audiences try to sidestep it by slipping in other undefined terms, for example casual words like “smart”, to suggest that computers would be dangerous if they were smarter than humans, but never pausing to say whether “smart” is the same thing as “intelligent”, and whether they are still talking about imitating what humans do, or doing something new but different. That final possibility is, of course, what computers have already become – very good at doing something else, but never, as it turns out, the same kind of things that humans do. All of these considerations show how important it is to be clear about the difference between the two kinds of AI: practical automation versus fictional imitation.
In practice, all computer systems, including AI systems, do the things that we define them to do, where the definitions are written in computer source code. In many cases those systems greatly surpass human abilities (remembering millions of bank balances, or doing billions of sums very quickly), but those mechanical kinds of “superintelligence” are neither more nor less threatening to us than “superhuman” inventions like earth-moving machinery or machine guns. Such things are powerful, and might certainly be dangerous, but are not magical. Everything that computer systems do is defined by their code, just as physical machines are defined by their physical design, components and materials.
AI companies find it easy to sponsor academic debates about magical technologies that might exist in the distant future, addressing ethical problems that would only become relevant a century or two after the sponsors have enjoyed a luxurious and untroubled old age. It also doesn’t hurt business to advertise the possibility that your company’s technology might be so immensely powerful that it is nearly magic, so powerful that governments should tremble at your words. If the intelligent products of the future promise to deliver science fiction, or even magic, access to this immense potential might seem like an excellent investment, especially to shareholders or policy-makers who don’t fully understand technology, and believe that science fiction or magic will come true one day.
Neural network pioneer Geoff Hinton, in recent public lectures following his high-profile resignation from Google, has explained that the danger he is most concerned about is not super-rational AI, but rather automated language generators that devalue political debate by making up facts, abandoning logic, and provoking people to violence through repeating lies rather than considering rational alternatives. This kind of irrational but persuasive language is easy to construct with AI, far easier than building super-intelligence. The resulting behaviours may not be super-intelligent (or even intelligent at all), but they certainly are dangerous, as demonstrated in the political careers of Vladimir Putin, Donald Trump, and many other demagogues.
Public debate about the far-future dangers of AGI, encouraged by government commissions and corporate ethics boards, and marketed alongside fantasy movies about people falling in love with sexy computers or battling killer robots from the future, seemed to reach its highest pitch at the same time as AI companies and researchers were creating serious ethical problems in the present day. In the era of big data and deep neural networks it became increasingly clear that AI systems were exploitative, biased, sexist and racist, through their use of machine learning techniques that had been coded to capture and replay existing prejudice.
This was a very different, sadly more mundane, problem than the plot devices of science fiction and magical stories. The reason for the present-day ethical problems was precisely opposite to the fantasies of the undefined “general” minds and bodies. The Turing Test definition of AI relies on an abstraction of intelligence, seen as behaviour that is supposed to look the same whether implemented in a machine or in a human brain. But feminist scholars like Kate Devlin and Olivia Belton, drawing on N. Katherine Hayles, observe that very few people have the luxury of defining their intelligence as independent from their own bodies. Privileged white men might believe that people listen to their ideas without paying any attention to the speaker’s body. But women, people of colour, and those who are poor, low-caste or speak the wrong language understand that the privilege of defining intelligence in such an abstract and disembodied way is only available to those who already have the “right” kind of body themselves.
As I discuss in more detail in Chapter 13, philosopher Stephen Cave explains how the scientific history of “intelligence” itself was invented alongside the other principles of measuring and comparing human bodies, driven by the need for evidence that could justify a scientific theory of race as a basis for eugenics, genocide, and slavery. If the notion of measurable intelligence is fundamentally a racist and sexist principle, then is the notion of immeasurable super-intelligence simply artificial super-racism and super-sexism? As AI researchers increasingly apologise for the racist and sexist biases of their systems, companies and regulators behave as though the biases can be designed away. But if the whole technology is defined on immoral principles, is it possible that we are starting from the wrong place altogether?
How can we re-code AI?
This introduction has explained why much of the current debate about the ethics of AI has been looking the wrong way, whether through creative magical thinking or purposeful distraction and misdirection. The rest of the book is not fundamentally about AI ethics, though it does make many suggestions on how recent inventions of AI researchers could be more usefully applied. Rather than dwelling on the (very real) injustices being constructed and reinforced through deployment of AI, and not spending any time at all speculating about magical technologies of a fictional distant future, my intention is to be optimistic, offering concrete suggestions on how we can make software better in other ways.
In particular, this book directly addresses the question of how we can communicate with computers, to explain what we want them to do. For leading AI researchers such as Stuart Russell, problems faced by (or created by) AI might be solved using more AI. According to that reasoning, if a computer is doing things that are not beneficial to the human race, the computer needs to use better AI methods, learning how to observe humans even more closely, so it can learn better what our needs and preferences are, and then learn how to do things that might be most beneficial to us. The obvious alternative is seldom discussed. Rather than waiting for a computer to figure out what I might want by watching me, why don’t I just tell it what I want? Even after 40 years’ marriage, my wife says she finds it difficult to work out what I would like her to do, if I don’t actually say anything. Why would I expect a robot to do better?
The real priority for future computers should be not how they might figure out what we want when we’re not saying anything, but rather what language we should speak if we were going to tell them what to do. By describing these languages as Moral Codes, I want to draw attention to the importance of programming languages, but also to how the power of programming languages could become more widely available, and how more kinds of interaction with computers could gain the power and control available through programming. Although AI researchers use programming languages all the time, to create the systems that aim to control others, there’s an unspoken assumption that this elite privilege couldn’t realistically be extended to regular humans. My question challenges that assumption: what new kinds of code might make a moral alternative to AI feasible? I did not originally invent the phrase Moral Code with the intention of making it an acronym, but there is a useful mnemonic for how both programming languages and AI might be improved, if they were More Open Representations, for Access to Learning, and Creating Opportunities for Digital Expression.
This book could be considered as a product design manual, to be read alongside the work of insightful critics like David Graeber, Kate Crawford, Geoffrey Bowker, Shoshana Zuboff, Harry Collins, Nick Couldry and Ulises Mejias. Although it does introduce some theory, the intention is to offer engineering principles, applicable by software developers, to make software genuinely better in ways that AI doesn’t.
Unfortunately, it’s not easy to combine practical advice with new (philosophical) ways of thinking, and there is a danger that the attempt to do so won’t make anybody happy. Where I work in Cambridge, this is the famous problem of the Two Cultures – arts and humanities (and social sciences) on one side, with engineers and scientists on the other. British public life is constantly screwed up by the Two Cultures, in part because simple numeracy is assumed to be on the side of science, making it difficult to have sensible discussions about the climate crisis, pandemics or the economy whenever simple maths is needed. As a result of this divide, public debates about AI tend to focus either on the need for regulation, or on the potential for engineering fixes that might be bolted on to the latest inventions, but do not ask the more important definitional questions of how AI is actually created (with code), or what else we might be able to build if we could redefine the knowledge tools of code itself.
American computer scientist turned media theorist Philip Agre, after completing his PhD at the MIT AI Lab, succinctly diagnosed the problem being faced by the whole field. He observed how AI researchers in those laboratories were actually doing covert philosophy. Sure, they did engineering work and coded software prototypes. But their explanations of what they were doing related to “learning”, “knowledge”, “thinking” and other philosophical categories, just like the questions raised in the introduction to Turing’s famous paper. This would perhaps be OK, if the researchers had any philosophical training. But in fact, those talented AI engineers understood very little about the concepts they were supposedly researching. Agre’s classic essay Toward a Critical Technical Practice: Lessons Learned in Trying to Reform AI explains the urgent need for researchers who are able both to build sophisticated software and to understand sophisticated arguments. Where AI researchers had been doing covert philosophy, Agre argued that this needed to become overt: code must be open, and theory must be coded, without layers of wishful thinking, magical fantasy, or speculation about dystopian futures.
This is the real goal of the book you are reading – a guide for those who are prepared to approach software design as a critical technical practice, requiring understanding of both technical and philosophical problems, to provide Moral Codes that respect human attention and also deserve it. I believe this is the only way to make software that is genuinely better – not just efficient and profitable (for which many books already exist), but better for people and for the world.
How to read this book
I’ve written this book to be easy to read from start to end, and if you’ve got this far, you may like to continue. But if you prefer to dip in, many chapters focus on one theme in a reasonably self-contained way, referring to other chapters for background where needed. The academic content is broadly the background to human-centred AI: human-computer interaction (HCI), especially the sub-fields of intelligent user interfaces (IUI), critical HCI, visualisation, end-user development (EUD), and psychology of programming. It also touches on software engineering, software studies, and critical history of computing. All are interdisciplinary fields, and the range of references may seem scattered, or difficult to follow if you are new to computer science. In that case, reading from start to finish may be the best way to get a coherent story.
For those who want to explore specific themes, Chapter 2 gives examples of early machine learning methods for controlling computers more effectively. Chapter 3 explains the technical differences between coding and machine learning algorithms. Chapter 4 explains algorithms that imitate human language, and Chapter 5 investigates the implications of where that language comes from. Chapter 6 explores the historical foundations of human-centred computing, when code was evolving into the first graphical user interfaces. Chapter 7 considers forgotten lessons from the Smalltalk language, which was associated with the very first personal computer, the birth of Wikipedia, object-oriented programming and Agile management. Chapter 8 is a fast-forward to current research in intelligent interactive tools, including recent projects from my own team, and their contributions to new products emerging in major companies. Chapter 9 shows how fashions in user interface and app design often benefit software companies rather than empowering users, including the “flat design” trend of the past decade. Chapter 10 investigates the essence of programming, as needed to make better progress in future, and Chapter 11 considers the way that generative AI can help with improved coding tools. Chapters 12 and 13 explore important specialist questions - the use of AI and coding tools for creativity, and the problems of AI and code that come from focusing only on the needs of people in the wealthy West and Global North. Chapter 14 introduces design principles to invent new kinds of code, and to improve those we already have. After a short conclusion, I’ve included an appendix suggesting further reading, from the 40 years of my own research that form the technical foundations of this book.
For a thematic overview, I’ve provided a map in Figure x.x, showing which chapters relate to which aspects of the Moral Codes agenda.
Fig x.x - a map of Moral Codes, relating the chapters to the themes “More Open Representations” and “Access to Learning”. The chapters shown are “When AI makes code”, “Why code isn’t AI”, “How language codes”, “What language codes”, “Chapter 6: Being at home in code”, “Chapter 7: Lessons from Smalltalk”, “Making better codes”, “Beyond flat design”, “The craft of coding”, “AI as a coding tool”, “Codes for creativity”, “Chapter 13: Making code less WEIRD”, and “Chapter 14: Re-imagining code”.
 Mike Monteiro, Ruined by Design: How Designers Destroyed the World, and What We Can Do to Fix It. (San Francisco, CA: Mule Design 2019)
David Graeber, Bullshit Jobs: A Theory. (London: Penguin Books, 2019); David Graeber, The Utopia of Rules: On Technology, Stupidity, and the Secret Joys of Bureaucracy. (London: Melville House, 2015).
 Geoffrey Bowker, foreword to The Constitution of Algorithms: Ground-Truthing, Programming, Formulating by Florian Jaton. (Cambridge, MA: MIT Press, 2021), ix
 Kate Crawford, Atlas of AI (New Haven CT: Yale University Press, 2021), 7
Alan M. Turing, "Computing Machinery and Intelligence", Mind 59, no. 236 (October 1950): 433–460, https://doi.org/10.1093/mind/LIX.236.433
Despite Turing’s view that an opinion poll would be an absurd way to approach this research question, there have since been many serious attempts to gain insight through opinion polls - albeit polls of AI experts rather than the general population. A considered review of notable examples (alongside results from a further poll) is provided by Vincent C. Müller and Nick Bostrom, “Future progress in artificial intelligence: A survey of expert opinion”, in Fundamental Issues of Artificial Intelligence, ed. Vincent C. Müller (Berlin: Springer, 2016), 553-571.
 See Andrew Hodges, Alan Turing: The Enigma. (New York: Vintage Books, 1983).
Jessica Riskin, "The Defecating Duck, or, the Ambiguous Origins of Artificial Life," Critical Inquiry 29, no. 4 (2003): 599-633.
Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. (London: Profile Books, 2019).
Nick Couldry and Ulises A. Mejias, The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism. (Redwood City, CA: Stanford University Press, 2020).
 Glenn Zorpette, "Just Calm Down About GPT-4 Already. And stop confusing performance with competence, says Rodney Brooks," IEEE Spectrum (17 May 2023). https://spectrum.ieee.org/gpt-4-calm-down
Murray Shanahan, "Talking About Large Language Models" (2022), arXiv preprint arXiv:2212.03551.
Harry Collins, Artifictional Intelligence: Against Humanity's Surrender to Computers. (Cambridge, UK: Polity Press, 2018).
Jaron Lanier, You Are Not a Gadget. (New York: Vintage, 2010).
 I am writing this at a time when prominent researchers and founders of AI companies are, almost every week, issuing new warnings and demands for international regulation. Commentators from outside the technology business are rightly sceptical about the motivations involved, and I will discuss some of the key dynamics later in this chapter. Nick Couldry suggests that investment in understanding these models is urgently needed by the technology companies themselves, and that their request for regulation is simply asking for public subsidy, with taxpayers funding what the company was going to do anyway. Nick Couldry and Ulises Mejias. Data Grab: The New Data Colonialism and How to Resist It. (London: Allen Lane, forthcoming 2024).
Lawrence Lessig, Code and Other Laws of Cyberspace. (New York: Basic Books, 1999).
 Per-Olof H. Wikström, "Explaining crime as moral actions," in Handbook of the Sociology of Morality, edited by Steven Hitlin and Stephen Vaisey. (Berlin/Heidelberg: Springer Science & Business Media, 2010), 211-239.
 Brian Christian, The alignment problem: How can machines learn human values? (London: Atlantic Books, 2021).
 A tool can’t be unethical, but some tools are dangerous, and it makes sense to reduce dangers to the general population by regulating who gets access to poison, explosives, earthmoving machines or firearms. This may be worth considering for social media in future, especially if content becomes dominated by chatbot output.
 Geoff Cox and Winnie Soon, Aesthetic Programming: A Handbook of Software Studies, (London: Open Humanities Press, 2020), 30. http://aesthetic-programming.net
Shannon Vallor, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. (Oxford: Oxford University Press, 2016); Ibo van de Poel, “Embedding Values in Artificial Intelligence (AI) Systems”, Minds and Machines 30, no. 3 (2020): 385–409, https://doi.org/10.1007/s11023-020-09537-4
 For a spiritual perspective on the values and meanings that are inherent in consciousness, see Peter D. Hershock, Buddhism and intelligent technology: Toward a more humane future. (London: Bloomsbury, 2021)
Herbert A. Simon, "Designing organizations for an information-rich world". In Computers, Communications, and the Public Interest, ed. Martin Greenberger (Baltimore, MD: Johns Hopkins University Press, 1971), 38-52.
 Per-Olof H. Wikström, Dietrich Oberwittler, Kyle Treiber, and Beth Hardie. Breaking rules: The social and situational dynamics of young people's urban crime. (Oxford, UK: Oxford University Press, 2012).
 Andreas Bandak and Paul Anderson, “Urgency and Imminence”, Social Anthropology/Anthropologie Sociale 30, no. 4 (2022), 1-17. Retrieved May 4, 2023, from https://doi.org/10.3167/saas.2022.300402. Bandak and Anderson point out that business leadership texts have even advocated the intentional use of urgency as a strategic asset, for example John P. Kotter, A sense of urgency. (Boston, MA: Harvard Business Press, 2009).
 Novelist Susanna Clarke offered an impressively clear vision of what the world might have been like, if the kind of promises now made by AI researchers to secure government funding had previously been used to realise the potential applications of magic. Susanna Clarke, Jonathan Strange and Mr. Norrell. (New York: Bloomsbury, 2004).
 Geoffrey Hinton, “Two Paths to Intelligence”. Public lecture in the University of Cambridge Engineering Department, Thursday May 25, 2023.
 Kate Devlin and Olivia Belton, "The Measure of a Woman: Fembots, Fact and Fiction". In AI Narratives, ed. Stephen Cave, Kanta Dihal and Sarah Dillon. (Oxford UK: Oxford University Press, 2020).
 Stephen Cave, "The problem with intelligence: its value-laden history and the future of AI." In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (2020), 29-35.
 Stuart Russell, Human compatible: Artificial intelligence and the problem of control. (London: Penguin, 2019)
Charles P. Snow, The Two Cultures and the Scientific Revolution. (Cambridge: Cambridge University Press, 1959). Many facets of the history and legacy of Snow's provocative claims are discussed in a special issue of Interdisciplinary Science Reviews: Frank A.J.L. James, "Introduction: Some significances of the two cultures debate". Interdisciplinary Science Reviews 41, nos. 2-3 (2016): 107-117.
Simon J. Lock, "Cultures of Incomprehension?: The Legacy of the Two Cultures Debate at the End of the Twentieth Century". Interdisciplinary Science Reviews 41, nos. 2-3 (2016): 148-166.
 Philip E. Agre, “Toward a Critical Technical Practice.” In Bridging the Great Divide: Social Science, Technical Systems, and Cooperative Work, ed. Geoffrey Bowker, Les Gasser, Susan Leigh Star, and Bill Turner, (Mahwah, NJ: Lawrence Erlbaum, 1997), 131–158.