So, how do we design software that will let us spend the hours of our lives more meaningfully, as attentive, conscious and creative moral agents, rather than being condemned to bullshit jobs while the machines pretend to be human? My simple answer is that we need better programming languages and less AI. Other commentators on AI have emphasised the need for new legal policies, removal of bias, transparency of algorithms, improved data rights and sovereignty, explanation of algorithmic decisions and so on. All of those would be helpful, and excellent books and articles on those topics have been published, but improved access to programming can contribute to them all.
In this project, I have tried to emphasise lessons of history, especially principles of human-centric design that have remained important across generations of technical approaches and different kinds of representational code. However, people often tell me that the latest AI technologies are so different to anything we have ever seen before that the lessons of history are no longer relevant. They say AI is now so different that lawyers and policy makers have no idea what the implications might be, and that government regulation will either come too late, or else arrive too early and impede progress. Perhaps it is true that AI suddenly became fundamentally different in 2023, but after 40 years in the field, I feel that many things have stayed the same, just as they did in previous decades when colleagues told me that fundamental changes had finally arrived. Although people might be right that history and policy are no longer relevant to their work, it’s also possible that those arguments look more attractive to someone who is simply not inclined to read history, and who doesn’t want to follow the rules.
People who think about AI ethics from the perspective of imagined future problems, rather than the historical practicalities of technical development, sometimes talk about the “alignment problem” for artificial general intelligence (AGI). The alignment problem is supposedly that an AGI will have its own ideas about what might be useful or ethical behaviour, and that those ideas might be different to what human users want to achieve. In the introduction to the book, I’ve explained why I am not persuaded that AGI will ever be a useful ambition, especially not the more-human-than-human notion of superintelligence. However, even with the kinds of stochastic parrots we have today, we can see how the alignment problem might work out in practice. To some extent, talk about the alignment problem returns to the themes of old science fiction novels, such as Isaac Asimov’s famous Three Laws of Robotics. In Asimov’s work, the robots are not like the machines in factories, but metal-bodied people - like those in Star Wars, or like the androids and Vulcans in Star Trek - having more or less the same capabilities as human characters, but with distinctive personal strengths and weaknesses. The Three Laws serve as plot devices, where the story turns on the need to circumvent or comply with one or another of the Laws.
In currently deployed LLM products, any “alignment” with the user’s values comes from text scraped off the Internet, from the guardrails added by the company, or from the prompt supplied by the users themselves. I’ve discussed in chapter 5 whether the needs of any single user can really be “aligned” with the values of the whole Internet. Users of the Internet have a pretty good idea of what kinds of things are out there, and we also know that any illusion of alignment between a single reader and the whole Internet is achieved mainly through filter bubbles, which hide the parts that you don’t want to see because they aren’t “aligned” with your personal values. When an LLM-based chatbot presents the content of the Internet via a fictional first-person conversation, the illusion that the whole Internet might have a single point of view, consistent with your own, is especially pernicious. The filter mechanisms become invisible, can’t be explicitly controlled, and depend on contextual factors that you can only imagine. The ideal of alignment could only ever be achieved if the code were openly visible and controllable. The best way for a computer to understand your goals is not through mysterious “value alignment”, but for you to tell it what you want through programming.
The title of this book refers to programming in the broadest sense, going beyond old-fashioned views of “code” as green text on a terminal screen. I originally used the phrase Moral Code to point out that the way we design programming languages carries moral commitments, because attending to abstraction offers ways for people to control and construct their conscious lives. The phrase has also offered a convenient acronym, reminding us that moral codes do not need to look like the programming languages of 50 years ago. MORAL CODE is an agenda for the design of programmable computers that offer More Open Representations, Access to Learning, and Control Over Digital Expression. These things let us attend to our own needs as creative moral agents and authors of our own lives.
I’ve explained that many recent demonstrations of AI involve little more than scraping, re-arranging and re-selling the intelligent online work of other humans, whose contributions are hidden inside a kind of computational fancy dress. The results can be an entertaining pastiche, but fundamentally achieve their intelligence in the same way as von Kempelen’s chess-playing “Turk” from the 18th century: through the humans hidden inside. We have always found it entertaining when machines behave in human-like ways, even when the behaviour is a magic trick, and the magic tricks of today’s AI demonstrations are no exception. We don’t always want to spoil the illusion by drawing attention to the man behind the curtain.
However, there are some dangers in taking these entertaining technologies at face value. The less serious danger is that many AI experts downplay the difference between the two kinds of AI: objective mechanical tools and imitation of human subjectivity. This fallacy encourages magical thinking, in which future AI systems are imagined as being able to do everything a human can do and more. It has little practical impact today, except where it misguides important decisions being made by investors and policy-makers.
A more serious problem is that real people are doing the intelligent work behind the scenes, often with exploitative working arrangements, or even working for free, at the same time as huge profits are being made by people who are not doing the basic labour. Real hours of conscious attention are dedicated to training the machine, turning original thoughts and words into aggregated components. Perhaps the world has always been this way, and it would be overly idealistic to argue for anything else. But the dangerous fiction of AI is that the machine is doing this work all by itself, that no humans are involved, and that this means nobody deserves to be recognised and more fairly compensated, or to receive any share of the profits being made.
If we were to replace the search for better AI with a search for better programming languages, it would restore an earlier ambition of computer science, which is getting computers to do useful things. Everybody has their own personal goals and priorities, and what is useful to one person will not be useful to another. For computers to be more generally useful, each person should be able to describe what they want, telling a computer how they want it to behave differently in future.
It might seem that this language of instruction could simply be our natural human language. Indeed, many current products are being developed on the assumption that natural language will be the best programming language. However, past evidence and recent experiments both suggest that a more nuanced approach is required[1]. The most successful “languages” for instructing computers have often involved visualisations, creating new kinds of abstract notation suited to particular information-processing and design tasks. Human language was not designed for such tasks, and relying on speech as a technical notation is unhelpful in many ways. We need to think more flexibly about every user interface as a kind of programming language, asking what kind of notation it offers, what we can use that notation for, and in what ways it allows us to define and control the future behaviour of the machine.
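To make the contrast concrete, here is a minimal sketch - the scenario, names and threshold are hypothetical, invented purely for illustration and not drawn from any particular product - of an intention expressed as a small piece of explicit notation, rather than as a natural-language request such as “show me my unusually large payments”:

    # A hypothetical end-user rule, written as explicit code rather than a
    # natural-language prompt: the definition of "unusually large" is visible,
    # adjustable, and will behave the same way tomorrow as it does today.
    THRESHOLD = 1000  # the user's own threshold, not a guess made by a model

    def flag_large_payments(payments):
        """Return only the payments whose amount exceeds the user's threshold."""
        return [p for p in payments if p["amount"] > THRESHOLD]

    payments = [
        {"payee": "rent", "amount": 950},
        {"payee": "laptop", "amount": 1800},
    ]
    print(flag_large_payments(payments))  # [{'payee': 'laptop', 'amount': 1800}]

The point is not that everyone should write Python, but that a notation like this, however simple, lets its author see, question and change exactly what the machine will do.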
Sadly, this technical manifesto promoting Moral Code requires some progress in the opposite direction to many research and business trends of today. In Shoshana Zuboff’s comprehensive explanation of Surveillance Capitalism, businesses grow by capturing the “data exhaust” of human attention and intelligence. That human data is repackaged and resold with the objective of capturing even more human attention, delivering it to advertisers, who in turn use it to influence the habits and decisions of human customers. Similarly, Rebecca Giblin and Cory Doctorow document how the creative industries have fallen prey to Chokepoint Capitalism, where a single company becomes the only conduit between artists and their audience, able to devalue creative work through algorithmic repackaging while minimising the compensation to human creators.
Profitable surveillance and chokepoint businesses do not offer control to their customers, and do not benefit from customers who are free to redirect their attention. This is the exact reverse of Moral Codes, in which people would have the opportunity to instruct the computer, and would be able to make their own decisions on whether to invest attention in programming or not.
Accounting for attention - or rather the limited number of conscious hours in a human life - is perhaps the largest moral problem of all. At a purely physical level, consciousness could be measured as information processing - when I am awake (and sober), I have a fairly low limit on the number of things I can attend to. There is certainly a finite limit to the hours of my life that I can spend being attentive. Many of the experiences that fulfil me as a person, whether the comforting familiarity of a song I love, the playful creativity of optimal flow, or simply time spent in the same space with my family or colleagues, are valuable precisely because I am devoting attention to them.
We are also famously living in the era of the attention economy, where the largest and most profitable businesses in the world are those that consume my attention. The advertising industry is literally dedicated to capturing the conscious hours of my life and selling them to someone else. It might seem magical that so many exciting and useful software systems are available to use for free, but it is now conventional wisdom that if you can’t see who is paying for something that appears to be free, then the real product being sold is you. Our creative engagement with other people is mediated by AI-based recommendation systems that are designed to trap our attention through the process that Nick Seaver calls captology[2], keeping us attending to work sold by one company rather than another, replacing the freedom of personal exploration with algorithm-generated playlists or even algorithm-generated art.
Every aspect of the alternative Moral Codes agenda might be measured in terms of attention costs and benefits using information theory: More Open Representations allow information to be exchanged, Access to Learning allows it to be acquired, and Control Over Digital Expression allows it to be expressed. If computer users had access to appropriate notations - Moral Codes - they would be able to use simple automation to spend the conscious hours of their lives in ways that are less mechanical, rather than more. If computer interfaces were designed as notational spaces, they could offer freedom and negotiation, even forms of social organisation - complex assemblies of intelligent decision making and deliberation that respect the humans creating them, rather than pretending humans were not involved.
This book has argued for an alternative technical agenda in which machine learning algorithms could be offered in the service of coded instructions, rather than overriding human control. This is only a technical agenda, and much research remains to be done, but it does represent a clear alternative. Instead of pursuing AI that consumes, packages and re-sells our precious hours of conscious life, we deserve greater control over our own attention, including the ability to instruct computers to carry out mechanical tasks on our behalf, to recognise how they work, and to use them for our own creative purposes. All these things can be better achieved with codes that are designed for doing so, not by building machines that pretend to be human. The less time we spend attending to machines, the more we can grow as people, building richer cultures and societies, not worrying whether machines can be conscious, but being conscious ourselves, and being human. Moral Codes that would let us do this are a far more exciting thing to imagine than stories about robots of the future.
[1] Michael Xieyang Liu, Advait Sarkar, Carina Negreanu, Benjamin Zorn, Jack Williams, Neil Toronto, and Andrew D. Gordon, “‘What It Wants Me To Say’: Bridging the Abstraction Gap Between End-User Programmers and Code-Generating Large Language Models,” in Proceedings of the 2023 ACM CHI Conference on Human Factors in Computing Systems (2023), 1-31.
[2] Seaver, Computing Taste