Chapter 14: Re-imagining AI to invent more Moral Codes

Friends in English faculties are amused when I explain that AI is a branch of literature, not a branch of science. It’s true that a lot of AI research has been based in computer science departments like my own. But there is an old academic joke that any discipline needing the word “science” in its name is probably not a science. We don’t talk about physics science, chemistry science, or biology science. When it comes to engineering science, political science, or - yes - computer science, does this mean there was some doubt about how scientific the subject really was?

The natural sciences originated in looking objectively at the world, making measurements, and developing theories from the evidence[1]. This is in contrast to research in the arts (whether literary, depictive or mechanical), which studies things that we make. Two big differences are, firstly, that in the arts we want to make the world better rather than just observing how it is, and secondly that the things we make can change as a result of what we say about them. Technology is a thing that we do, not a thing that happens to us. Neither of those is true of the natural sciences, where scientists try not to be normative (making judgements about what should be, rather than simply reporting what they observe), and not to make any intervention that might bias their measurements.

Before going on, it’s also important to emphasise that we should not be distracted by whether a discipline uses numerical measures and mathematical calculations. Maths is often associated with how “scientific” something seems to be, but without proper justification[2]. A researcher might work quantitatively by counting the different words used in James Joyce’s Ulysses[3], but just because they are using maths, this doesn’t mean that they have suddenly become a physicist rather than a literary scholar.

AI is a branch of literature because it is a work of imagination. All AI research starts with some kind of fantasy about what a computer might be able to do in the future. The daily work of an AI researcher, just like a novelist or playwright, involves typing on a computer keyboard to produce a text. If things go well, a successful AI program, as with a successful novel or play, fills in enough convincing detail so that the initial concept becomes a fully realised imaginative world. Literary texts are evaluated when the results are presented or performed in front of an audience. In the case of a play, this might involve using stage machinery to present an illusion to the audience in a theatre. In the case of an AI program, the evaluation comes when it is performed (or “executed”) with computing machinery to present an illusion to the audience on a display screen[4]. The value and significance of literary works, whether poems, plays, novels or AI programs, are decided by how the audience reacts, by what the critics say, and most importantly, by whether people want to see more of this kind of stuff, perhaps after they have considered comments from the critics.

The idea of machines that behave like people is not new in literature - there have been stories like this for 2000 years or more, and artificial humans were a regular feature of plays, novels and films long before the invention of the Turing Test or the first research meetings and funded projects in AI. But just because something has been imagined, we are not obliged to build it. Thought experiments can simply be great literature. Franz Kafka’s Metamorphosis explored the consequences of a man turning into an insect. We appreciate the implications without sponsoring research into human-insect studies or trying to create artificial insects by genetic manipulation. Artificial general intelligence, a useful thought experiment in Alan Turing’s Test, does not need to become a design objective, just as Kafka’s creation can be appreciated without trying to design more human-like insects.

AI and the entertainment industry

It has been remarked that AI is the branch of computer science trying to make computers work the way they do in the movies. This is a perfectly valid objective. If AI were recognised as a branch of the entertainment industry, it could join the other parts of computer science already devoted to entertainment. We enjoy digital effects in movies, and explore imaginary worlds in virtual reality headsets. Sophisticated theatrical illusions and stage machinery have been a focus of innovation for humanity over centuries. AI fits comfortably alongside computer graphics and virtual reality within that entertainment tradition, and I will continue to enjoy them all. The necessary AI hardware is already installed at movie companies and in gaming consoles, because the deep neural networks used in AI research run on the same graphics processing units (GPUs) that power CGI animation and virtual reality games.

By this stage of the book, you probably won’t be surprised that I consider AI to be a branch of the entertainment industry, best studied by academics trained in literature (perhaps with some mathematical tools from the digital humanities), rather than attempting to treat it as a natural science. However, I wouldn’t want to suggest that entertainment and literature are unimportant. I think we should have more, and better. I also think academics should contribute to the entertainment industries among other branches of culture. One of the largest AI research centres in my own university right now is the Leverhulme Centre for the Future of Intelligence, which is based in the Philosophy faculty rather than Computer Science, and employs experts in science fiction criticism on its academic staff.

This might be a useful point to remind readers how at the start of the book I made a distinction between two kinds of machine-learning “AI”. One of them is purely an engineering technique for building better closed loop control systems that interact with the physical world. The other kind is to do with observing and simulating human behaviour, although as I have explained at length, the simulation is really a kind of institutionalised plagiarism, in which actual behaviour of real humans is simply recorded and regurgitated as pastiche - neither original nor a copy[5]. Any appearance of surprising originality is simply random noise, which might sometimes be interesting when throwing dice for creative purposes, but only in the context of the right kind of game or artwork. This second kind of AI, the kind dedicated to the Turing Test, is a branch of the entertainment industry. The first kind, dedicated to measuring and acting in the physical world, is an engineering field reliant on understanding the relevant science.

Representations for creative engineering

This has not been a book about engineering problems, although AI engineering certainly has its place (and I worked as an AI engineer myself for half a decade). But humanity is facing many problems that are problems of imagination, rather than problems of natural science. Indeed, human wars, famine, pandemics and the climate emergency all show us how engineering efforts, when lacking the right kind of imagination, have dreadful consequences. Perhaps the word “entertainment” is too glib, but there are many potential benefits from more creative opportunities for digital expression.

For that reason, I want to end this book by advocating how machine learning algorithms can be incorporated as an element of better creative tools, helping us to make literature and art that reimagines the world. Sophisticated works of literature have always been reliant on sophisticated information representations, and the complexity of a large and thoughtful literary novel, stage performance or movie is easily comparable to a large computer program. I have worked on many projects that studied the tools used by artists, to gain insights into how computers might support them, and enable others to create great works.

For example, I spent a day with author Philip Pullman, who had visited Cambridge as a guest of Microsoft Research, to talk about the tools that he used to write the His Dark Materials trilogy. He explained a complex working process, involving walls full of sticky notes and colour codes, voice dictation of a fluent narrative that would be typed out before cutting up and reorganising, and so on. All this sounded very much like the kinds of activity necessary to create a complicated computer program, which also involves revision and reorganising, keeping all the various interconnections in mind as the named parts are moved around for structural clarity.

In planning the future relationship between programmable systems and machine-learning systems, we must keep in mind what kinds of engineering tools we will need to organise our own thinking. Software tools have become an essential element in the design and operation of engineering products including structural girders, field irrigation systems, fabric stitching and heating boilers. It is important to understand that those kinds of physical engineering are quite different to social engineering, such as wars and taxation systems, which are primarily works of imagination - the results of what people want and do to each other, rather than to the physical world.

Nevertheless, in this century, all of these things (taxation as much as irrigation) rely critically on software. This means that there are programmers who work on them all. Machine learning algorithms are useful in all kinds of problems, including these, and programmers should use those algorithms when they are relevant.

Many of these problems, both physical and social, also offer opportunities for non-programmers to have more control over their surroundings, and to use computer technologies to reduce the extent to which we are dehumanised by systems that wastefully consume our attention rather than allowing us to attend to the business of being human. As explained in earlier chapters, being in control of our own attention requires ways of telling computers what to do. We would all benefit from more diverse ways to achieve this, suited to the great diversity of situations where computers can be helpful.

However, the considerations in designing tools for practical physical action are different to creative tools that help produce works of literary imagination. Muddling the two kinds of system together is unhelpful, both in critique of AI, and also in the design of more programmable and controllable software tools. A very current example of problematic muddling is the concept of the “autonomous vehicle”, which has some design elements that are straightforward control systems (for example keeping a certain distance from the edge of the road and travelling at a certain speed), and other elements that rely on social imagination (observing speed limits, or knowing how to behave politely at an intersection). Some can be “objectively” explained in mathematical terms as being wholly determined by physical measurements, while others result from argument, negotiation, and attending to the cultural business of human intentions and social conventions.

How can we go about imagining new systems that allow users greater control, agency, accountability, transparency, and all the other positive features that are advocated in human-centred AI initiatives?

Earlier chapters have shown how, if we are going to tell computers what we want them to do, we need languages for instructing computers. In the early days of computing, these were described as programming languages. Computer scientists do still use the phrase “programming language”, and it has become a routine feature of school education, skills initiatives for “learn to code”, and a central meme in geek culture events such as live-coding algoraves. However, the examples in this book have shown how many interactive abstractions deliver the power of code, without looking like stereotypical programming languages any more. They use the computer display to visualise algorithms and data, offering many different ways of instructing computers how to behave in future. I’ve described the work of some innovators who integrate machine learning methods with programmable systems, offering mixed-initiative interaction where the computer might offer to help, but the user remains in control of the machine and of their own attention.
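To make the idea of mixed-initiative interaction concrete, here is a minimal sketch (in Python, with invented names, not drawn from any real system) of the control relationship I have in mind: the system may offer a suggestion learned from the user’s own past actions, but nothing is executed until the person accepts, edits, or ignores the offer.

```python
from collections import Counter

class SuggestionAssistant:
    """A deliberately simple stand-in for whatever learning method is used."""

    def __init__(self):
        self.history = []  # actions the user has actually performed

    def record(self, action):
        self.history.append(action)

    def suggest(self):
        """Offer the user's most frequent past action, or nothing at all."""
        if not self.history:
            return None
        action, _count = Counter(self.history).most_common(1)[0]
        return action

def next_turn(assistant, choose):
    """One interaction turn: the machine may offer, but the person decides."""
    offer = assistant.suggest()
    if offer is not None:
        print(f"Suggestion: repeat '{offer}'? (accept, edit, or ignore)")
    action = choose(offer)          # the user always has the final word
    assistant.record(action)
    return action

# Example: a user who acts afresh the first time, then accepts the offer.
assistant = SuggestionAssistant()
next_turn(assistant, lambda offer: "file expenses")
next_turn(assistant, lambda offer: offer or "file expenses")
```

The frequency count here merely stands in for whatever model might actually be used; the point of the sketch is that the initiative can come from either side, while the final decision always rests with the person.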

This huge diversity of interactive displays, offering different combinations of data, algorithm, inference and control specification, can be understood as notational systems - what is the screen showing, how is that display structured, and what must the user do to understand or modify the structure? In the least programmable forms of AI, such as pure voice interaction, the display might be completely invisible, or perhaps a single coloured light. In many other applications, an organisation might be willing to provide information on a display screen, but be reluctant for users to modify what they see, to control the algorithms that have produced the display, or to describe exactly what the user (rather than the company) wants or needs.

If we think of user interfaces and computer displays as structured notations, where the layout and design of the display helps users to see how the structure might be modified, we have created a computer designed to be instructable rather than intelligent. Design for Moral Codes recognises that everything we see on the screen of a computer is some kind of graphical, textual, numeric, diagrammatic or pictorial code.

The systematic study of computer displays as notation systems originally came out of research into the usability of programming languages, but has been supplemented with insights from research in graphic design, applied linguistics, cognitive science and the history of engineering, among others. My PhD supervisor Thomas Green was one of the first to see the need for a universal theory of notation combining insights from all of these disciplines, which could be applied to the design of all kinds of information systems. Green’s Cognitive Dimensions of Notations framework[6] has been adopted and extended in many ways, including a proposed Physics of Notations that business school researcher Daniel Moody hoped might become a quantifiable science of engineering diagrams[7], and my own Patterns of User eXperience that considers all the broader purposes and contexts for which interactive notational displays might be relevant[8].

Design Patterns for Moral Codes

More detailed guidance for the designers of notational systems can be found in many other academic publications, but to give a flavour of the types of design properties that might feature in the future design of Moral Codes, I can present an overview of the design properties described in my Patterns of User eXperience (PUX) framework. PUX is directly inspired by the architectural pattern language of Christopher Alexander, which was adopted by the Smalltalk practitioners of Chapter 7 as a way of describing how programmers might “live” within the abstract world of software tools. Unfortunately, their original ambition for a pattern language accessible to all users has since been reduced to a set of rather mundane software construction tricks, but my own approach returns to the more powerful idea of how to structure user experience. 

Category 1: Reading Code. The first category of user experience pattern relates to how we read code. Even people who prefer not to create coded information structures themselves benefit from being able to read the codes created by others (often called “explainability” or “transparency”). This is true whether the code was created by people (perhaps government policy makers defining new regulations in an algorithmic form), or created automatically, where machine learning algorithms identify an opportunity to automate mundane actions, or a language model recommends reusing a piece of code extracted from an online library. Particular kinds of experience occur repeatedly when reading a complex coded structure. These include searching for a particular piece of information, comparing one piece of information to another (possibly across different pages, parts of the system, or different applications), and even sense-making, where the reader needs to gain an overall impression of what a complicated system is doing, or what its designers expect the reader to do.

For these user experiences of readership, there are notation properties that we know will be helpful. These include (obviously) making the relevant part of the code visible at all, but also presenting it in a way that is clear, concise, and visually draws your attention to the parts you need. This might involve viewing controls such that you can see detail within the relevant larger context (not always possible with simple zoom and scroll), or having paths marked out within the structure, so that you can navigate from one place to another. We also need to think about the linguistic properties of the notation: are the elements of the code meaningful because they look like what they describe, and is it possible to tell the difference between different things, while also recognising similar things that look similar? Clever diagrammatic notations allow the reader to recognise new patterns, even where the designer had not anticipated specific questions, by including structural correspondences between the graphical properties (such as linking, containment, colour) and the structure of the problem domain.
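The difference such properties make can be seen even at the scale of a few lines of conventional code. The sketch below (a contrived Python example of my own, not taken from any real system) performs the same calculation twice: the first version is terse but opaque, while the second uses names, layout and a visible constant as secondary notation that supports searching, comparing and sense-making.

```python
# The same calculation written twice. Both behave identically; only the
# notation differs, and with it the reader's experience.

def f(a, b, c):
    return a * b * (1 + c) if c < 0.25 else a * b * 1.25

def order_total(unit_price, quantity, tax_rate):
    TAX_CAP = 0.25                        # tax above this rate is capped
    subtotal = unit_price * quantity
    effective_rate = min(tax_rate, TAX_CAP)
    return subtotal * (1 + effective_rate)

# The computer cannot tell the difference between the two versions.
assert f(10.0, 3, 0.1) == order_total(10.0, 3, 0.1)
assert f(10.0, 3, 0.4) == order_total(10.0, 3, 0.4)
```

Nothing about the computer’s behaviour changes between the two versions; only the reader’s ability to search, compare and make sense of them does.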

Category 2: Creating Code. Beyond the ability to use a notational system to read information from an encoded information structure (already a desirable advance over current AI-based government and business systems), this book has emphasised the ambition not to surrender control to such systems, but to provide the ability for users to write code - telling the computer what they want to do. Giving instructions to a computer means writing a program of some kind, even if it is a very simple one. In a notational system, this means that the user changes the representation in some way, which is the second category of user experiences.

At the very simplest, a user might want to add a single instruction. This could be the first thing they are asking for (in which case a one-step program might be very simple). But before long, it could be useful to add another step, or add information to a structure originally created by someone else - perhaps a further item in an existing list, or to note an exception or special requirement. If all the necessary information is to hand, even a complex task might be a simple matter of copying things from one notation to another, one step at a time, like a well-organised person filling in their tax return after they have all the necessary documents to hand and in the right order.

It is more challenging to modify a structure that someone else has created, to adapt it to your own needs. In professional software engineering, this kind of “refactoring” is understood to be both challenging and risky, and it was the insight that led Thomas Green to identify the “sticky problem for HCI” - the cognitive dimension of viscosity. A notational system providing Moral Codes should consider the possibility of change. More ambitious still, the most adventurous way to code is through “exploratory design”, when you start work without knowing what structure you are hoping to create, as I did myself when building the Palimpsest system described in chapter 8. Most creative and intellectual tasks are like this to some extent, and exploratory design can be a very satisfying activity. But it does introduce special constraints and requirements on the notation being used.
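Viscosity can be illustrated with a deliberately tiny example. In the hypothetical Python sketch below, a single design decision (the number of lines on a page of a report) is either repeated throughout the code or stated once. The first version resists change, because one change of mind requires several coordinated edits; the second absorbs the same change with a single edit.

```python
# A toy illustration of Green's "viscosity": how much work one change costs.
# Suppose the page size of a report later changes from 40 lines to 60.

# High viscosity: the decision is repeated, so one change of mind means
# several edits, and missing any of them introduces a silent inconsistency.
def page_count_v1(lines):
    return (len(lines) + 39) // 40

def paginate_v1(lines):
    return [lines[i:i + 40] for i in range(0, len(lines), 40)]

# Lower viscosity: the decision is written down once, so a single edit
# changes every behaviour that depends on it.
PAGE_SIZE = 40

def page_count_v2(lines):
    return (len(lines) + PAGE_SIZE - 1) // PAGE_SIZE

def paginate_v2(lines):
    return [lines[i:i + PAGE_SIZE] for i in range(0, len(lines), PAGE_SIZE)]

report = [f"line {i}" for i in range(95)]
assert page_count_v1(report) == page_count_v2(report) == 3
```

The same trade-off appears in every notational system, whether or not it looks like program code: the question is always how much work it takes to act on a change of mind.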

The most basic properties of a notational system for people to control their own information structures are access to the tools, and a system that preserves what you have done. If making changes, it’s important to know what the function of each part is, allowing you to make changes fluidly when necessary, and to be steered toward the specific actions you need. Those actions must somehow match your own idea of what you are trying to achieve, and if repetitive, may also need to be automated (an ideal opportunity for mixed-initiative machine learning).

Modifying the structure of a notation is where computer interaction starts to look like programming. For organisations that try to control what their customers and workers do within very limited parameters, these kinds of facilities will not be available at all. Standard user interfaces are very restricted in the amount of structural modification they allow, especially on mobile devices where modification might be limited to no more than “undo” (if you are lucky). The kinds of professional programming capability that could be extended to users with Moral Codes would allow users to change their mind, to be non-committal, and to try out partial products so that the implications of changes to a complex system can be investigated and evaluated (languages for creative learning, such as Scratch or Sonic Pi, often prioritise such capabilities). There may well be dependencies within the notation such that some things need to be done before others, in which case that order should correspond naturally to how people think about the problem. It may even be necessary for users to extend or modify the notation itself, including inventing new names. Very importantly, many people have things to say that do not fit within the formal model of the notation - programmers call these “comments” and they are a feature of every professional programming language. Some educational languages, and many other kinds of notation, don’t provide any way to add useful information outside the formal structure, in the same way that you might add a pencilled note on a page of this book.

Supporting creative activities through exploratory design requires a special kind of notational system, where the ability to change your mind is prioritised. The kinds of random surprise described in chapter 12 on creativity can be valuable tools for creators, as are notations that are less formal, even ambiguous, so that an artist might see different things each time they look at the screen. Some of those properties would be frustrating and unhelpful in technical and business contexts, meaning that notations designed for artistic creativity often look very different to engineering documents.

Interestingly, because computer science researchers spend so much time creatively exploring novel structures, computer science research languages like Haskell are also rather popular in creative arts contexts such as live coding[9]. At the same time, computer science research languages tend to be unpopular in practical and business applications - a fact sometimes puzzling to computer science researchers, but not so surprising in the light of the analysis I’ve provided here, where the trade-offs between these different priorities in the design of new Moral Codes should have become quite clear[10].

Category 3: Code and Society. Everything I have described so far is framed as though it is the experience of one person, attempting to achieve their own objectives (whatever those are) by expressing them in a coded formal notation. To some extent, that has been the tone of this whole book - the embattled individual versus the world, in which that person is beset by companies and governments conspiring to frustrate, oppress and exploit them through software. That single-user perspective is indeed quite conventional in the field of human-computer interaction, which originally emerged from human-factors engineering, where the operator’s control panel was often described as a “man-machine interface” in military and industrial settings. Nevertheless, I do understand (and take a strong interest in) the way that people do not use software systems in solitary confinement, but often deal with problems in consultation with their friends and family, or interact with organisations through direct contact with human employees rather than software surrogates of those employees.

This is the third category of user experiences that we need to support. When the code of a notational structure is shared among a group of people, or becomes part of a social activity, the formal visual structure of the notated data and algorithms needs to be integrated into those social structures. Social situations are structured in ways that all of us have experienced since childhood, where visual material might illustrate a story, focus a discussion, persuade an audience, or simply collect and organise information. In these situations, notational codes should support social structures rather than disrupt or replace them. Many of the features already described are valuable in these situations, including the ability to escape the formality of the system and support diverse interactions between people. But there are also specific dynamics within the structure of particular social conventions, such as when contrasting alternative ideas for dramatic effect or perhaps persuasive purpose. A notation that supports comparison, on a display that can focus or draw attention to different things, becomes a resource for particular kinds of social action, for example when students in a lecture theatre use a WhatsApp group to exchange commentary on the slides they see.

This section has summarised findings from many years of work, and I hope that themes raised in earlier chapters can be recognised - especially the lesson that different notations are good for different purposes, that a mix of formal and informal is important, and that there are trade-offs between design decisions that will make a certain notation better for one purpose but not another. For designers who would like to dig into this topic more deeply, designing their own new notations, or identifying ways that existing interfaces can be made more controllable, I have published many academic papers with further examples and case studies. The principles above are only a brief summary of the ways that I apply Patterns of User eXperience, building on the original insights of Thomas Green and his collaborators, including Marian Petre, Rachel Bellamy and Luke Church[11]. I have provided an appendix in chapter 16, with suggestions for further reading.

Much of the early history of programming language design made the assumption that there would be one optimal way to program a computer, just like there is a standard mathematical notation for numeric digits or for algebra. Even today, there are programming language researchers who argue for one specific feature (usually their personal favourite) that must be included in every new language. Such researchers sometimes complain about unconventional languages, such as the graphical ones used by children, or perhaps refuse to recognise that some kinds of code (like the GUI or the spreadsheet) are really programming languages at all.
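The spreadsheet case is worth pausing on, because it shows how little is needed for something to count as a programming language. The following sketch (an invented three-cell example written in Python, standing in for a real spreadsheet engine) treats each cell as a definition and each formula as code that refers to other cells; evaluation follows the dependency structure, not the order in which the user happened to type things.

```python
# A minimal sketch of why a spreadsheet is a programming language: each cell
# is a definition, formulas reference other cells, and evaluation follows the
# dependency structure. The three-cell "sheet" below is purely illustrative.

sheet = {
    "A1": 40,                                   # hours worked
    "A2": 15,                                   # hourly rate
    "A3": lambda get: get("A1") * get("A2"),    # the formula =A1*A2
}

def value(cell):
    entry = sheet[cell]
    return entry(value) if callable(entry) else entry

print(value("A3"))   # 600 -- recomputed whenever A1 or A2 changes
```

Anyone who types =A1*A2 into a cell has, in this sense, written a small program, whether or not programming language researchers are willing to call it one.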

A comparative and human-centred approach to programming language design and critique allows us to recognise many more opportunities. In particular, recognising that a programming language is how we tell a computer what to do raises the possibility that we might tell computers to do different kinds of things, and say this in different ways, some resembling literature more than engineering. Understanding the way in which every user interface is also a kind of programming language, including all the design potential discussed in this section, helps us to see the opportunities for new Moral Codes.

This can also be a very different way of thinking about AI. For a typical AI researcher, the programming language is a thing they (the researcher) use, not something their customers or end-users should ever expect to see. AI research languages are often designed to support creative exploration by the AI researcher, but absolutely not creative exploration by the citizen or customer of the organisation deploying an AI product.

This is the reason why the design options for creating new formal notations are an essential ingredient for turning intelligent user interfaces into Moral Codes. Many companies superficially promise their customers a degree of control over what the computer will do. But if the display doesn’t look like a programming language or diagram, we should ask which aspects have been designed to support control. If an AI company offers only natural language interaction, we need to ask ourselves whether this product will allow us to tell the computer what to do, or whether the goal is for the computer to tell us what to do.



[1] I am not a historian of science, but gained some insight into the development of the “natural sciences,” as they are called in Cambridge, through my time serving on the Council of the Cambridge Philosophical Society. See Susannah Gibson, The Spirit of Inquiry: How one extraordinary society shaped modern science. (Oxford University Press, 2019).

[2] Alan F. Blackwell, “Wonders without number: The information economy of data and its subjects”. AI and Society (2022). https://doi.org/10.1007/s00146-021-01324-8

[3] This specific example alludes to the work of Max Bense, as reported by Bronaċ Ferran, Auto-poetics: Hansjörg Mayer’s titles of the nineteen-sixties. Unpublished PhD thesis (Birkbeck College, 2023), p. 166. https://eprints.bbk.ac.uk/id/eprint/51176/

[4] Brenda Laurel, Computers as theatre. (Reading, MA: Addison-Wesley, 2013).

[5] Hoesterey, Pastiche

[6] Thomas R. G. Green, "Cognitive dimensions of notations." In People and Computers V: Proceedings of the Fifth Conference of the British Computer Society, edited by Alistair Sutcliffe and Linda Macaulay. Cambridge: Cambridge University Press, 1989, 443-460; Thomas R. G. Green and Marian Petre, "Usability analysis of visual programming environments: a ‘cognitive dimensions’ framework," Journal of Visual Languages and Computing 7, no. 2 (1996): 131-174.

[7] See Daniel Moody, "The “physics” of notations: toward a scientific basis for constructing visual notations in software engineering," IEEE Transactions on Software Engineering 35, no. 6 (2009): 756-779; and for an assessment of how effective this approach has been, Dirk Van Der Linden and Irit Hadar, "A systematic literature review of applications of the physics of notations," IEEE Transactions on Software Engineering 45, no. 8 (2018): 736-759.

[8] Blackwell and Fincher, “PUX: Patterns of user experience”; and Alan F. Blackwell, “A pattern language for the design of diagrams,” to appear in Elements of Diagramming: Design theories, analyses and methods, ed. Clive Richards. (Abingdon UK: Routledge, forthcoming).

[9] Alex McLean, "Making programming languages to dance to: live coding with tidal," in Proceedings of the 2nd ACM SIGPLAN international workshop on Functional art, music, modeling and design (2014), 63-70.

[10] For further consideration of alternative design trade-offs between the priorities of business users and of computer scientists, see George Chalhoub and Advait Sarkar, "“It’s Freedom to Put Things Where My Mind Wants”: Understanding and Improving the User Experience of Structuring Data in Spreadsheets," in Proceedings of the 2022 ACM CHI Conference on Human Factors in Computing Systems (2022), 1-24.

[11] Blackwell, “A pattern language for the design of diagrams.”
