Chapter 10: The craft of coding

The message of this book is that the world needs less AI, and better programming languages. If we want to tell computers what to do, we need a language of some kind to instruct them with. Although it’s possible to use natural human language when interacting with computers, past attempts to replace programming languages with human languages have not been promising. Human language, after all, has been optimised over millennia for a single purpose: communicating with other humans. Human society is founded on language, as are science, the arts, business and scholarship - and of course the book you are reading, as well as all the professional work and academic research that has prepared me for writing this, and you for reading it.

It has often seemed as though the ultimate goal of natural language interaction research is driven by the challenge of the Turing Test - achieving online conversation in which a computer is indistinguishable from a human. In the introduction to this book, I argued that the Turing Test should be treated as a philosophical thought experiment rather than a serious engineering goal, and that the companies “winning” the test are able to do so only when they make their customers more stupid, while also needlessly consuming the precious resource of conscious human attention.

If we were to abandon the ideal of making computer dialog resemble human dialog, is there any other reason to believe that natural human language, which has evolved specifically for humans to talk to each other, would also be the right tool for programming computers? Making that assumption without good justification seems worryingly like the fallacies of skeuomorphism that I discussed in the last chapter. Why don’t we just design better programming languages that would be usable by a wider range of people?

I say this with some conviction, and I have been saying it for many years, but I have to admit many computer scientists and experts in human-computer interaction don’t agree with me. The justifiable reason for their scepticism is that, despite years of research into end-user programming languages, we have not seen a huge surge in the popularity of programming. Most computer scientists believe that programming languages will continue, in future, to be specialist tools for use only by professional programmers, and that the only practical option for regular people will be to use some variant of natural language rather than learn a new way of “speaking” code[1].

In the previous chapters of this book I have argued for the value of a relatively simple kind of programming, where we can avoid mundane repetition by using algorithms to help us with repetitive actions. I have also explained why the most successful programming languages for end-users do not look much like conventional text coding - including both the spreadsheet, which has transformed business by allowing people to create programs while looking at their own data, and the graphical user interface, which has made straightforward digital operations easier to think about in relation to physical experiences such as containment and object persistence.

Computer scientists have invented many specialist programming languages over the past 70 years, supporting many different ways of expressing algorithms, but most of these programming paradigms have never been made accessible through the kinds of design innovation seen in the spreadsheet or GUI. Some programming languages, like BASIC, Pascal and Python, were specifically designed to be more straightforward to learn and use, and became widely popular for business use after being taught in schools and universities. Educating students in particular kinds of code produces a workforce who feel comfortable with that way of thinking when they meet it in a professional setting.

However, there is still a big difference between the kinds of end-user programs I have described in the previous chapters - which, if implemented in Python, might require two or three lines of code (to automate some simple repeated action) or perhaps 20 or 30 lines (to complete a typical spreadsheet calculation) - and the work of professional programmers. Professional programmers occasionally do these simple little jobs, but more often they work with programs 300 lines long, or 3000, or in a huge team maintaining 3 million lines of code for an operating system.
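To make the small end of that scale concrete, here is a sketch of the kind of two-or-three-line Python script I have in mind. The folder and the file-naming convention are illustrative assumptions of my own, not an example from a real project:

```python
# A hypothetical three-line automation: put the date first in every report's file
# name, so that "Sales 2022-09.txt" becomes "2022-09 Sales.txt".
from pathlib import Path

for report in Path("reports").glob("*.txt"):      # assumed folder of report files
    title, date = report.stem.rsplit(" ", 1)      # assumes each name ends with a date
    report.rename(report.with_name(f"{date} {title}{report.suffix}"))
```

A script like this does its one job and is then thrown away; professional software lives at the other end of the scale, in codebases of thousands or millions of lines.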

At this scale, the choice of any one programming language is not the most important detail for project planning. Of course it is helpful to choose a language that works reliably, and is a good fit for the kind of problem you have, but those are specialist issues in computer science. More important to the working programmer is the kind of organisation needed for a large team to create anything so complicated. This distinction between the details of writing pieces of code in a particular language, versus contributing to the collective enterprise of a massive software project, is described by researchers as programming in the small versus programming in the large[2].

It is this distinction that underlies much of the scepticism about improving access to programming languages. Some popular programming languages, such as the spreadsheet, or the Scratch system widely used as a first programming language for children[3], could in principle be used to build large and complex systems. But these languages also include design features that would quickly become painful in a very large project, just as a champion cyclist would compete very badly on a child’s bike with training wheels. As these accessible languages have become better known, they have created the inadvertent impression that ordinary people (who do find such tools straightforward) would never be able to handle the scale and complexity of problems that are the business of a professional programmer.

It’s true that society benefits from professional specialisation, but it’s a mistake to conclude that regular people should be excluded from instructing a computer in any substantive way. Including them will certainly require the invention of new kinds of programming languages, perhaps as radically different as the spreadsheet and the GUI were from the familiar command-line codes that preceded them. We need new inventions that support serious programming, but they won’t look like today’s programming languages.

In order to imagine that future, we need to ask more fundamental questions: What is it like to be a programmer today? What even is programming? In the rest of this chapter, I’m going to describe some of these human experiences of programming, drawing on my own perspective writing hundreds of thousands of lines of code, as well as the experiences of my students and of many professional programmers who I have worked with in companies around the world.

What is programming, really?

The first thing to address is that professional programmers don’t always work the way programming textbooks say they should. Textbooks are written by computer scientists who optimistically hope for a better future, a time when programming will be done in a more disciplined and rigorous way than it is today. One school of thought is that every program should be mathematically and logically verified to only do the correct thing in each situation[4]. This is an admirable objective, and sometimes possible, in cases where there is a logical and mathematical specification of the right thing to do. However, in areas of human life such as law, business, medicine and politics, where mathematics must be accompanied by interpretation, the purely mathematical/logical approach to programming has limited relevance … unless you can persuade the lawyers, politicians and so on that their existing forms of reasoning should be replaced by mathematical ones (which would itself be a kind of surrender to computers - a surrender that not even mathematicians themselves have made[5]).

The other school of programming education is described as software engineering, and argues that good project management is the key to building a successful system. Because I was trained as an engineer myself (although in electrical engineering, not computers), and have spent years building practical working systems for different kinds of clients, I have a lot of sympathy for this view, and have even taught it myself. The problem is that, while engineering project management is valuable in the right settings, it also turns out to be unhelpful in others. I’ve known engineers who try to project-engineer their families, or their social clubs, or even their hobbies. When done persistently, this isn’t popular, and seems completely ineffective in fields like law and politics. As a result, good engineers can find it less confrontational to restrict their enthusiasm for project management to their own hobbies, whether those involve building model railways or cataloguing a wine cellar.

The strange thing is that, although I do know how professional software project management works, and can even teach it, I don't choose to organise my own life in this way. I don’t have a model railway, and my “wine cellar” (a couple of dozen bottles under the stairs) is not very well organised. Even worse, my last really big software project did not use professional methods at all, because I started work on it without knowing what I wanted to achieve (in project management terms, I did not have a specification - a cardinal sin for a professional engineer). Perhaps surprisingly, a lot of the software that gets written in computer science departments also doesn’t follow the standard rules of software engineering. It is quite common for academic researchers not to know in advance what the result of their project will be, and to approach programming as an experiment rather than a contractual construction project.

This is as it should be, of course! We want researchers to do original things rather than simply follow someone else’s instructions. However, university students quickly recognise the double standard of computer science professors who tell their students “do as I say, not as I do”. This can cause confusion when programming languages that are popular in universities for creatively “hacking” experimental investigations of an under-specified problem get adopted in the outside world for serious engineering. Although not so dangerous, the reverse situation can also be wasteful, when academic projects are needlessly constrained because somebody has attempted to apply an inappropriate business management process to creative innovation.

Power to the people?

These would be reasons for scepticism if this book were seen as advocating that more people should use programming tools in order to behave more like professional programmers - an attitude that has been associated with advocacy of “computational thinking”, or the arrogant suggestion that every subject would benefit from being more like computer science[6]. The textbooks and training courses used to educate professional programmers recommend principles of mathematical verification and engineering project management that can be useful in the right place, but probably have limited relevance to ordinary people, who do not structure their lives as if they were mathematical or engineering problems.

Nevertheless, I think there is scope for regular people to create more powerful software, including some elements of “programming-in-the-large”, without forcing them to become engineers or mathematicians. This view is based on analysis of the experiences reported by people who are expert programmers, but not academic computer scientists or management professors. One of my favourite books is Code Complete by the philosophically trained software engineer Steve McConnell[7], which gave me the first convincing description I had read of what my own life as a professional programmer was really like. After another 20 years studying programmers, and thinking about the core messages of Code Complete, I think there are a couple of things that define the essence of programming as a practice that would allow many more people to instruct computers.

The first of these practices is naming, and the second is testing. Both are part of a way of working that many people describe as a “craft” of programming, rather than a science. Those who advocate this way of thinking about programming refer to professions such as carpentry, weaving or even jazz improvisation, to explain what it is really like to work with software as a craft material.

The abstract craft of making code

It seems slightly odd, to many people, to describe software as a “material”, when it is also in many ways immaterial[8]. But although code is just bits sent down a cable or stored on a disk drive, the day-to-day experience of a programmer is that code seems to resist what you want to do with it. Writing software involves constant small adjustments, as you type one thing, find out that it doesn’t work, try something else, realise that the whole idea was wrong, take a different approach, and so on. Sometimes coding seems more like carving a piece of wood, where the chisel cuts smoothly for a while before suddenly hitting a hidden knot or a change in the grain, jerking and splitting in an unexpected direction. Traditional craft skills involve recognising and responding to the characteristics of the material being worked, in what design philosopher Donald Schön called a “conversation with the material”[9] - an insight that echoes observations of scientific and technical work practices by thinkers including Richard Sennett and Andrew Pickering[10].

As the programmer writes, the code “material” speaks back through various kinds of testing, as software development tools check the consistency of one part of the code against another, or run specific pieces of local code to confirm that they have the intended effects. The process of testing often results in surprises, and in changes to the original plan, especially when working on a problem (such as a research problem) that has not been fully specified in advance.

Much of the craft of programming relates to testing, and professional programmers spend far more time testing and adjusting than they do writing new code. But perhaps the most interesting implication of testing is its relationship to specification - the process in which the programmer thinks about and abstractly describes what they would like the software to do - which is the central concern of this book.

An essential element of coding is deciding what to call things. Day-to-day programming involves continually inventing names for local pieces of data storage, screen windows, network messages, and even parts of the algorithm itself. Steve McConnell dedicates a whole chapter of Code Complete to choosing good names, and emphasises the importance of names at many stages of the coding process. Inventing a name for something is an exercise in philosophical abstraction. Two or three words might summarise hundreds of lines of code. When chosen well, this name becomes the definition of what that code should do. 
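To give a flavour of what this means in practice (the example is my own invention, not McConnell’s), here is a hypothetical fragment of Python in which a two-word name stands in for everything the code does:

```python
# A well-chosen name lets a reader trust the code without looking inside it.
def overdue_invoices(invoices, today):
    """Return the invoices that are unpaid and past their due date."""
    return [inv for inv in invoices if not inv.paid and inv.due_date < today]

# The same code behind a poorly chosen name tells the reader nothing at all:
def process(data, d):
    return [x for x in data if not x.paid and x.due_date < d]
```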

Programmers constantly reuse libraries of code created by other programmers, where every element has a function determined by its name. The set of names becomes a language in itself, a way of thinking about the purpose and potential of the library. In a sense, every programmer of complex systems is defining their own language, a vocabulary that will allow their solution to be elegantly expressed.

When building complex systems, a new set of names might be needed as building blocks to construct more complex functions, which can then be combined with others, and so on. At each stage, testing must confirm that every function does what the name says, and that sentences combining these words work as expected. When programming in an exploratory way, it is not unusual to realise that the name initially chosen has not properly captured the abstract intention, and a different name would have been better. Expert programmers spend much of their time “refactoring” to change earlier names, and redefine the logical relationships between them.
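Continuing that hypothetical example, the following sketch shows what a renaming refactor might look like. Suppose exploratory testing reveals that the business actually allows a grace period before chasing payment: the honest response is a new name that captures the revised intention, rather than quietly changing the behaviour hidden behind the old one (the seven-day rule is an assumption invented purely for illustration):

```python
import datetime

GRACE_PERIOD = datetime.timedelta(days=7)   # assumed business rule, for illustration only

def invoices_needing_reminder(invoices, today):
    """Return unpaid invoices that are more than a week past their due date."""
    return [inv for inv in invoices
            if not inv.paid and inv.due_date + GRACE_PERIOD < today]

# Every caller of the old overdue_invoices name now has to be found, updated and
# re-tested - this is the routine work of refactoring.
```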

The craft practices I have described here apply to all conventional programming languages, and will be instantly recognisable to expert programmers. It is interesting to note that neither of these craft elements - naming and testing - is foregrounded by the end-user tools I have described so far, such as spreadsheets, GUIs, or automated macro facilities. As a result, many ordinary people use programmable tools without considering the most basic requirements for serious coding[11].

The question for the agenda of this book is whether those capabilities could ever be provided for people who are not programmers. There seems no reason to forbid this on the grounds that ordinary people would be too stupid to use such capabilities properly. Ordinary people are able to create names when necessary, whether for a music playlist, an online chat group or a document folder. Ordinary people are also perfectly accustomed to trying things out when doing routine craft work - whether tugging on a button that has just been sewn on, or placing the first few books on a shelf recently screwed to the wall. It is the essential interaction between naming and testing that is not well supported by current end-user software tools, and not supported at all by AI systems. Today’s “intelligent” systems based on LLMs do not allow users to create new words within the model, or to test the consequences of doing so, even though both have long been routine in programming languages.

Attention investment in the crafts of naming and testing

Previous chapters have emphasised the central concept of attention investment - the trade-off by which simple automation might save mental effort in the long term, but requires some initial thought to get right. Depending on individual preferences, some people might prefer never to think in advance, while others might over-think, wasting time automating a task they will never need to do again. Effective tools can help people make this decision better, and mixed-initiative strategies can use machine learning methods to suggest where further investment of attention would be beneficial.

The same principles of attention investment apply to the craft practices of naming and testing that are central to programming in the large. A name is an abstraction that requires thinking in advance to get it right, but with the pay-off that a single word can then be used in place of lengthy explanations. Testing involves abstract thinking about the ways that a named component might behave, and expressing these in terms of test cases that relate to possible interpretations of the chosen name. Testing, as every programmer understands, is an insurance investment against the work that would have to be done fixing things up if the program later goes wrong in ways they hadn’t anticipated.
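To continue the earlier hypothetical example, a pair of test cases shows how testing forces abstract questions about what a chosen name should mean - for instance, whether an invoice that falls due today already counts as “overdue”. The sketch assumes the overdue_invoices function defined earlier is in the same file, and can be run with a test framework such as pytest:

```python
import datetime
from dataclasses import dataclass

@dataclass
class Invoice:
    due_date: datetime.date
    paid: bool = False

def test_unpaid_invoice_past_its_due_date_is_overdue():
    today = datetime.date(2022, 9, 20)
    late = Invoice(due_date=datetime.date(2022, 9, 1))
    assert overdue_invoices([late], today) == [late]

def test_invoice_due_today_is_not_yet_overdue():
    today = datetime.date(2022, 9, 20)
    due_today = Invoice(due_date=today)
    assert overdue_invoices([due_today], today) == []
```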

Margaret Burnett at Oregon State University leads a programme of research into end-user software engineering that applies the principles of attention investment to help reduce the frequency of errors in spreadsheets. Their WYSIWYT (What You See Is What You Test) approach helps spreadsheet users think about what value they expect in a given cell, and analyses how formulas in other cells might cause it to be incorrect[12]. The team knew from research that very few spreadsheet users test their spreadsheets systematically, reflecting a specific attention investment decision: users prefer to make minimal investment in advance, accepting possible losses in future. Burnett’s team apply a design strategy called Surprise-Explain-Reward, which has some similarity to mixed-initiative interaction, but with an additional dose of educational theory[13]. Their approach monitors the spreadsheet in the background, much as with the background processing of Allen Cypher’s Eager system, watching for places where an error might occur. This is presented in a way that stimulates curiosity rather than unwelcome interruption, inviting the user to ask for an explanation. When a user chooses to engage, they are rewarded with evidence of the attention costs that have just been saved, cementing that knowledge for future use.

Similar research can assist users to invest attention in naming. Advait Sarkar and colleagues at Microsoft describe a study of the LAMBDA function in Excel, which allows users to define a new function within the spreadsheet and assign a name to it[14]. The basic spreadsheet model, giving fundamental priority to showing the user’s own data, does not typically encourage the definition of abstract names, whether for formulas, patches of the grid, or even naming rows and columns as has been done by some other spreadsheet products in the past. This team’s analysis of discussion on Excel user forums showed that, while some users recognised and welcomed the potential of assigning names with the more sophisticated LAMBDA function, others were concerned by the danger that a spreadsheet created by one person might use a name in a way that is not properly understood by another. The study concluded that this sharing and reuse of named functionality between people, and the responsibilities it introduces for consistency and testing, actually challenged the professional identity of people who saw themselves primarily as non-technical business specialists, when they found aspects of their work starting to resemble that of programmers.

It is easy to underestimate the challenge of naming, especially in relation to the standard ways of teaching programming, where the name is either unimportant (in a purely mathematical approach all names are equivalent), or has already been decided by someone else (in the classic “waterfall” model of programming, the programmer was often given a detailed specification including names already chosen by a systems analyst). Choosing a name well requires careful thought, as anybody who has named a baby will know! Perhaps one of the greatest differences between the professional and casual programmer, as recognised by the forum users whose comments Sarkar studied, is that a professional programmer far more often aims to write pieces of code that can be reused, whether by themselves or by others. Code reuse is not possible without good names. But for a casual programmer who may write a single line of code to solve a specific job before throwing it away, there is no need to think of this task in a more abstract way, to apply it to other situations, to give it a name, or to test it for any other reason than ensuring that the immediate job is now done.

There seems to be a design opportunity here, to apply the principles of attention investment and mixed-initiative interaction, but with a focus on the craft practice of naming. For many routine programming tasks, machine learning algorithms could easily be used to propose an appropriate name for the algorithm being applied, in the same way as auto-complete suggestions might offer a shortcut to a thesaurus. This automatically suggested name may or may not correspond to what the user thought they were trying to achieve. If the name proposed is obviously inconsistent with the user’s intention, this could be the basis for a Surprise-Explain-Reward interaction. In response, the user might then choose a better name, which could in turn prompt recommended improvements to the code - perhaps with code retrieved from a library, perhaps from elsewhere in the organisation, or perhaps from online resources and repositories like GitHub and Stack Overflow.
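As a toy illustration of this idea (entirely my own sketch, not an existing product or research system), a crude rule-based name suggester might look at what a snippet of code does and propose a candidate name for the user to accept, reject or improve; a real system would presumably learn such suggestions from large code repositories rather than a handful of hand-written rules:

```python
def suggest_name(source: str) -> str:
    """Propose a candidate name for a snippet of code, using crude keyword rules."""
    hints = {
        "sum(": "total_of_",
        "sorted(": "sort_",
        "open(": "read_",
        ".rename(": "rename_",
    }
    for fragment, prefix in hints.items():
        if fragment in source:
            return prefix + "items"      # placeholder noun - the user should refine it
    return "do_something"                # a deliberately bad name, inviting a better one

# If the suggestion is obviously wrong about the user's intention, that surprise is the
# opening for a Surprise-Explain-Reward style conversation about what the code is for.
print(suggest_name("total = sum(line.amount for line in lines)"))   # -> total_of_items
```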

Applying principles of attention investment to these programming-in-the-large issues offers a route toward collaboration on, and composition of, Moral Codes. It might appear that the assumptions under which only companies or governments create code, and other people simply follow rules, are being disrupted by AI that seems to make its own rules. But in reality these systems are simply extracting value while consuming people’s attention. The ability to re-name and re-construct software, which is routine to professional programmers, has been removed from many AI-based products. Using machine learning algorithms to support naming and testing, rather than preventing abstraction with flat visual design, offers a far more attractive future.



[1] Geoff Cox and Alex McLean, Speaking code: Coding as aesthetic and political expression. (Cambridge, MA: MIT Press, 2012).

[2] Frank DeRemer and Hans H. Kron. "Programming-in-the-large versus programming-in-the-small." IEEE Transactions on Software Engineering 2 (1976): 80-86.

[3] Resnick, Maloney, et al., "Scratch: programming for all."

[4] Tony Hoare, "The verifying compiler: A grand challenge for computing research," Journal of the ACM (JACM) 50, no. 1 (2003): 63-69.

[5] Donald MacKenzie, Mechanizing proof: computing, risk, and trust. (Cambridge, MA: MIT Press, 2004).

[6] Peter J. Denning, “Remaining trouble spots with computational thinking,” Communications of the ACM (CACM) 60, no. 6 (June 2017): 33–39. https://doi.org/10.1145/2998438

[7] Steve McConnell, Code Complete: A practical handbook of software construction. (Redmond, WA: Microsoft Press, 1993).

[8] Malcolm McCullough, Abstracting craft: The practiced digital hand. (Cambridge, MA: MIT Press, 1998); Rikard Lindell, "Crafting interaction: The epistemology of modern programming," Personal and Ubiquitous Computing 18, no. 3 (2014): 613-624; Shad Gross, Jeffrey Bardzell, and Shaowen Bardzell, "Structures, forms, and stuff: the materiality and medium of interaction," Personal and Ubiquitous Computing 18, no. 3 (2014): 637-649.

[9] Donald Schön and John Bennett, "Reflective conversation with materials," in Bringing Design to Software, ed. Terry Winograd. (Reading, MA: Addison-Wesley, 1996), 171-189. https://hci.stanford.edu/publications/bds/9-schon.html

[10] Richard Sennett, The craftsman. (New Haven, CT: Yale University Press, 2008); Andrew Pickering, The mangle of practice: Time, agency, and science. (Chicago, IL: University of Chicago Press, 1995).

[11] It’s important to note here that I am describing the typical uses of these tools, not the potential for them to be used in other ways. The whole field of End-User Software Engineering is dedicated to helping regular users gain access to more of the capabilities enjoyed by professional programmers. Where commercial products such as Microsoft’s Excel span the boundary between casual, informal use on one hand, and large-scale business-critical applications on the other, they must support software engineering processes to some extent. I have worked for many years with teams at Microsoft Research whose goal is to enhance Excel in these ways, including my ex-student and colleague Advait Sarkar at Microsoft Research Cambridge, who has offered many insightful comments on earlier drafts of this book. See A. Ko et al. "The state of the art in end-user software engineering;" Simon Peyton Jones, Alan F. Blackwell, and Margaret Burnett, "A user-centred approach to functions in Excel," in Proceedings of the eighth ACM SIGPLAN international conference on Functional programming (2003), 165-176; Advait Sarkar, Andrew D. Gordon, Simon Peyton Jones, and Neil Toronto, "Calculation view: multiple-representation editing in spreadsheets," in Proceedings of the 2018 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) (2018), 85-93.

[12] Gregg Rothermel, Lixin Li, Christopher DuPuis, and Margaret Burnett, "What you see is what you test: A methodology for testing form-based visual programs," in Proceedings of the 20th international conference on software engineering (1998), 198-207.

[13] Aaron Wilson, Margaret Burnett, Laura Beckwith, Orion Granatir, Ledah Casburn, Curtis Cook, Mike Durham, and Gregg Rothermel, "Harnessing curiosity to increase correctness in end-user programming," in Proceedings of the SIGCHI conference on Human factors in computing systems (2003), 305-312.

[14] Advait Sarkar, Sruti Srinivasa Ragavan, Jack Williams, and Andrew D. Gordon, "End-user encounters with lambda abstraction in spreadsheets: Apollo’s bow or Achilles’ heel?," in Proceedings of the IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC) (2022), 1-11.
