
Chapter 7: Lessons from Smalltalk – moral code before machine learning

Published on Sep 20, 2022

Alan Kay’s Smalltalk system, which I introduced in the last chapter, was a huge project that pushed the bounds of commercial viability in pursuit of a distinctive vision. Kay’s vision combined the world-building goals of computer modelling, as in SIMULA, with Sutherland’s ambition for Sketchpad: “understanding of processes … which can be described with pictures”. Combining these two things results in structured descriptions of the world, represented through a system of visual notation, as a space of creative language and code.

These attributes were already valuable before the machine learning era, and Smalltalk is one of the most successful examples of the agenda I describe as Moral Codes. Smalltalk was explicitly designed to provide More Open Representations, Access to Learning, and Control Over Digital Expression. The most important aspect was the way Smalltalk supported freedom of expression and exploration, rather than imposing rules of what the user ought to do. In contrast, many software applications are designed to make the user follow a specific sequence of steps, or a user journey toward a predefined goal. Even programmers find it easier to code an existing algorithm than to specify something that is poorly defined. If software needs to be modified along the way, setting out in a different direction can feel like wading through treacle. Making complicated software work differently is hard for a programmer, and almost impossible for regular users. My PhD supervisor Thomas Green, developing the field of cognitive ergonomics, called this treacle-like experience viscosity, which he identified as “a sticky problem for HCI”[1].

Sadly, much of the progress in computer science research and software product development, in the 40 years after Smalltalk, has failed to properly understand its lessons. For example, poor understanding of the different roles played by pictorial and textual elements of the graphics display led to the pursuit of visual programming languages that described algorithms in purely pictorial form, with no text at all. The research agenda was driven by excitement about the creative possibilities of the graphical display, and also by the apparent simplicity of operating a computer by direct manipulation[2].

The theoretical justifications for these design explorations were sometimes naïve. A typical research paper might observe that “a picture is worth a thousand words”, using the proverb to justify replacing word-based source code with a diagrammatic alternative. The arguments did not hold water, even from a common sense perspective[3]. If pictures are always superior to words, why do we choose to write legal contracts in words rather than pictures? Why do we speak to each other in words, rather than just drawing what we mean? Even the proverb was dubious. It wasn’t ancient Chinese wisdom, as the scientists believed, but was invented as a promotional slogan by a San Francisco advertising salesman in the 1920s[4]. The fallacy of confusing illustration with abstraction is not a particularly new one – in Gulliver’s Travels, Jonathan Swift made fun of the philosophers of Lagado who tried to communicate unambiguously by showing each other objects instead of using words.

There certainly are cognitive advantages associated with pictorial and diagrammatic notations, and the Smalltalk user interface benefits from these. However, all such design choices represent trade-offs. Pictures are better than text for some purposes, and worse for others, so understanding the nature of those advantages is crucial. A key principle relates to the “frame problem” which was famous in GOFAI planning systems – how can a reasoning agent identify just the part of the world within which relevant causes and effects occur? Simple planning involves reasoning about the effects of your own actions, but how can you be sure that other things will not change at the same time? Pictures on the computer screen help with this problem by presenting the illusion of a closed and stable world, containing bounded visible objects that, like physical objects in the real world, do not change or move around by themselves. This allows the user to think about the consequences of their actions in relation to the visual persistence of what they can see.

Cognitive scientists Keith Stenning and Jon Oberlander explained that these benefits come when a notation is designed for limited abstraction[5]. Diagrams and pictures are different from abstract notations like symbolic algebra, where the letter “X” might represent anything at all. At least in algebra, X is going to be a number, but in program code X might be anything in the world, possibly something mentioned in some other piece of source code that could be hundreds of pages away. In such unlimited abstraction representational systems, it is difficult for users to reason about the consequences of their actions. I would be worried if a visitor wanted to light a candle, but offered the abstract formulation “I’m thinking about a thing in your house that I call X - can I set fire to it?”. Some computer commands seem almost as dangerous, for example the command ‘delete’ followed by an abstract name. At this point, a little inefficiency might be a good thing, causing the user to hesitate and think again. Thomas Green’s sticky problem of viscosity is not always undesirable - sometimes we need a little treacle.
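As a concrete (and entirely hypothetical) sketch of this trade-off, the following Python fragment contrasts an abstract “delete X” command with one that adds a little deliberate viscosity, by showing the user what the abstract name actually denotes before acting. All the names here are illustrative inventions, not any real system’s commands:

```python
# Hypothetical sketch: unlimited vs. limited abstraction in a command.
# The name "X" could refer to anything at all - the user cannot easily
# reason about the consequences of deleting it.

things = {"X": "holiday photos", "Y": "shopping list"}

def delete_unchecked(name):
    """Unlimited abstraction: act on whatever "name" happens to denote."""
    del things[name]

def delete_with_confirmation(name, confirm):
    """A little treacle: show what the name denotes, and require
    explicit confirmation before acting on it."""
    description = things[name]
    if not confirm(f"Really delete {description!r}?"):
        return False
    del things[name]
    return True

# The hesitant user, shown what "X" really is, declines - nothing is lost.
kept = delete_with_confirmation("X", confirm=lambda prompt: False)
print(kept, "X" in things)  # False True
```

The confirmation step is exactly the kind of inefficiency that slows the user down at the moment when hesitation is most valuable.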

The real achievement of the Smalltalk world is the way it provided a limited abstraction representational system: integrating abstract notation with a visual representation, in a way that helps users to reason about the effects of their actions, and allows a degree of freedom within practical constraints.

The business and pleasure of Smalltalk

The Smalltalk environment, with its potential for radically rethinking the relationship between notation and abstraction, had many consequences that went beyond the original ambitions of the project. The Xerox Star workstation, often described as the first commercial computer with a graphical user interface, was too expensive to achieve the widespread success eventually seen in products from Apple and others. But it certainly established the basic conventions of icon-based direct manipulation that delivered the cognitive advantages of limited abstraction representational systems – the GUI as we now call it.

Similarly, Kay’s Smalltalk popularised many coding techniques still used by programmers today. It aspired to a kind of simplicity and creativity that could be used by children of all ages, although as it turned out, Smalltalk was adopted mainly by professional programmers. School coding lessons continued for years to use programming languages far more basic than Smalltalk, like Seymour Papert’s LOGO, or one that was literally called “BASIC.” Later projects led by Kay and his colleagues did eventually extend the Smalltalk vision to children, through the Squeak dialect of Smalltalk, the eToys simulations programmed in Squeak, and then the Scratch language, now widely used as a first introduction to programming in many countries[6]. There are interesting design lessons from these descendants of Smalltalk, including the Dynamicland project, popularised in video demonstrations by Kay’s colleague Bret Victor. But before considering the future design opportunities, it is useful to consider how professional programmers started to apply Smalltalk principles when it first became popular.

It turned out that the Smalltalk language, with its approach to representing real world concepts and entities as “objects” following the early design insights of Doug Ross, Ivan Sutherland, Dahl and Nygaard and others, was an effective tool for many kinds of programming. Not just the development of new systems with graphical user interfaces, where the “objects” of the programming language corresponded to the pictorial world of direct manipulation, but also any system that needed to record and operate on defined entities – whether customers, bank accounts, or parts of a car – having features and relationships between them.
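The idea can be sketched in a few lines of code. This hypothetical Python fragment (illustrative names, not Smalltalk syntax) shows how “objects” bundle the features of an entity together with its relationships to other entities:

```python
# Hypothetical sketch of object-oriented modelling: entities such as
# customers and bank accounts become objects, with features (attributes)
# and relationships (references to other objects).

class Customer:
    def __init__(self, name):
        self.name = name
        self.accounts = []          # relationship: a customer holds accounts

class Account:
    def __init__(self, owner, balance=0):
        self.owner = owner          # relationship back to the customer
        self.balance = balance      # feature of this particular account
        owner.accounts.append(self)

    def deposit(self, amount):
        # In Smalltalk terms, the object responds to a message.
        self.balance += amount

alice = Customer("Alice")
savings = Account(alice, balance=100)
savings.deposit(50)
print(alice.name, savings.balance)  # Alice 150
```

The same pattern works whether the entities are customers, bank accounts, or parts of a car, which is why the approach travelled so easily from graphical user interfaces into regular business programming.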

As personal computers became more powerful, versions of Smalltalk became attractive tools for regular business programming, leading to later descendants and hybrids such as the now-familiar C++, Java, C# and Python. But whereas the early versions of Smalltalk had been created by expert computer scientists in the research environment of Xerox PARC, the business applications of “object-oriented software development”, as it has become known, introduced significant new challenges.

The creative philosophy underlying the Smalltalk system was that all aspects of the system would be created in Smalltalk itself, so that anybody working in Smalltalk could make creative changes to the tools they were using. A programmer who was new to Smalltalk, if they saw a cool trick in the way that one of the icons of the Smalltalk editor worked, could simply call up the source code to that object, and use this as a template for their own new project. Being a Smalltalk programmer meant living in the world of Smalltalk, and potentially changing the language around you. The same philosophy has had radical consequences elsewhere, for example in the early expansion of the World Wide Web, when every creative innovation in web page design could immediately be understood, replicated or enhanced simply by reading the HTML source code of a page you admired, and adapting it for yourself.

The agile design philosophy – making code that can change

As a creative philosophy for children, artists and researchers, this potential for extension and evolution of the computational world you are “living” in made good sense. But how could such freedom possibly be effective in the world of business or engineering? Does every business need creative freedom and flexibility, or should it perhaps apply a judicious amount of viscosity, reflecting cautious review procedures? For companies who saw the technical potential of the Smalltalk approach, but struggled to see how it might be practical, there was an urgent need for expert consultants to help translate the opportunity of the Smalltalk model into engineering and business reality.

Smalltalk developers Kent Beck and Ward Cunningham addressed this problem in ways that have had impacts well beyond the field of software engineering. Their business response to the Smalltalk philosophy of creative exploration has now become a mainstream approach called agile programming, in which programmers, their customers and other stakeholders work together to continuously construct, review and evolve software that will eventually meet their needs. Although obviously inspired by Smalltalk, agile methods such as Extreme Programming (XP)[7], and the Scrum method for managing a project with evolving requirements[8], have now become major businesses in themselves, sometimes completely divorced from object-oriented programming, or even from the software industry.

The Agile development philosophy is radically different in its use of notation, by comparison to earlier approaches to software development. In my early career, a software project started with a contract to deliver a particular application for a fixed price. Because the price was fixed, there was a limit to how much time programmers could work on it – sometimes estimated by the number of “function points” like individual operations or commands. But the system didn’t exist yet, so the contract was often vague. It didn’t list every function. In fact, writing down and defining all those details would be a big job in itself, sometimes requiring months of specialist work. So the first task after signing the contract was to analyse and document every feature that would be needed, for a system that nobody had yet seen. Once that list was ready, the programmers started work. Unfortunately, this often turned out badly when customers discovered after the system was delivered that they had forgotten to mention something important. Some even realised, when they saw the working software, that a completely different solution would have been better. Customers demanded changes, so that they would get a product they were happier with. Programmers complained that they had spent all their budget and had no time to make further changes. Lawyers on one side would argue that the customer had signed the contract specification, so must pay for what they had commissioned. Meanwhile, those on the other side would say it was unreasonable to expect a customer to read and understand detailed technical documents, especially if there was so much detail that it felt like they were doing the coding work themselves!

These challenges are unavoidable when creating a complex product with no prior example of what it should look like. The situation is related to the problems discussed in Chapter 5 of specifying the user goal in advance, which is an unhelpful approach to any wicked problem that cannot be precisely defined. Attempts to regulate project management in the first decades of the software industry led to the “waterfall model,” where each phase of the project delivers an increasingly detailed specification document that must be signed off before sending it downstream to the next phase. Waterfall projects were expensive and bureaucratic, often involving special notational tools to document the designs at different levels of detail, with flow charts, database layouts, screen mockups and other diagrams to help customers and programmers build a consensus. Despite all these tools, many waterfall projects resulted in notorious budget overruns, failures to deliver, and litigation on all sides. A pragmatic compromise advocated a “spiral model”, where initial versions of each document would be used to create an early prototype, which customers could experiment with in order to adjust their requirements, before revising the documents and doing more development, perhaps with several cycles until the development contract finally ended.

When I was a young software engineer in 1980s New Zealand, the creative world of the Smalltalk researchers at Xerox PARC seemed utopian. The August 1981 issue of Byte magazine was dedicated to Smalltalk, and I read it from cover to cover. At over 400 pages, this collection of technical and philosophical contributions from the Smalltalk team was the size of a bible, and seemed to me nearly as visionary. In contrast to the Smalltalk vision, my own day-to-day work, in that year and also for years to come, followed the waterfall processes of drawing diagrams, writing specifications and negotiating with clients, rather than living in a flexible abstract world of representations that could be freely explored and creatively modified.

Wikipedia – when knowledge becomes agile

The Smalltalk developers on the West Coast of the USA who brought the Smalltalk philosophy into their software consulting work were uncomfortably aware of the ways that accepted practices of waterfall-style project management would hobble the creative potential of the Smalltalk tools. What is the point of a flexible and playful programmable environment, if you spend months documenting and diagramming in advance every detail of the code you are going to write?

This is a fundamental problem of notation – if one person is writing software for another, how can they agree in detail on the goals? And if people struggle to agree with each other, can we really expect an AI to do better? Decades of research have been dedicated to the invention and refinement of different kinds of diagrammatic modelling languages, each intended to more clearly communicate functions of the system that might be important to a user, while also specifying unambiguously what the machine should do. Some of those diagram styles are more suited to waterfall project management, and some more suited to iteration in a spiral. But the philosophy of Agile programming was more radical – somehow, customers had to be welcomed into the Smalltalk world of changeable software, even working alongside the programmers as they explored together what the system might be able to do.

Ward Cunningham responded to this challenge with a completely different notational approach. Abandoning the search for intuitive yet formal diagrams, he decided to treat specification text like pieces of Smalltalk code – separate objects, linked together by their logical relationships, arranged into an abstract “world” that you could live in while writing them, so that anyone could take a look and make creative changes, just like in Smalltalk. Coming soon after Berners-Lee’s World-Wide Web, this simple text editor ran inside the web browser, so that anybody reading a page could quickly change it if they saw something that needed to be different. The original WikiWikiWeb, named for the Hawaiian expression meaning “quick”, can still be found on the web server at Cunningham’s consultancy company – though it now has such classic status that it is more a museum exhibit than an active management tool. Perhaps the closest inheritor is the Markdown format familiar to GitHub, Stack Overflow and Reddit contributors.
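The original wiki’s notational trick was remarkably small: any CamelCase word in the page text automatically became a link to the page of that name. A rough Python sketch of that convention (the function names here are hypothetical) might look like this:

```python
import re

# Hypothetical sketch of the WikiWikiWeb linking convention: any
# CamelCase word (two or more capitalised parts) becomes a hyperlink
# to the wiki page of that name.
CAMEL = re.compile(r"\b([A-Z][a-z]+(?:[A-Z][a-z]+)+)\b")

def wikify(text):
    """Replace CamelCase words with HTML links to wiki pages."""
    return CAMEL.sub(r'<a href="/wiki/\1">\1</a>', text)

print(wikify("See WardCunningham for the FrontPage history."))
```

No markup to learn, no link syntax to mistype: the notation for making a connection was simply the name of the thing being connected to, which is a large part of why the design spread so quickly.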

Decades later, this Wiki philosophy has transformed human knowledge. As the most-consulted reference work in the world, Wikipedia is fundamentally different from the products of the era when encyclopaedia companies sold knowledge compiled and bureaucratically certified by committees of experts. Kids of my generation dreamed of being able to afford an Encyclopaedia Britannica, and I spent hours in the corner of my school library where it was kept. Who could imagine, 50 years ago, that the world’s best encyclopaedia would be created by volunteers, inventing their own systems of debate, reward and recognition, all documented and managed within the wiki itself? If you haven’t experienced making an edit to Wikipedia, I strongly recommend it. Find a page that you know something about, look at its change history and local guidance, and make a change yourself. Anyone can do it, and the process of what happens next will provide genuine insight into what Wikipedia really is (and indeed, into one of the main objectives of this book, which is to celebrate and enable Creative Opportunities for Digital Expression).

Wikipedia demonstrates the creative and intellectual consequences of object-oriented software development far beyond the world of conventional programming languages[9]. It is also an example of the philosophy that programmers can live among their own tools, modifying and adapting these as they work. Beck and Cunningham’s experiences in bringing that agile philosophy to the bureaucratic world of business and organisations has resulted in new hybrids between traditional programming and the notational conventions of hypertext, business documents and organisational specifications.

It’s worth noting at this point that the WikiWikiWeb, like the World-Wide Web itself, succeeded in particular because it was so simple. I’ve used public wikis a lot in my own work at the university, to encourage students, collaborators, and anyone else who might be interested to see research as work in progress rather than being set in stone. Unfortunately the simple markup language and editing commands do make all this material look just a little bit amateurish and home-brewed – just what it is! But although I’ve described wiki pages as a “notation” for software engineering, they are a lot less sophisticated than the many kinds of diagram editor, visual programming languages, and other special tools that have been created by the research community. There is an attention investment trade-off here between abstract expressive power on one hand and usability on the other, which I hope is starting to seem familiar from earlier chapters of this book.

Living in abstract worlds – the architectural patterns of moral codes

We can get more insight into the nature of the trade-off by drawing on yet another contribution arising from the work of Kent Beck, Ward Cunningham and their friends. You might think that the invention of agile project management, and of the technology behind Wikipedia, would be sufficient laurels for these innovators to rest on (not to mention the invention of CRC cards, which I’m not going to discuss any further, but are still one of the best modelling languages for conceptual design of object-oriented systems in my opinion). These are impressive and talented leaders, although I’m sure they would say that it is the Smalltalk philosophy itself (and before that, SIMULA, Sketchpad, Dynabook and so on) that showed the potential for treating a software tool environment as an abstract place where the programmer could “live” and work, inspecting and modifying the structures around them.

In 1987, Beck and Cunningham presented a paper at the international conference of object-oriented programming specialists, in which they reflected on their experiences from the perspective of architectural philosopher Christopher Alexander[10]. Alexander had studied as a mathematician, then as an architect, and then as an anthropologist after he came to realise that the structure of buildings is ultimately determined by the needs and habits of humans, rather than simply mechanical solutions to a functional and aesthetic design brief. Drawing on his mathematical background, Alexander developed the abstract concept of a “pattern language”, which systematically describes the ways in which towns, buildings and rooms are created and evolve to accommodate human needs.

Beck and Cunningham saw that this kind of abstract specification of lived experiences could also be used to define the properties of the software environments where they and their colleagues “lived,” in an abstractly notated computational world. Their observations have been so compelling, and the significance of these insights to the work of programmers so clear, that pattern languages are now far more often used in the work of software engineers than they are by architects. Although Alexander’s work is respected among academic architects, it has not become the standard approach to architectural design that he originally imagined. In contrast, software design patterns have become a central element of university teaching and professional practice for all programmers using object-oriented tools.

Despite the popularity of software patterns (and the research communities who continue to develop, promote and refine them), this popular reception has actually shifted away from Beck and Cunningham’s original insight, and indeed from the insights that we could have taken from Christopher Alexander, on how to think critically about structuring the world we live in. To genuinely realise the potential of the Smalltalk philosophy, we need to return to the concept of pattern languages, taking them outside the simple mechanics of software construction where they have become so successful, and think again about how they apply to all these other tools and technical opportunities – including design notations, wikis and other documents, graphical user interfaces, and even social structures of agile project management and collaborative knowledge curation.

In many ways, that could be described as the purpose of this book. The Smalltalk team revolutionised the everyday practices of programming, but they did this work before the development of machine learning algorithms that fuelled today’s AI boom. If the Smalltalk philosophy is to continue, we need a new pattern language that builds on the potential for new intelligent tools. Computer scientists have come to think of “software design patterns” as little more than a handbook of construction tricks, forgetting Beck and Cunningham’s more radical insights from 1987[11]. Alexander himself, when invited to speak to the programming community, observed that their development of design patterns appeared to have missed the moral purpose that was central to his work[12].

This book aims to recover moral purpose in programming through more appropriate patterns for interaction - the very point that has been missed in the software industry more widely, including those parts describing themselves as “artificial intelligence”. New combinations of visual notation, mixed-initiative interaction and machine learning can deliver new kinds of user experience, and these are the patterns we need to understand. I’ll be returning in more detail to these patterns of user experience in Chapter 14, after giving some examples of how programming advances today are integrating machine learning into the homes people make in code.



[1]  Thomas R.G. Green, "The cognitive dimension of viscosity: a sticky problem for HCI," in Proceedings of the IFIP TC13 Third International Conference on Human-Computer Interaction (1990), 79-86.

[2] Ben Shneiderman, “Direct Manipulation: A Step Beyond Programming Languages,” Computer 16, no. 8 (Aug. 1983): 57-69.

[3] Alan F. Blackwell, "Metacognitive Theories of Visual Programming: What do we think we are doing?" in Proceedings IEEE Symposium on Visual Languages (1996), 240-246.

[4] Wolfgang Mieder, "'A Picture is Worth a Thousand Words': From Advertising Slogan to American Proverb," Southern Folklore 47, no. 3 (1990): 207.

[5] Keith Stenning and Jon Oberlander, "A cognitive theory of graphical and linguistic reasoning: Logic and implementation," Cognitive Science 19, no. 1 (1995): 97-140.

[6] Mitchel Resnick, John Maloney, Andrés Monroy-Hernández, Natalie Rusk, Evelyn Eastmond, Karen Brennan, Amon Millner, Eric Rosenbaum, Jay Silver, Brian Silverman and Yasmin Kafai, "Scratch: programming for all," Communications of the ACM 52, no. 11 (November 2009): 60-67.

[7] Kent Beck, Extreme programming explained: embrace change. (Reading, MA: Addison Wesley, 2000).

[8] Ken Schwaber, Agile project management with Scrum. (Redmond, WA: Microsoft Press, 2004).

[9] Although beyond the scope of this book, philosopher Brian Cantwell Smith offers a deep critique of the ways of thinking that might be considered to underpin both the concept of any encyclopaedia (including Wikipedia), and the epistemological foundations of object-oriented programming. See Brian Cantwell Smith, On the origin of objects (Cambridge, MA: MIT Press, 1996), and also the development of these ideas in relation to AI: Brian Cantwell Smith, The promise of artificial intelligence: reckoning and judgment. (Cambridge, MA: MIT Press, 2019).

[10] Kent Beck and Ward Cunningham. Using Pattern Languages for Object-oriented Programs. Tektronix, Inc. Technical Report No. CR-87-43 (September 17, 1987), presented at OOPSLA-87 workshop on Specification and Design for Object-Oriented Programming. Available online at http://c2.com/doc/oopsla87.html/

[11] Alan Blackwell and Sally Fincher, "PUX: patterns of user experience," Interactions 17, no. 2 (2010): 27-31.

[12] Christopher Alexander’s keynote delivered to the 1996 OOPSLA conference observed: “I think that insofar as patterns have become useful tools in the design of software, it helps the task of programming in that way. It is a nice, neat format and that is fine. However, that is not all that pattern languages are supposed to do. The pattern language that we began creating in the 1970s had other essential features. First, it has a moral component. Second, it has the aim of creating coherence, morphological coherence in the things which are made with it. And third, it is generative: it allows people to create coherence, morally sound objects, and encourages and enables this process because of its emphasis on the coherence of the created whole.” Transcript of live recording made in San Jose, California, October of 1996, at The 1996 ACM Conference on Object-Oriented Programs, Systems, Languages and Applications (OOPSLA). Available online at http://www.patternlanguage.com/archive/ieee.html (retrieved 12 August 2022).
