BONUS: a circle-centric reading of software development through the 1990s, plus screech owls
Welcome to Oddly Influenced, a podcast about how people have applied ideas from *outside* software *to* software. Bonus episode: a circle-centric history of software engineering and the agile counterreaction, plus screech owls.
This series is about Michael P. Farrell’s 2001 book /Collaborative Circles: Friendship Dynamics and Creative Work/. If you follow this podcast in real time, you’ll have noticed that I’m getting nowhere with my promise for an episode on “what advice the book might have for a team who is discontented with their status quo and is looking to discover a better way of doing their work.”
To break – or at least avoid – this writer’s block, I’m going to tell the story of one particular kind of collaborative circle: the loosely connected network of teams that created what were then usually called “lightweight methodologies”. In many ways, I suspect that network’s history was similar to that of first-wave feminism in the United States, which also featured a number of geographically distributed groups who got together at periodic intervals at conferences. (In the case of lightweight methodologies, these were the academic OOPSLA conference (for Object-Oriented Programming Systems, Languages, and Applications) and the grass-roots PLoP conference (for Pattern Languages of Programs). In the case of the feminists, there were the Seneca Falls Woman’s Rights Convention and later conferences.) The most radical feminist circle tagged themselves the Ultras. In the case of software, I’d call the Extreme Programming teams in Detroit and London their equivalent.
However, more research – and someone with at least a little actual training in history – would be needed to tease out the similarities between the two movements.
For now, what I want to do is focus on some sort of generic pre-Agile team that felt trapped in an orthodoxy clearly past its prime but still clinging relentlessly to its outmoded vision. How did that feeling get sharpened into particular critiques? How did the reaction generate a different shared vision, and what were its key characteristics – what Imre Lakatos (episode 7) would have called its “research programme”? How was the shared vision instantiated in specific techniques?
I was not a member of any such team – I was at best a peripheral observer at the time. But I’ve heard a fair number of war stories from people who *were* members.
≤ music ≥
I’d date the “status quo” period in software as beginning in 1968, at the famous first NATO conference on Software Engineering. You could argue for an earlier date, such as the circa-1956 Project SAGE air-defense system, but the 1968 conference popularized – or solidified? – concepts into what starts to seem like a common vision, a field-wide status quo.
I was only nine in 1968. I was first paid for programming in 1975 but I had no direct contact with the status quo until I started my first post-college job in 1981. I bought in pretty much wholeheartedly from then until the mid-90s, so I think I can describe that status quo reasonably well.
Proper software development was conceptualized as the creation of a set of logically-related documents. The common documents were:
First, a *requirements document*. It is, roughly, about answering the question “why do you – the customer – want this thing?” So a requirements document for a scientific calculator might contain the sentence “it shall have a low-effort way to calculate a square root”. That requirement is derived from the calculations a typical user will want to do.
The *specification* is about “what”. A complete specification would let you predict the results of any interaction with the calculator. Well, maybe not interacting by dropping it from a second-story window – leave that to the mechanical engineers – but the specification would include things like what should happen if you try to take the square root of a negative number. And it should include a description of the required accuracy of the results.
During the creation of the specification, you might expect an argument about whether there should be a special “take the square root” button or whether the “take the nth root” button is low-effort enough to satisfy the requirement, even though it requires two more keypresses than a dedicated square-root button would. The point is that the specification is supposed to “satisfy” all the requirements, as determined by some person or committee with authority.
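(If it helps to see the flavor of the thing in modern dress – and this is purely my own hypothetical sketch, not anything from a real 1980s document – imagine a couple of specification clauses written down as checkable statements. The names and the tolerance are mine.)

    import math

    # Hypothetical sketch: two clauses of a calculator *specification*, written as
    # checkable statements. The requirement ("a low-effort way to calculate a
    # square root") says why; these clauses say what. Names and tolerance are mine.

    SQRT_TOLERANCE = 1e-10  # the "required accuracy of the results" (assumed value)

    def specified_square_root(display_value):
        """What the calculator should display after the square-root operation."""
        if display_value < 0:
            return "Error"  # the spec, not the code, decides negative-number behavior
        return math.sqrt(display_value)

    def check_specification():
        assert specified_square_root(-1) == "Error"
        assert abs(specified_square_root(2) - 1.4142135623730951) < SQRT_TOLERANCE

    check_specification()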
The “how” is handled by some combination of design documents and code. It was typical to have something called an “architectural design”. Nowadays, an architectural design for a Macintosh calculator app might be a single page that says “We’ll use SwiftUI”. One for the web might be “we’ll use Rails” or “We’ll use Phoenix”. Once you’ve said that, you know how the app will be structured: you’ll know that you’ll be writing things called “views” and things called “controllers” and you’ll know how views interact with controllers. Programming is plugging things into the predefined framework.
Back in those days, there were fewer comprehensive frameworks, so it was common to invent a new one for each project. The specification and architectural design are logically related in that you must be able to implement the specification using the architecture. If the architecture provides no way to route a button press to code that calculates a square root, it’s not good enough.
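(Here’s a minimal, entirely invented sketch of what “plugging things into the predefined framework” means: the framework fixes how a button press gets routed to a controller; the project supplies the pieces.)

    # Hypothetical sketch of "architecture as framework". Everything here is
    # invented for illustration; no real framework is being quoted.

    class CalculatorController:
        def square_root(self, display_value):
            return "Error" if display_value < 0 else display_value ** 0.5

    class ButtonView:
        """The framework's contribution: route a button press to a controller action."""
        def __init__(self, controller):
            self.routes = {"sqrt": controller.square_root}

        def press(self, button, display_value):
            # If there's no route from a button press to code, the architecture
            # "provides no way" to implement the specification -- not good enough.
            return self.routes[button](display_value)

    view = ButtonView(CalculatorController())
    assert view.press("sqrt", 9.0) == 3.0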
In 1981, a lot of systems were still written in assembly language. The jump from the section of the specification describing what the square root button should do over to the particular machine code that does it was often considered too big, so there was a *detailed design* in between that described the “how” at an intermediate level of detail. You may have heard of flowcharts or pseudocode. They were used for detailed designs. After the detailed design was finished, it would be hand-translated into assembly code.
Nowadays, that sort of detailed design is written in languages like C or Java or Elixir, and it’s the compiler that translates it into assembly language. But the idea of a detailed design lingered on as whatever document (if any) is more detailed than the specification but less detailed than the code.
You should be able to “trace” every sentence in the specification into the detailed design that implements it, and you should be able to justify every line of code by pointing to something in the specification that requires it.
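(Again as a hypothetical sketch – the clause numbers are invented – “tracing” might look like code annotated with the specification clauses that justify it.)

    # Hypothetical sketch of traceability: each chunk of code points back at the
    # specification clause that requires it. The SPEC-x.y.z identifiers are invented.

    def on_square_root_button(display_value):
        # SPEC-4.2.1: "Pressing [sqrt] replaces the display with its square root."
        if display_value < 0:
            # SPEC-4.2.3: "The square root of a negative number displays an error."
            return "Error"
        return display_value ** 0.5

    # And in the other direction: every clause in SPEC section 4.2 should be
    # traceable to some chunk of code like the one above.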
Because these relationships are logical, you *could* write the documents in the reverse order: start with the code, then write the design afterward, then the specification, then the requirements. You might think that’s stupid. If you’ve got the code, why do you need the rest? For the answer to that, see Parnas and Clements’ famous 1986 paper “A Rational Design Process: How and Why to Fake It”. You see, the code alone is assumed to be too hard to maintain. People changing it will need to understand the “why” and “what” that lies behind it, and they’ll need a better explanation of “how” than code alone can supply.
In practice, people preferred to write the documents in roughly the order I gave. First do the requirements document and get it approved. Then do the specification, and get *it* approved. And so on. It didn’t have to be in lockstep: you might very well be working on the architectural design before the specification is approved, reasoning that the specification won’t change so much as to break the architectural design. And you can start working on a detailed design early if you know its corresponding part of the specification is solid.
≤ short music ≥
As always with humans, that dry process got tangled up with pre-existing moralism and the creation of new morals for people to follow and believe. We seem especially prone to that in software; I don’t know why. Because the description of documents was a little dull, let me spice up the morality part of the common vision by starting with a digression.
The Alphabet of Ben Sira, written somewhere between the 8th and 10th centuries CE, contains some history that didn’t make it into the canonical version of Genesis.
“After God created Adam, who was alone, He said, ‘It is not good for man to be alone.’ He then created a woman for Adam, from the earth, as He had created Adam himself, and called her Lilith. Adam and Lilith immediately began to fight. She said, ‘I will not lie below,’ and he said, ‘I will not lie beneath you, but only on top. For you are fit only to be in the bottom position, while I am to be in the superior one.’”
Lilith then flies off and, in some interpretations, turns into a screech owl. See the show notes if you’d like some merchandise with a drawing of a pissed-off screech owl with a speech bubble that says “I will not lie below.”
OK, I mainly just wanted to share the story, but it gives me an excuse to describe a theory of how moralism infected software engineering. It’s based on the observation that people seem really fond of binaries like up vs. down, left vs. right, man vs. woman, and presence vs. absence. There seems to be a visceral desire for those categories to be distinct – no overlap, no intermediate cases. And people are generally not fond of expanding the number of categories to, say, three.
It’s usually the case that one side of a binary “dominates” the other. Sometimes that’s meant literally, as in men dominating women. Sometimes it’s a loose synonym for “better in some way”. You can say “presence” dominates “absence” because absence is created by taking something away from the presence. Sometimes, it seems to me, which side is dominant depends on the context: I think part of the appeal of the specification is that it’s all about the *outside*, not contaminated by the ickiness of the inner details – in much the same way that we want all the squishy bits of our body to stay inside, out of sight. In other cases, “inside” is dominant. It’s better to be inside in your cozy home than outside in the storm. It’s better to be an insider than an outsider in a social group.
Even in cases where you could describe a binary opposition in neutral terms, dominance has a way of creeping in. A classic example is that the English word “sinister” is derived from the Latin word for “left”. One possible etymology is that the left hand is the weaker, clumsier hand for most people. So, thanks to our infinite capacity to overgeneralize when it helps us pick sides, a single case where “rightness” is preferable to “leftness” becomes a connotation used all over the place.
I’m going to go out on a limb and say Adam’s outrage at Lilith was that kind of generalization. “Up” is the dominant half of the up vs. down binary. You fall down if you’re hurt. You lie down if you’re sick. Christian heaven is “up there.” Most relevantly to sex, think of wrestling. The one on top is definitely dominant. *Therefore*, the woman being on top in sexual intercourse would imply the woman is dominant, period, violating the rule that “man” dominates “woman” and “husband” dominates “wife” (and, for that matter, that “first created” dominates “created later”).
This rejection of the female-superior (huh, interesting word, “superior”) – this rejection of the female-superior position wasn’t just a quirk of one Jewish author of the Middle Ages. It was common wisdom for both the ancient Jews and the ancient Romans. I forget which culture thought woman-on-top caused diarrhea in the man: you see, once he got on the wrong side of the up/down dichotomy, he’d naturally get on the wrong side of health vs. sickness.
All this is terribly silly, of course. But we’re a silly species.
≤ short music ≥
So back to our documents. Both specification and code are documents. A non-silly species wouldn’t see a reason to place them in a hierarchy. But…
The specification is better than the code.
As I noted before, the specification describes the outside, avoids the inside.
It’s *abstract*, which is better than concrete or detailed. That’s because we were all, in those days, terribly envious of “the queen of the sciences”, mathematics, which is all about abstraction.
It’s natural to think of the specification as dominant, because the code has to “satisfy” it – even though you *could* reverse that and say the specification must, subserviently, *explain* or justify the code.
“What” is better than “how”, partly because the “how” necessarily contains extra details but also because “eternal” is better than “bound by time”. In the show notes, I link to a talk the influential computer scientist Edsger Dijkstra gave in 1985 where he’s spitting mad about people anthropomorphizing programs because that means we identify with them, and since we “see ourselves as existing in time”, that identification leads us to try to reason about programs by thinking about execution paths instead of “by manipulating one’s program text as a formal object in its own right […] in which time has disappeared from the picture.”
(As an aside, I think it’s quite a stretch to claim that anthropomorphizing programs is the actual reason we think they exist in time, rather than the fact that they start, run (oops, that’s anthropomorphic), I mean execute (kind of also anthropomorphic, derived from things like “executing a will”)… um… that programs do some things and, sometimes, halt. You’d think that would be enough to bring time into the picture, but it’s also human nature to link badnesses. If you don’t like anthropomorphism, and don’t like reasoning about programs by mentally executing them, it’s all too human to make a shaky inference that one badness caused the other. “It all hangs together, man.”)
Dijkstra was, I think, a Platonist: his “[we] *see ourselves* as existing in time” (my emphasis) is a tell; it implies that we don’t *really* exist in time – that it’s a mistake to see ourselves that way. We exist in something like the higher world that Plato called the World of Forms 2500 years ago. The World of Forms includes the Ideal Form of everything. In his wonderful podcast “The History of Philosophy Without Any Gaps”, Peter Adamson uses the example of giraffes. The vast number of giraffes in the world are all derived from the single Ideal Form of the Giraffe, in much the way that programs are supposed to be derived from specifications. To Plato, the Form of the Giraffe is more *real* than the actual giraffes in our world. True understanding of giraffes is had by grasping the Form of the Giraffe with your mind, making the Form of the Giraffe a more worthy object of study than actual giraffes, which are icky like code is icky like your intestines are icky.
It makes sense for Dijkstra to be a Platonist-in-practice. As Davis and Hersh say in their 1981 book, /The Mathematical Experience/, most mathematicians are Platonists, in that they believe – but might not admit, even to themselves – that the nouns mathematics talks about – functors, fields, quaternions – are truly real in someplace like the World of Forms. And Dijkstra thought of himself as *better* than just any old mathematician. Elsewhere, he wrote, quote “Programming is one of the most difficult branches of applied mathematics; the poorer mathematicians had better remain pure mathematicians.”
OK. Does all this mean I think some engineer at Hewlett-Packard, writing the specification for a calculator in 1985, thought he was capturing the true Platonic Form of the calculator? No. But I do think he likely thought of himself as doing work that was *better* than programming. His was a higher calling (there’s that “up is good” again) – a higher calling than that of slinging bits around. That’s an attitude that filtered down from thought leaders like Dijkstra and from our general envy of more successful fields like mathematics.
As an attitude, it didn’t sit well with some programmers. As Pete McBreen once said, “the Agile methodologists are methodologists who like to program”. That wasn’t universally true, but for many, the idea that code had lower status kind of grated. (Farrell notes that collaborative circles typically attract people whose enthusiasms or very selves are looked down upon by the status quo. Think of Tolkien’s Inklings, with their unfashionable fascination with epic poetry and Norse mythology. Think of the Ultras, who had a formative moment when one of them wasn’t allowed to speak at the World Congress of abolitionists but was grudgingly allowed to sit at the back of the room – at the literal margin – behind a gossamer screen.)
A typical move for people who are on the lower side of a binary opposition is to “invert the hierarchy”, essentially saying “you think we’re lesser, well, we think you’re lesser”. This predisposed our programmers to prioritize code and to try to find ways that code could be used to perform the functions of a specification. Hence, for example, the emphasis on “intention-revealing names” and Domain-Driven Design’s insistence on a “ubiquitous language” that captures, within the code, a model of the system being automated – something that would ordinarily be part of a specification. And I’d argue that rejecting the assumptions behind the dominance of the specification – that thinking about the outside should be separate from thinking about the inside, and that the outside view of, say, a class, method, or function stands somehow Platonically separate from the inside view – rejecting these made it easier to think of, and accept, test-driven design, which deliberately mixes up the inside and the outside. And discomfort with the, um, hegemony of the specification weakened the desire to jump first to abstraction, making it acceptable to think that a set of well-chosen examples (that is to say, tests) might work as well for human understanding as a more abstract API description.
I’m not saying, mind, that opposition to aspects of the specification and other documents *drove* the creation of techniques like TDD, only that the rebellious attitude toward the value judgments associated with the status quo made their creation *easier* – reduced the mental friction.
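(To make the inversion concrete with one more invented sketch: a couple of well-chosen examples, with intention-revealing names, doing some of the work an abstract specification used to do. The class, method, and test names are mine, not anyone’s canonical example.)

    import unittest

    # Hypothetical sketch: tests as well-chosen examples. All names are invented
    # for illustration.

    class Calculator:
        def __init__(self):
            self.display = 0.0

        def enter(self, value):
            self.display = value
            return self

        def press_square_root(self):
            # An intention-revealing name: the method says what the button is for.
            self.display = "Error" if self.display < 0 else self.display ** 0.5
            return self

    class SquareRootExamples(unittest.TestCase):
        def test_the_square_root_of_a_perfect_square(self):
            self.assertEqual(Calculator().enter(9).press_square_root().display, 3.0)

        def test_negative_numbers_show_an_error_rather_than_crashing(self):
            self.assertEqual(Calculator().enter(-1).press_square_root().display, "Error")

    if __name__ == "__main__":
        unittest.main()

Notice that the examples mix the outside (what the display should show) with the inside (how you’d poke at the object) – exactly the mixing the old vision frowned on.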
≤ music ≥
Now I want to look at a major consequence of the logical-document approach: a particular attitude toward change.
Suppose a project is nearly done, and you discover a single requirement is wrong. You have to correct the requirement, and in a way that doesn’t inadvertently break other requirements – by, say, leaving the corrected requirement contradicting an old one. Then you have to correct the parts of the specification that can be “traced” as downstream from the corrected requirement. Then there might be design documents to update, and that single upstream mistake might mean changes in many scattered chunks of code. That could all be very expensive.
In contrast, a mistake that’s *just* in the code is much cheaper. If it’s a mistake in the implementation of something the specification calls for, you fix it and move on. No need to change an upstream document.
Similarly, a mistake that’s corrected in the requirements document *before* any downstream document is created will be cheap to fix.
Put all this together and you get the notion that you should try *really hard* to get an upstream document right before making something that depends on it. That’s where you get the most bang for your error-prevention buck.
In practice, the work of reducing errors in the two upstream documents – the requirements and the specification – split off from the work of reducing errors downstream. We ended up with two different constellations of techniques. I speculate that this was a hangover from the early roots of software engineering in military contracting efforts like the SAGE air defense network. Military procurement was done by having the military write a requirements document and possibly also a specification, then asking contracting companies to bid on the work.
After the contract was signed, problems in the upstream documents were very much the military’s problem. The contractor could justifiably say the military should pay extra to cover the cost of the change. In contrast, errors in the design or code are solely the contractor’s problem: rework cuts into their profit.
Now, this doesn’t exactly make sense when everything – requirements through code – is part of the same organization with one balance sheet that includes everyone’s salaries, but such boundaries are awfully sticky. For example, why is dentistry a completely separate profession from medicine while dermatology is inside medicine? Teeth and skin are both part of the body. They’re both mostly on the outside. They both have implications for overall health. Tooth decay and gum disease can contribute to (or cause) premature birth, pneumonia, and endocarditis. So why are dentists different? And why do so many countries’ health plans provide less support for tooth health? Well, to quote Jerry Weinberg, “Things are the way they are because they got that way.”
So working on requirements and specifications was its own separate subfield. And, just as medicine is higher status than dentistry, that subfield was higher status than programming. The central tenet of the upstream subfield was that you have to *try hard*, you have to have good techniques, and you have to be *rigorous* (if not formal, in the sense of being mathematical).
Few people actually thought you could ever achieve the ideal of never revisiting a document once it had been reviewed and approved. But many thought you could get *closer* to the ideal, and that failing to try was something of a moral failing. (To take another peek into binary oppositions, we have “correct vs. incorrect”, “proactive vs. reactive”, and “prevention vs. cure”. My hunch is that the desire to be on the better side of those dichotomies helped head off questions like “is this really *working*? Are we spending more to prevent errors than it would cost to just fix them?”)
The result was a shared vision that change, in the context of a software project, is a bad thing, full stop, nothing *but* the consequence of error. That doesn’t mean errors and their changes can’t be an object of study, but when there’s a change to a requirements document after it’s been approved, what you want to learn from it is how to *prevent* the need for that type of change in the future, how to be more correct up front.
The proto-Agile people had two problems with this whole system of thought.
First, as programmers migrated into commercial software, be it software for sale (like Excel) or for internal use (like financial trading software), the idea that major changes would be avoided by Thinking Really Hard began to seem increasingly absurd. “Change as error” just didn’t match real experience. The world is always changing, and you can’t make the world stop while you perfect your requirements document.
This led to the attitude that the status quo had gone way too far in the direction of minimizing change. Rather than thinking change is bad, what would happen if we thought change is *inevitable*?
Second, the proto-Agilists agreed change is expensive – sometimes. But sometimes it’s not. Methodologists with their “cost of change curves” were dealing with averages. And, as the great statistician John Tukey put it in his classic and idiosyncratic /Exploratory Data Analysis/, if you want to understand a process, you can’t just look at the “central tendency”: you have to look at the outliers and ask what makes them special. Programmers encountered code that was, surprisingly, *not* hard to change. Code that seemed… practically *poised* to accommodate new requirements. What made that code special? Could we learn to make such code more common?
This led to the attitude that change isn’t *bad*, it isn’t just *inevitable*, it’s a positive *good*. The reasoning goes like this: what you have to learn from a change is how to make that kind of change more easily next time. The more unexpected changes the team has to cope with, the faster it and the software can be “tuned” to the typical changes the software owners request. That is the explicit goal of the techniques described in Kent Beck’s 1999 /Extreme Programming Explained: Embrace Change/. The fastest way to deal with, say, unexpected requirements is to make *all requirements* – other than the ones you’re working on right now – unexpected.
There was a final shift. I don’t remember if it was explicit or implicit in Beck’s book, but the next extension to the emerging shared vision was “If it hurts to do it, do it more often”. That, I think, marks the most radical difference between what came before Agile and what came with Agile.
Once Agilists had the common vision of “change is opportunity” and “if it hurts, do it more often”, they were free to invent a variety of techniques that have changed the world of software. Old fogies like me might grumble that modern software development has ignored a lot of the important parts of Agile, but I think it’s undeniable that the pre-Agile status quo of document-centric development has been disrupted, that Agile has made it *feasible* for teams to operate in a new way – in the same way that the French Impressionists made it feasible for painters to paint in a new way.
I think it’s highly likely that some of these techniques – things like continuous integration and frequent releases – would have happened without Agile, just because of the technological imperative of distribution via the internet. But that’s not how it happened. So the collaborative-circle-like rebellion against the document-centric status quo is historically meaningful.
≤ music ≥
What have I accomplished? I dunno. What I hope to have accomplished is to give you an example of what it’s *like* to be a member of a collaborative circle, and to give you a feel for the dynamics of rejecting the status quo.
If I can make it work, I’ll next do a history of the Context-Driven school of software testing. I got involved in it earlier in its development than I did with Agile. That history will be touchier, because it’s more contested. That is to say: the circle disintegrated in a way Farrell says is typical of collaborative circles that “go public” and make a big splash.
Speaking of “contested”: as I hope I’ve emphasized to the point of tedium, this episode is one person’s abstraction of what he saw, heard, and read. If you’d like to add to, or contradict, this history, I’d be interested in hearing from you. I have – I think – started to establish a tradition of interviewing people who say, gently, “Well, actually, Brian, you’re wrong about…”
But whether that’s you or not, thank you for listening.