Theories of What? or: Richard Rorty Weighs in on TDD ("Packages", Part 3)

Why did so many biologists shrug and accept the proto-oncogene theory of cancer, while most programmers rejected TDD – and rather fiercely? Using an idea from Richard Rorty, I suggest that (part of!) the reason is because two different kinds of theory were at play. Part three of a series.

Welcome to oddly influenced, a podcast about how people have applied ideas from *outside* software *to* software. Episode 4: Characteristics of “sticky” theories, or: what went wrong with TDD anyway?

This is the third of four episodes on Joan Fujimura’s idea of “packages” that spread scientific theories and technology together.

[music]

Let’s start by looking at what’s desirable in a theory that can spread rapidly, using the proto-oncogene theory for examples.

Imagine you’re a developmental biologist wondering about how a just-fertilized egg cell eventually divides into liver cells that are not much like skin cells – even though liver cells and skin cells contain exactly the same DNA.

You’ve been studying this problem your entire career. Then you’re presented with the combined package of recombinant DNA and the proto-oncogene theory. What do you do? Most likely, the theory is compatible with the theories you’ve been using as you keep pushing the boundary of science outward in your chosen direction. So why wouldn’t you use the associated technology if it helped you do your work? And why wouldn’t you tacitly accept the proto-oncogene theory as true, at least until you run into a problem that causes you to doubt it? Because it’s an *addition* to your suite of theories, it seems perfectly sensible to add it as another default.

This is not quite the high school picture of science. What should have happened, according to my high school physics teacher, is that the proto-oncogene theory was proposed, and it battled it out with competing theories. Only *after* that would it be broadly accepted. Fujimura’s claim, if I understand it correctly, is that the proto-oncogene theory got widely accepted before that battle was finished, just because it piggybacked on a new and useful tool.

By the way, my high school physics teacher believed that flying saucers were real, and staffed by demons from hell. Ah, the 1970’s. You really had to be there.

Anyway.

The point here *isn’t* that it doesn’t matter whether a theory is true. The point is that other things matter too.

Something that also matters is that core concepts in the theory are flexible, capable of different interpretations. In the terminology of episode 1, they’re *boundary objects*. Consider the word “gene”. If you’re a scientist who studies proteins and how they do what they do, you’re content to think of a gene as a stretch of DNA that creates (or “expresses”) a single protein. But some cells never express some of their genes. And, generally, cells carefully control whether they express any particular gene at any particular moment.

Scientists other than these study *how* the cell regulates the expression of genes. Of course, there are a variety of ways – remember, your cells are a gross kludge.

One way is that a gene is regulated by a stretch of DNA that’s far away from it - perhaps even on a different chromosome. The gene only expresses itself to create a protein when some biological process brings that distant DNA up close to the gene, so that the two strands of DNA can bond in a way that turns the gene on. As far as I can tell, at least some developmental biologists don’t think about genes as being contiguous stretches of DNA; instead, they think of separate regulatory and expressive parts.

These two different views of what the word “gene” means are classic boundary object work.

And “gene” isn’t just a boundary object to scientists; it’s also a boundary object for other people. Fujimura quotes someone looking for Congressional funding as saying quote “When I point my finger at a Congressman, I say, ‘Mr. So-and-So, you and I both have genes in us, which we believe are the genes that are responsible for causing cancer.’ It gets their attention […] If I tried to explain molecular genetics, they’d fall asleep on me.” This is another classic use of boundary objects, focusing on genes as objects of research that deserve funding. It’s very reminiscent of how episode 1’s Alexander and Grinnell dealt with the California state legislature and with the University of California board of trustees. It’s how software teams *cannot avoid* dealing with corporate executives when talking about software and its features and its implementation.

Here’s the key point: to be easily adopted, a new theory should be something that *fits alongside* existing theories. It shouldn’t disrupt (too much) existing beliefs, and it especially shouldn’t disrupt (too much) existing careers.

TDD had limited success at colonizing programmers the way the proto-oncogene theory colonized biologists, congresspeople, and the popular press. I want to explain why. And I want to do that by looking at TDD as a theory.

As a theory of “how to do your day-to-day work”, TDD had to *overturn* an existing theory (that programmers *should* test, even if they often didn’t) and replace it with a new theory: that programmers should design by implementing successive specific examples in a tight feedback loop.
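
For the show notes, here’s a minimal sketch of that loop in Python. The `shipping_cost` example and its rules are invented purely for illustration; TDD doesn’t care about the domain, only about the rhythm.

```python
import unittest

# Step 1: write a test that pins down one specific example. At first it
# fails, because shipping_cost doesn't exist yet (or does the wrong thing).
class ShippingCostTest(unittest.TestCase):
    def test_flat_rate_under_free_shipping_threshold(self):
        self.assertEqual(shipping_cost(order_total=30), 5)

    def test_free_shipping_over_threshold(self):
        self.assertEqual(shipping_cost(order_total=120), 0)

# Step 2: write just enough code to make the failing test pass.
# Step 3: tidy up, then loop back to step 1 with the next specific example.
def shipping_cost(order_total):
    return 0 if order_total >= 100 else 5

if __name__ == "__main__":
    unittest.main()
```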

Because it was replacing rather than supplementing, TDD was bound to meet more resistance. But I think there’s something else going on: TDD is part of a larger theory, and that larger theory was attempting to replace another large theory, one about more than just the details of programming practice. I’m going to talk about that by using the philosopher Richard Rorty’s horribly named idea of “final vocabulary”.

On page 73 of his book /Contingency, Irony, and Solidarity/, Rorty writes:

quote
“All human beings carry about a set of words which they employ to justify their actions, their beliefs, and their lives. These are the words in which we formulate praise of our friends and contempt for our enemies, our long-term projects, our deepest self-doubts and our highest hopes... A small part of a final vocabulary is made up of thin, flexible, and ubiquitous terms such as 'true', 'good', 'right', and 'beautiful'. The larger part contains thicker, more rigid, and more parochial terms, for example, 'Christ', 'England', ... 'professional standards', ... 'progressive', 'rigorous', 'creative'. The more parochial terms do most of the work.”

My claim is that TDD reflects a larger theory that violates many people’s final vocabulary, their theory of *themselves* and of their identity. That’s because it’s inextricably embedded in that style of work called “Agile”. It does not stand alone.

To illustrate, here are some statements:

A tweet by Woody Zuill says: quote “It is in the doing of the work that we discover the work that we must do. Doing exposes reality.”

Kent Beck’s /Smalltalk Best Practice Patterns/ has this on page 3:

quote
“Did you know that your program talks to you? […] If you’re programming along, doing nicely, and all of a sudden your program gets balky, makes things hard for you, it’s talking. It’s telling you there is something important missing.”

Ward Cunningham, a programmer highly respected among Agile programmers, has referred to “molding a program” as a direct analogy of the way a potter molds clay.

And quote “listen to the test” has become a cliche that you can easily find in any number of articles.

Now. I don’t think those of us who are fans of TDD appreciate how *weird* these quotes sound to many people. For the rest of the episode, I’m going to speculate about why, about what differences there are in their final vocabularies, finishing with – alas – not that much, ah, actionable advice.

The first distinction I want to call attention to is that these Agilists are all describing themselves as reacting; that is, as being *reactive*. In US culture, at least, being reactive is bad. Being *proactive* is good. The query “how to be proactive” gets 299 million Google hits. I was initially shocked that “how to be reactive” got 78 million *more* results – until I noticed the titles of the top hits:

12 Techniques For Being Less Reactive And More Intentional
10 Tips to Change From Reactive to Proactive in Situations
How To Stop Being So Emotionally Reactive
What is Emotional Reactivity and How to End the Cycle

Hmm. Thanks for deciding what I really wanted to know, Google.

So I want to claim that Agile takes a reactive stance toward code and design and, well, lots of things. And, to a lot of people, that signals “bad”. People whose final vocabulary includes “proactive” don’t want to adopt techniques that make them take a reactive stance (*especially* toward code – more about that later.)

First, though, I want to head off an objection.

It should seem obvious that no one is purely reactive or proactive, introverted or extroverted, or a pure anything. We’re all on a spectrum. Except… we don’t act like we are. We say we’re introverted, not that “in most situations, my behavior is more introverted than the average person’s behavior would be in an identical situation”. When it comes to words in our final vocabulary, words wrapped up in our self-identity, it’s all or nothing.

That seems especially so because it’s extremely common for one word in such binaries to be favored over the other, as “proactive” is over “reactive”. And when people who identify with the favored word sense that other people are trying to elevate the status of the disfavored word, they tend to react as if to a personal attack.

So when I say something that sounds to me like “In many programming situations, it would be better on average if more people moved to a more reactive stance”, a lot of people will hear me saying “You are a control freak; loosen up, maaaan.” and get mad. If you find that happening to you during this episode, I apologize in advance. I do happen to prefer my identity as a reactive person, but I’m not *intentionally* insulting you.

Moreover, I’m going to be listing five pairs of words, and claiming that there are a lot of programmers clustered around final vocabularies that contain mostly or exclusively the first word in the pairs; whereas for people like me, it’s mostly or exclusively the second. That does not mean I think people in the first cluster are bad people or bad programmers or should change their identities or *anything*. It is demonstrably the case that great software has been written by people in both clusters. We’re not talking here about clearly inappropriate mismatches, like a disorganized accountant, a trial lawyer who shrinks from conflict, or an indecisive emergency room doctor.

Okay?

By the way, it hasn’t escaped me that I’ve taken a multidimensional space and simplified it down to two clusters that I’m casting as being in opposition. Those darned binary oppositions; they’ll getcha every time. The real clustering is probably more nuanced.

The next opposition to discuss is “design” vs. “code”. To some people, the design is more *real* than the code. I used to be one of them. I still have a sheet of paper from 1982 on which I wrote that the specification is real; the code is just a consequence. In the 1980s, my coding style was to think about the problem and think think think until I couldn’t stand thinking any more, then force myself to think more until finally (I imagined) I could sit down and the design would just spill out of my brain into my fingers, onto the keyboard and into the computer, represented there as – less important – code.

Let’s just say my final vocabulary has changed since then.

I think this notion of “a more real reality” dates back at least to Plato’s theory of Ideal Forms. To Plato, there are non-physical Forms that are the essences of all things, from Giraffes to Virtue to the Good. What we see around us are just imperfect imitations created out of less good stuff like matter.

This is perhaps the most successful meme in history and has persisted to modern days. I believe it was in Davis and Hersh’s /The Mathematical Experience/ that I read that, while most practicing mathematicians *profess* that mathematics is a symbol manipulation game that follows rules of logic, almost all of them are really Platonists who believe in their guts that mathematical abstractions are *real*. A rectangle, or a functor, or a monad is a clear-edged *thing* that’s separate from both the external world of matter and the internal world of consciousness.

And I think that, absolutely, a huge percentage of programmers implicitly or explicitly believe the same.

To illustrate, suppose I suggested that there could exist a chunk of really good code that simply doesn’t have *a* design that floats separate from it: that there’s Juan’s understanding of the code – call it his understanding of its design if you like – which is different from Gilbert’s understanding and Amita’s understanding. It’s probably really good if there’s a lot of practical overlap between those understandings, but does it matter if they’re not *identical*? Does it matter that there isn’t something independent of these individual brains that just *is* the design?

If that thought bugs you, you might be a Platonist.

If it *doesn’t* bug you, if you can see “the design” as a boundary object that supports collaboration while not forcing complete agreement, you might like TDD.

Next, I’ll talk about two related pairs, what I’m going to call Whole vs. Sufficient, and Finished vs. Ready.

Whole is that property where, as Einstein didn’t actually put it, a design is “as simple as possible but no simpler”. (What Einstein actually said was quote “make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.”). Wholeness is the idea that the design contains nothing it doesn’t need, but it contains everything that it *does* need. This is akin to Thoreau’s “Our life is frittered away by detail... simplify, simplify.” It’s very much an aesthetic judgment – and a moral one: the word “wholesome” is derived from “whole”, which in Old English had connotations of “undamaged”, “unhurt”, and “healthy”.

Something that’s Whole is necessarily Finished: there is nothing left to add. A later addition – or, especially, a change – feels like a failure. You should have thought harder, or gathered more requirements, or found that paper that described the API you needed. You should not have left unfinished work for someone else.

The contrasting final vocabulary of Sufficient and Ready emphasizes the certainty of future change. Wholeness in the aesthetic sense is a quote “nice to have” rather than a *requirement*. What matters is not the elegance of the design but rather code that is written to support elegant, simple, graceful changes when change becomes necessary. It’s about verbs – human actions – rather than nouns – things.

Indulge me in one more juxtaposition: Thought vs. Conversation.

Agile can seem like King Crimson’s song “Elephant Talk”. I figure I’ll get in trouble if I play a snippet, so you’ll have to suffer my rendition of verse 3:

Talk talk talk, it's only talk
Comments, cliches, commentary, controversy
Chatter, chit-chat, chit-chat, chit-chat
Conversation, contradiction, criticism
It's only talk

Classic Agile has pairs talking through programming issues. It has all those pairs in a team room where everyone can hear everyone else so that spontaneous conversations erupt. It has (as Kent Beck put it) the *code* in conversation with the people. Talk, Talk, Talk, it’s only talk.

This is in contrast with programmers who mostly want to go off and think. Rich Hickey’s talk, “Hammock-Driven Development”, is a now-classic exposition of that bit of final vocabulary. I link to it in the show notes and also link to a good summary.

I could list more binary oppositions, but I’ll leave that to you.

So what are we to do about the fact that people are touchy about being told to do things, or think thoughts, that are violations of their final vocabulary? While it’s possible to change a final vocabulary – I’m evidence of that – it’s not easy and I don’t think there’s any practical or legal way for us to do it reliably to other people.

So one answer is just to give up. Your theory or tool or package appeals to some people more, for reasons fundamental to their identity. Whether your thing’s Ideal Form up there in Plato’s Ideal Form Land is really, objectively, essentially the Ideal theory for the Platonically Ideal programmer is a kind of pointless thing to talk about, because we don’t live there. Concentrate on spreading your thing to people whose final vocabularies are a better fit. I hope my pushing the idea of “final vocabulary” into your brain will help with that.

Another way is to concentrate on words you have in common. As an example, consider the Extreme Programming slogan “You Aren’t Going to Need It”, YAGNI for short. It’s invoked when you’re writing some body of code that is to serve a client (a person or some other code). YAGNI says to add nothing to the interface that isn’t *explicitly* called for by the client *right now*. For example, if the client currently will never delete records, don’t implement a `delete` function. Even if you’re *sure* that next month the client will want to delete.
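
For the show notes, here’s a minimal sketch of what that looks like in code. `RecordStore` and its operations are invented for illustration; what matters is what’s deliberately left out.

```python
class RecordStore:
    """Only the operations the current client actually calls."""

    def __init__(self):
        self._records = {}

    def save(self, record_id, record):
        # The client stores records today, so save() exists.
        self._records[record_id] = record

    def find(self, record_id):
        # The client looks records up today, so find() exists.
        return self._records.get(record_id)

    # Deliberately no delete(), no bulk export, no caching layer:
    # nothing gets added until a real client needs it *right now*.
```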

YAGNI triggers various words in a final vocabulary, especially Wholeness and Finished.

Since I don’t believe in Wholeness and prefer to think of no job as ever really Finished, that doesn’t bother me. But arguing that other people shouldn’t be bothered by what doesn’t bother me has not ever been a winning strategy for me. Instead, I have in the past finessed the issue by appealing to words that are in pretty much everyone’s final vocabulary: Efficiency and Pragmatism.

If you ask people what bothers them about YAGNI, they won’t say “because it violates my deep-seated belief in wholeness”. They’ll say something like “it’s wrong (that is, inefficient) to pay people *once* to write code you *know* will be insufficient, discover it’s insufficient, then pay them again to rip out the old code and write the code they should have known to write in the first place.”

My counter to that is to frame YAGNI as a bet. If you add some code that’s not yet required because quote “we’ll need it someday” unquote, you’re betting that, first, you actually *will* need it someday, and that, second, you’ll have anticipated the right details of the need and its solution – anticipated them at a point when, as Cem Kaner used to say, you know less about the project and its needs than you ever will again. I, on the other side of that bet, am willing to lay down money that the need will never arise, or – if it does – that the code you wrote won’t be right and you’ll have to rip it out anyway. On average, I’ll claim, I’ll win more money from my side of the bet.
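
For the show notes, here’s a toy expected-cost version of that bet in Python. Every number in it is an invented assumption, not data; it only shows the shape of the argument.

```python
# Toy expected-cost comparison of "build it now" vs. YAGNI.
# All of these numbers are made-up assumptions for illustration.
cost_to_build_now = 5      # effort to write the speculative code today
cost_to_build_later = 6    # effort to write it later, knowing the real need
cost_of_wrong_guess = 2    # effort to rip out a guess that turned out wrong
p_need = 0.4               # chance the anticipated need ever arises
p_right = 0.5              # chance today's guess matches tomorrow's need

# Build now: you always pay today; if the need arises but you guessed wrong,
# you also pay to rip out the old code and write the right thing.
build_now = (cost_to_build_now
             + p_need * (1 - p_right) * (cost_of_wrong_guess + cost_to_build_later))

# YAGNI: you pay nothing today, and only build when (and if) the need is real.
yagni = p_need * cost_to_build_later

print(f"build now: {build_now:.1f}  vs  YAGNI: {yagni:.1f}")
```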

This pushes against a belief in Proactive Thought, but a Pragmatic Person has to admit it’s at least possible that, in this imperfect world, there are situations in which *I* would be making the right bet. Therefore, it’s practical to learn how to take both sides of the bet: to be skilled at both YAGNI and not-YAGNI, and to learn how to better predict which applies in a given situation.

Putting it that way seems to have been successful enough, often enough, for me. I’ve used it to give people the space and freedom to try YAGNI out and see how it feels.

Your mileage may vary.

Dragging in Rorty’s idea of “final vocabulary” has taken us away from Fujimura and packages. I hope the digression has nevertheless been useful to you, or sparked some ideas. If all goes well, I’ll next wrap up packages by looking at the negative effects of their infectiousness *and* give you the opportunity to listen to someone other than me. The Oddly Influenced podcast’s first interview episode!

As always, thank you for listening. I really appreciate it.
