E1: Boundary Objects

Boundary objects are an idea from the sociology of science about how people use ambiguous nouns – or things – to coordinate work among groups with different backgrounds and interests (like, say, programmers and product owners).

Welcome to oddly influenced, a podcast about how people have applied ideas from *outside* software *to* software. Episode 1: Using ambiguous nouns – ambiguous *things* – as a tool for coordinating work.

Since this is the first episode, let me briefly explain the idea of the podcast. Mostly I want to interview other people about how something they read triggered an Aha! moment that led them to change the way they did software. And how that worked out for them.

But while I’m figuring out the mechanics of podcasting, it’ll be just me – that is, Brian Marick – talking about my own influences.

This episode will be about the idea of “boundary objects”. It’s the beginning of a three-part series on how different social groups interact to do science, and what we might learn from that.

“Boundary object” is an idea that caught on in the sociology of science around the early ‘90s. I first read about it in a paper that rejoices in the catchy title of “Institutional Ecology, 'Translations', and Boundary Objects: Amateurs and Professionals in Berkeley's Museum of Vertebrate Zoology, 1907-1939”. It was written by Susan Leigh Star and James R. Griesemer.

Here’s the quick definition:
1. People collaborating on projects can be divided into groups (“social worlds”, in the jargon). Think of testers and programmers on a software project.
2. Those groups have different values, goals, ways of looking at their jobs. Yet they are collaborating to accomplish one single thing in the world.
3. One way to organize collaboration is to highlight certain words or things – those are the boundary objects, so-called because they lie at the boundaries between social worlds.
4. These boundary objects are a bit delicate because they have to accomplish two things at once.
4.1. When people from different social worlds use a boundary object with each other, they have to agree that they’re talking about the same thing, yet…
4.2. The different interpretations they give to that word have to be flexible enough that they don’t cause arguments, wasted time, and so on.

The rest of this episode gives examples and a few ideas for how to make boundary objects work.

I want to note that that paper wasn’t my only source. I’ve scavenged from several sources, so if any of what I say is untrue or stupid… blame me, not those authors.

So, the Museum of Vertebrate Zoology was founded in 1908. It was not to be a public museum; to this day, there are no public exhibits. It’s a research museum. To see what that means, let’s look at the context of the time.

1908 is only 49 years after Charles Darwin published /On the Origin of Species by Means of Natural Selection/, and maybe more like 35 years since Darwinian evolution had become widely accepted – at least among scientists. That opened up a whole new set of avenues of research. Suddenly, biology was about a different sort of thing – change – than it had ever been about before. Darwin convinced people that change happened, but now scientists wanted to know the details: what pushes a species to change, and how?

What pushes a species to change isn’t inside the species; it’s *around* the species. A species changes when its physical environment changes, or in response to changes in the species that it interacts with.

California at this time was a great place to study change, because its ecologies were visibly changing, and changing *fast*. It was a public issue, and for good reasons. Put it this way: today, only 2% of plants in California are actually native to California; the rest are imports. (I don’t know if that 2% is 2% of *species* in California, or 2% of *biomass*, or what. Whichever: if you wanted to watch the process of evolutionary change, California was clearly a good place to do it.)

And that’s a major reason the Museum was founded. And, spoiler alert, it worked out. For example, data from the Museum has shown that the Alpine chipmunk, a mountain species, has moved up as California has warmed. That is, the early Museum data showed its vertical range was between X and Y feet of elevation. And over the past century, both X and Y have increased: the chipmunk has been heading upslope as its environment changed. Moreover, scientists examining the Museum’s hundred-year-old stored specimens have identified the genetic changes that allow the chipmunk to survive in a lower density of oxygen.

So that’s neat: sometime not too long after 1908, some person caught a chipmunk, recorded a lot of information about it – like where the chipmunk was, on a topographical map. That person also wandered over the countryside, marking where chipmunks were. Later, someone used the topographical map to calculate the range of this group of chipmunks. Then *someone* filed the information and the corpse somewhere. Perhaps a century later, old data on chipmunks was compared to recent data and, because the chipmunks were seen to have moved higher, someone was inspired to pull a chipmunk corpse out of a drawer somewhere and scrape off some DNA.

Now here’s an important thing: the person who *caught* the chipmunk is very likely not the person who used the records of all the chipmunk sightings to calculate their range. Let’s call those people *the collector* and *the scientist*. They collaborated – successfully – even though they came from two different *social worlds*.

“Social worlds” is a term of art in sociology, one that I didn’t find a good definition of. But it goes something like this:

Humans are social animals who talk to each other. You and I perceive the world, but we interpret the photons that strike our eyeballs in the light of previous social interactions. It’s people around us who tell us what to value, how to value it: what’s interesting, what’s boring; what to pay attention to, and what to pretend doesn’t exist; what has *meaning* and what doesn’t.

Differences between social worlds are what make it possible for two entirely sincere people to interpret the same event in not just opposing but seemingly completely disconnected ways.

So how is it that the collector and scientist collaborated to produce something whose value lasted a century? The answer the paper gives is that they used *boundary objects* to help them coordinate.

The first boundary object to look at is the collected specimen itself. It *means* something different to the different people. To the scientist, the specimen is mainly data. Our authors quote a biologist as saying “Without a label, a specimen is just dead meat”. But to the collector, the specimen is *mainly* that dead meat, and it represents a bit of California’s natural heritage that’s being preserved for the benefit of later generations. It’s also the end product of a whole ritual or hobby: going camping, chatting up locals to find where animals are, sneaking up on those animals and getting them to stay still long enough to be collected. The data associated with animals isn’t *central* to collectors; it’s more like a chore they do to make the scientists happy.

How are they persuaded to do that chore? That was the challenge faced by Joseph Grinnell, the first director of the museum.

Had I – at, say, age 31 – been Grinnell, I would have attempted to educate collectors, to teach them to value those attributes of a “specimen” that *I* valued, to have them care most about what I cared most about. And, just like a zillion other technologists who want to “fix” users, I would have failed badly. I’m sure they all would have quit collecting for me.

Grinnell succeeded – even though he was 31 at the time – and the authors give two reasons:

First. He focused on what people should *do* – on how they should write things down – rather than on what they should care about. The Grinnell System for collecting and documenting specimens balanced ease of use and thoroughness so effectively that it’s still used today.

Second. How people worked with specimens was *negotiated* in the process of doing the work. It was *not* predefined. A workable compromise had to be *discovered*. (A bit of foreshadowing here: we’ll come back to that someday and discuss Schön’s /Educating the Reflective Practitioner/. The broader point is that humans can’t reliably reason about “doing” in the future tense; they can only learn by observing the results of *actually* *doing* something, preferably in a tight feedback loop.)

I think the paper leaves the definition of boundary objects a little vague, so let’s look at another example: the state of California.

The State of California seems like a pretty abstract thing. But an important fact behind boundary objects is that humans readily “thingify” abstract ideas and then treat them as if they were real objects in the world.

So, what did “California” mean to different people involved in the museum?

The *collectors* were motivated by preserving the record of specifically CALIFORNIAN wildlife. Which makes no sense. Wildlife doesn’t respect political boundaries, so why should the science? Because the collectors cared about California. It was their home, and they wanted to care for it. Oregonians can take care of Oregon.

Turning California into a boundary object also motivated the *trustees* of the University of *California*. For one thing, public universities are supposed to serve the interests of the states that fund them. Being able to point to a Museum that’s all about *California* biology gained the University points in the State legislature. Moreover, California elites at the time were obsessed with showing they were just as good – better, even! – than the snooty east coast elites. Saying that Harvard has old-fashioned museums while Berkeley has a cutting edge museum no doubt also helped make the legislature happy, and happy legislatures produce happy trustees.

In a very real way, the museum itself was a boundary object. It was a token in a status war with the East Coast. This actually annoyed Annie Montague Alexander, who was the rich person who paid for the museum and was, in effect, its Chief Operating Officer in its early years. In a letter to Grinnell, she fumed that all the trustees seemed to care about was “hey, look, we’ve got this well-funded museum that’s better than East Coast museums”, and not at all about what the Museum was actually *doing* *for* *science*.

Grinnell’s response was basically to ask Alexander not to insist that the trustees attach the same meaning to the Museum as he and she did. To paraphrase: “They don’t need to be convinced to value what we value; what matters is that what they value is *compatible* with what we value.”

That’s the point: Boundary objects harness humanity’s obsession with “thingness” to create focuses of attention that people coordinate around, even though they disagree somewhat about what those things *are* and what they *mean*.

So. Now let’s look at applying the idea of boundary objects to software.

When I first read the paper, my mind jumped immediately to acceptance tests. Especially in the early days of Agile, acceptance tests were the objects that programmers, testers, and product owners talked about as a means of coordination.

Product owners want acceptance tests to be clear descriptions of what they want, because programmers do a lot better at understanding the general requirement when they’re given specific examples.

Testers want acceptance tests because they can use variations of them to demonstrate that neither the programmers nor the product owners are really clear in their minds about what a new feature should do.

Programmers want acceptance tests because the product owners are infuriatingly vague about what they want and, when they’re presented with a working feature, keep saying “that’s not really what I meant”. Acceptance tests provide a nice binary pass/fail metric that we can use to say “See! We did what you said.” Now, more enlightened teams know that it’s expected and fine when a product owner’s response to a finished feature is “that’s what I said, but it turns out not to be what I meant”. It’s certainly better than just accepting the wrong feature. But acceptance tests are useful when they make such conversations less frequent.

Here’s a relevant story:

Once I was consulting for a shop that used Ward Cunningham’s FIT testing framework. In FIT, tests are written as HTML tables. Tables are a nice format for lots of tests. They’re *scannable* – you can quickly review a table to see how one test differs from its neighbors. Because of that, it’s harder to leave out special cases.

Here I’ve edited out a rant about how horribly underused tabular formats are in unit tests; it’s a pet peeve of mine. You’re welcome.
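
But, to give a taste of what I mean, here’s roughly what a tabular unit test can look like in today’s JUnit. This is a made-up example – an invented discount rule, not one of that shop’s tests:

```java
// A sketch only: the discount rule and all the names are invented for illustration.
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.CsvSource;

import static org.junit.jupiter.api.Assertions.assertEquals;

class DiscountTests {

    // Made-up rule, included so the example is self-contained:
    // 5% off orders of $100 or more, plus another 5% for five-plus loyalty years.
    static double discount(double orderTotal, int loyaltyYears) {
        double rate = 0.0;
        if (orderTotal >= 100.00) rate += 0.05;
        if (loyaltyYears >= 5)    rate += 0.05;
        return orderTotal * rate;
    }

    @ParameterizedTest
    @CsvSource({
            // order total, loyalty years, expected discount
            "  20.00,       0,              0.00",
            "  20.00,       5,              1.00",
            " 100.00,       0,              5.00",
            " 100.00,       5,             10.00",
    })
    void discount_for(double orderTotal, int loyaltyYears, double expectedDiscount) {
        assertEquals(expectedDiscount, discount(orderTotal, loyaltyYears), 0.001);
    }
}
```

Run your eye down the rows and it’s obvious which combinations are covered – and which aren’t.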

In this particular shop, tests were written by independent testers in conjunction with the product owner; he’d talk with at least one tester and at least one programmer to lay out the main cases. Then the testers would write those ideas down in HTML tables, but also add other cases – often finding odd combinations of inputs where the product owner would have to decide what a sensible output would look like. Then the programmers would change the product to match the tests.

That all seemed to be working well.

But one time I was walking out of the team space at the end of the day when I overheard a programmer say “Now I have to rewrite the FIT tests in JUnit”. That was weird enough that it brought me up short: I stopped to ask him what he meant.

Here’s the story:

The programmers were used to programming test-first. They wrote unit tests in Java, ran them with the JUnit framework, debugged them when they failed unexpectedly – and did all of that within their development environment, JetBrains’ IntelliJ IDEA. They had a nice tight feedback loop.

The acceptance tests, though, were written in HTML, not Java. Working with them meant stepping outside that familiar feedback loop. That was awkward enough that this person took the HTML tests, translated them into Java, ran them under JUnit, got the code working, then *additionally* did the work of making the equivalent HTML tests pass under the FIT framework. That extra work added no value.

So: No no no, was my reaction. Ahh! Duplicate work. I suggested that we take his JUnit tests and edit them to make them more human-readable. So, for example, we didn’t use Java’s stupid convention of smashing words together, separatedOnlyByCapitalLetters, to make human-UNreadable names. We instead separated_words_with_underscores. Easier to read for a product owner (anyone, really). And we violated coding conventions to make the code look more tabular. Then we called the product owner over, explained what we were doing, showed him the tests, told him to ignore commas and semicolons and funny characters like curly braces, and asked whether the tests made sense to him. They did.
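
To give a concrete flavor – the shipping rule and all the names here are invented, not that shop’s actual product – the style we ended up with looked roughly like this:

```java
// A sketch only: a made-up shipping rule, written the way we formatted those tests.
import org.junit.Test;

import static org.junit.Assert.assertEquals;

public class ShippingChargeTest {

    @Test
    public void orders_of_fifty_dollars_or_more_ship_free_within_the_country() {
        //            order total   destination        expected charge
        expect_charge(75.00,        "domestic",         0.00);
        expect_charge(49.99,        "domestic",         4.95);
        expect_charge(75.00,        "international",   12.50);
    }

    // Invented rule, so the example stands on its own: a flat rate abroad,
    // free domestic shipping at $50 and up, $4.95 below that.
    private static double charge(double orderTotal, String destination) {
        if (destination.equals("international")) return 12.50;
        return orderTotal >= 50.00 ? 0.00 : 4.95;
    }

    private void expect_charge(double orderTotal, String destination, double expectedCharge) {
        assertEquals(expectedCharge, charge(orderTotal, destination), 0.001);
    }
}
```

A product owner can read the test name and the three rows and tell you whether that’s what he meant, without caring that it’s Java.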

It was not long after that that the majority of tests were written (by the testers) in a sort of pidgin Java. The programmers would take those tests, fix the syntax errors, and code away just as they liked.

I think that’s a good example of how people can coordinate effectively, using boundary objects. We called them acceptance tests, but some might say they’re *really* unit tests. To which I reply: NO. We will not have this conversation. Those things over there mean different things to different people, and That’s Fine. We don’t need a single universal definition. What matters is this team, getting the work done, together.

Now I want to be a touch heretical. Current mainstream Agile assumes that development teams need to… must… are *morally* obliged to… align themselves with business value, with the needs of the corporation, with… people in different social worlds.

I think that’s a mistake. It’s as if Grinnell had insisted that the collectors buy into his particular description of his particular research program. I think that wouldn’t have worked, and I think the “we must all focus on business value” style of Agile won’t work either, not in any long term, not as well as accepting that different values are valid.

It’s just inevitable that different groups of people live in different social worlds, with different values. Programmers care about code that’s easy to work with. The business doesn’t. The current style leaves programmers with two bad choices:

1. Convince product owners that catering to programmer wishes is just Good Business. And yeah, maybe it is. But still, doing that is roughly as pointless as Grinnell trying to persuade collectors that they really should care more about bookkeeping than about animals, and not care about having fun in the outdoors. It’s just… not… going to stick in the brains of most people from the business world, not when the pressure’s on.

2. Basically, abandon your interest in a good codebase whenever the business feels Really Strongly About Something, like a feature that has to be added Now Now Now. I think that leads to a crappy codebase, sooner or later.

Early Agile was characterized by a push to establish the development team’s preferences as *relevant*. I think early Agile was very much about a trade: let us (the development team) develop as we like. In return, we promise to make you happier than you’ve been before. It was about accepting other people’s social worlds as… respectable (for them to inhabit) but not *definitive* (that is: it’s OK for us to care about different things). It did not require the development team to *inhabit* a different social world, and I doubt whether the current tendency to do so is a good idea.

One of the things that makes boundary objects catchy is that they’re thing-like. Even something like “California” is thing-like. As creatures with opposable thumbs, a big chunk of our brains is tuned to working with things: picking them up, turning them over, shaking them to see what breaks. So it’s appealing to theorize the process of work around the metaphor of boundary objects: it’s just as seductive as an abstract superclass in Java or a Haskell typeclass. One concept to rule them all and in abstraction bind them.

However, as an analytical category, boundary objects are a bit too passive to explain everything we want explained. For example, the shared meaning of a boundary object is arrived at through doing the work. Which means via talking – but the theory of boundary objects doesn’t say anything about how to make that talking work well. And its story about incentives feels a bit weak. There must be more to say about why the collectors decided to put up with the Grinnell System for collecting.

We’ll get to topics like those in upcoming episodes. Peter Galison’s idea of “trading zones” is all about the talking part. Joan Fujimura’s “packages” are about persuasion.

But, for the moment, that’s all. Thank you for listening.
