E28: /Governing the Commons/, part 4: creating a successful commons
Welcome to oddly influenced, a podcast about how people have applied ideas from *outside* software *to* software. Episode 28: /Governing the Commons/, part 4: creating a successful commons
In the last episode, I teased you with a description of the canal system at Gal Oya. In the 30 years after its creation it had become a “hydrological nightmare”: broken by consistent under-maintenance and uncontrolled use. Its prospects looked bleak because of corruption and incompetent administration by outsiders, ethnic and class tensions, and occasional violence. Nevertheless, it did recover into what Ostrom described as a “fragile” commons that was seriously better than before. For all I know, the commons has since broken down – sometimes they do – but real improvement really happened.
The first half of this episode is about how it happened. Then I provide some tentative thoughts about how a similar thing might happen with your team’s code base. I emphasize that these thoughts are tentative because – remember – commons governance is not “one size fits all”. Not only is a software codebase different from a canal system or a forest, but each software codebase is probably importantly different from the others.
Nevertheless, I feel I ought to say *something*.
≤ music ≥
To start, let me describe the context for Gal Oya. The story only really applies to one of the three regions within the larger complex, but I’ll just say “Gal Oya” for simplicity. Even that region is much larger than almost any software codebase: hundreds of miles of canals serving (in theory) some 19,000 farmers.
In response to Gal Oya’s continuing failure, a four-year project was created to make improvements. Those improvements were both infrastructural (fixing broken canals and machinery) and institutional.
The institutional plan was textbook /Seeing Like a State/: apply regulations to make farmers do the right thing. Those regulations would be enforced by the government’s Irrigation Department – that is, by the people who the farmers already despised.
That would have made for an unpromising start, except that the people implementing the plan didn’t follow it. I’ll describe that in a bit, but first let’s look at the literal landscape.
There were three levels of canals, which I am going to name Little, Middle, and Big.
* Little canals, when working, provided water to 12-15 farmers.
* Middle canals fed the little canals and would serve 100-300 farmers. Using my powers of advanced mathematics, I calculate that, at the extremes, each middle canal fed 7 to 25 little canals. Most likely the actual numbers were somewhere in the middle.
* There were four Big canals that fed the middle canals. Numbers weren’t given but I estimate 20-something middle canals per big canal.
The first step in the project (as implemented, rather than as planned) was focused on the little canals and on breaking down distrust – both farmer distrust of the government and farmer distrust of each other. (Remember, even though we’re talking about 12-15 farmers who worked right next to each other, they were actively competing for the same scarce water.)
I noted in the last episode that ordinary human social interactions – that is, what I instinctively see as wasting time – tend to build trust. It’s easier to think of people as enemies when you know nothing about them and are discouraged from thinking of yourself as having anything in common with them. Remember, the farmers were of different ethnic and language groups and had come from different parts of the country.
However, more was done than just organizing mixers and meets-and-greets. The project hired a bunch of outsiders, designated Institutional Organizers (or IOs). These people were largely college graduates – people that Sri Lanka at the time had too many of, but who were of relatively high status in the culture. (Something like the respect my traditionalist German parents had toward schoolteachers.) Notably, they picked people who’d come from agricultural backgrounds, especially large planned settlements like Gal Oya.
After six weeks of training, the IOs were sent out to the Gal Oya farmers, with about four or five IOs per middle canal. Each IO was responsible for several little canals. Their job was, at the beginning, specifically to be catalysts for farmer conversations. They did informally pool what they’d learned at weekly IO meetings, but their own learning wasn’t really the focus.
Instead, their job was to go to the farmers using a particular little canal, listen, and then say, “Almost all of you think the roots choking the canal are a problem. What can you do to fix it?”
Farmers often arrived at solutions that involved communal work (like chopping away at the roots). Notice that they’re all doing this together, and that they can *see* that they’re all doing it together. Remember, we’re trying to build up a habit of “quasi-voluntary compliance”, which relies on everyone justifiably believing that everyone else is also complying.
After a time, people got used to working collectively. The next step was to establish more official Little Canal organizations, one for each group of 12-15 farmers. These were very informal. There were no regular meeting times, agendas, or written records. However, one of the things they did was select – by consensus – a farmer to attend middle-level meetings.
That is, they created a middle-level organization where little-level farmers had a delegated voice. I think it’s relevant that the little-level farmers had gotten their act together before anyone started worrying about the middle level. Very bottom up.
Two things stood out to me about the middle-level organizations.
First, there were many of them, and each chose its own rules. No one bothered with finding and applying the One True Middle Level organizational style.
Second, this level also worked by consensus. Consensus systems are somewhat prone to That One Guy who blocks everything unless it’s done his way. I wish I knew more about how they avoided that. Perhaps Sri Lankans have more experience with consensus than we Westerners? Perhaps the work at the little-canal level had instilled some norms? I don’t know, but I wish I did.
The middle-level organizations were the right level to negotiate with the state’s Irrigation Department. The original plan included infrastructure projects. The engineers were not initially inclined to listen to the farmers, but it happened that the project plan called for the farmers to contribute a lot of the actual construction work. The engineers were persuaded that listening to the farmers’ representatives would make the farmers more likely to work hard. Having the engineers responsible for modifying a particular canal talk with the users of that canal worked out well. Crazy, huh?
Finally, a big canal organization was layered on top of the middle canal organizations. It seems this level was still run by consensus.
That’s as much as I have to say. In the show notes, I give some further references, both to the Gal Oya project and similar projects in the Philippines, Nepal, Bangladesh, and Thailand.
≤ music ≥
This may be a good moment to point out that Elinor Ostrom comes from a strain of sociology/economics/political science called “New Institutionalism”. Like older institutionalisms, it studies how institutions work and what makes them durable. Other strains include the Max Weber tradition, which focuses on organizational structure (that is, bureaucracy); the Anglo/US approach of focusing on formal political structures; and a behaviorist or game-theoretical approach that focuses on the individual and bounded rationality. New institutionalism focuses (as far as my shallow reading can tell) on two things:
The first is how formal and informal rules enable or constrain the behavior of individuals and groups. Crudely: get the rules right and behavior will follow.
The second focus is on legitimacy, which I guess means why people choose to follow the rules. There’s still some game-theory-like emphasis on rational decision-making, but it’s not assumed an institution is purely driven by rational or optimal goals. Instead, things like myths and ceremonies are orthogonal to rationality but nevertheless have important effects. I can see that in Ostrom’s emphasis on, say, conversation and norms. She does see people as something other than calculating machines.
So it seems to me that, when it comes to commons governance, rationality is a *constraint* more than a resource. In game theory terms, rules, social interactions, and norms are to be designed (or somehow arrived at) such that individuals are discouraged from “defecting” and so that the whole community arrives at a better (but not necessarily optimal) equilibrium.
Against that background, let me spin a little fantasy about instituting commons-style governance over a code base. My goal is to give you some sort of *feel* for how it might work, supposing that all this talk about water and lobsters and timber might not be doing it for you.
≤ short music ≥
It’s necessary to start small, within individual teams. There’s no particular reason, at an early stage, for teams to coordinate. It would probably be better for each team to come up with its own rules.
It’s important to start with some low-cost improvement, after a probably leisurely set of conversations. If you’re working in a statically-typed language and use a good refactoring editor, you might decide to work on better names, perhaps more intention-revealing, perhaps more tied to the business domain than to the implementation.
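To make “intention-revealing” a bit more concrete, here’s a tiny before-and-after sketch. The code and the names are invented for illustration – they’re not from any real codebase – but the flavor is the point: the second version says what the business means, not how the arithmetic works.

```typescript
// Before: names describe the mechanics. (Hypothetical code, invented for this example.)
function calc(d: number, r: number): number {
  return d * r;
}

// After: names describe the business idea the code exists to serve.
function lateFeeFor(daysOverdue: number, dailyPenaltyRate: number): number {
  return daysOverdue * dailyPenaltyRate;
}
```

A decent refactoring editor makes a rename like this safe and cheap, which is exactly why it’s a good first improvement.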
You’ll need cheap monitoring and cheap decision-making, since those are characteristics of commons governance. Probably for that, you need a syndic. (Recall that “syndic” was the title of the person in Valencia who resolved conflicts for a particular irrigation canal.) Having everyone weigh in on whether a particular name is good or not seems expensive to me.
What I’d be inclined to do, as an experiment, is to let anyone who wants to be the syndic of naming have the job of looking at commits and finding just one better name. Perhaps the role can be rotated if more than one person wants it. The syndic can look at changed code and suggest one name (new or old) that should be changed. A quick conversation, and the change can be made.
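If your team keeps its code in git, the syndic could lean on a tiny helper like the sketch below, which just surfaces the identifiers touched by the most recent commit so there’s a short list to scan for one rename-worthy name. It’s a rough illustration of “cheap monitoring”, not a tool I’m recommending; the function name and the regex’s crude notion of an “identifier” are my inventions.

```typescript
// A minimal sketch, assuming a git repository and Node.js.
import { execSync } from "node:child_process";

// Collect identifiers appearing on lines *added* by a commit, compared to its parent.
function addedIdentifiers(rev: string = "HEAD"): Set<string> {
  const diff = execSync(`git diff ${rev}~1 ${rev} --unified=0`, { encoding: "utf8" });
  const names = new Set<string>();
  for (const line of diff.split("\n")) {
    if (line.startsWith("+") && !line.startsWith("+++")) {
      for (const match of line.matchAll(/[A-Za-z_$][A-Za-z0-9_$]*/g)) {
        names.add(match[0]);
      }
    }
  }
  return names;
}

// The syndic skims the list and picks, at most, one name to raise in conversation.
console.log([...addedIdentifiers()].sort().join("\n"));
```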
Now, my pinned post on Mastodon is ‘I wish "it's unusual to see a computer programmer being so dogmatic" was a thing people said’, so I predict naming disputes. I’d suggest establishing the following norm or rule – just for now, just for names. It’s one I learned from Ron Jeffries when we were doing team coaching for a particular software team.
We were pair programming. Ron and his pair (let’s call him Jim because I don’t remember his real name) were sitting next to me and the person I was working with. It happened that both pairs got into a design discussion at about the same time. Mine, I’m ashamed to say, got a little heated. Since I know Ron cares a lot about design, I was surprised that he quickly agreed they should try Jim’s approach.
That evening, at our daily debrief-and-dinner, I asked him why he’d done that.
He explained there were two possibilities. The first was that Ron was in fact wrong and Jim’s approach was better. In that case, Ron would have learned something about design – a win! – and Jim would now trust him more – another win. The second possibility was the one Ron actually expected: that he was the one who was right. In that case, at some point Jim would realize the problem with his approach – another win! – in a more visceral, effective way than just being *told* about the problem. And Ron was confident the two of them could easily “bend the code” toward Ron’s design. And Jim would benefit from seeing how such “bending” is done, as that’s an under-taught skill.
To expand on that: Ostrom seems to have been a big fan of arguing. She dedicated /Governing the Commons/ to her husband for “love and contestation.” The problem isn’t arguing. It’s arguing without a way to *stop* arguing, without a way to decide how to move on. What Ron did for me, in that interaction, was to teach me a new rule: if you start to get frustrated, say “let’s try it your way and see”. Kent Beck had a similar rule that was something like “No design discussion should last for more than 15 minutes without someone turning to the keyboard to do an experiment.”
Ron also gave a little reinforcement to a norm I’ve been trying to adopt all my life: that it’s better to learn from being wrong than to always be right. A reason our arguments go on and on and on, it seems to me, is that we’ve been acculturated to be the smartest in the room, to think that we have to be seen as *correct*, lest we be diminished in other people’s eyes. Pish. In a long career of being wrong, I’ve learned that a lot of people *do* in fact initially lose some respect for you if you readily admit actual error or even the possibility of erring, but that reaction usually goes away as you continue to work together.
Remember: part of building a commons is having people internalize new norms, including norms of interaction with other people.
Over time, let’s hope your team settles on some broadly accepted rules for good names, with some, uh, “love and contestation” along the way. I wouldn’t be surprised if you eventually change the rules for who detects bad names, when they detect them, how renamings are arrived at, and so on. One thing Ostrom points out is that revising metarules – rules about how you make and enforce rules – often provides more leverage for useful change than revising the “operational”, bottom-level rules themselves.
After that, what? Ostrom seems a fan of dealing with a single problem, getting the relevant solution in place, and then moving on to the next problem. People have only so much attention span for improvements. Also, as Ostrom puts it: “Each institutional change transform[s] the structure of incentives within which future strategic decisions [will] be made.” Okay, I guess you can see why she won the Nobel Prize for economics rather than for literature. What she’s saying, I think, is that success tends to teach you how to succeed. If you try to do A and B at the same time, it’s harder for your approach to B to be informed by your success at A.
How you get from your first problem and solution to a state where the team is truly cooperating as a commons is not something I can predict, but I can imagine things like the team agreeing to take one of the books on refactoring (I’ll list a few in the show notes) and perform those refactorings where they apply. For example, they might decide to stamp out “primitive obsession”.
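“Primitive obsession” is the habit of passing bare strings and numbers around where a domain concept is hiding. Here’s a minimal sketch of the refactoring, with invented names; the point is that validation happens once, at the boundary, and everything downstream gets to trust the type.

```typescript
// Before: a bare string standing in for a domain concept. (Invented example.)
function sendReminder(email: string): void {
  // every caller has to hope the string really is an email address
}

// After: a small domain type that validates once, at the boundary.
class EmailAddress {
  private constructor(readonly value: string) {}

  static parse(raw: string): EmailAddress {
    if (!/^[^@\s]+@[^@\s]+$/.test(raw)) {
      throw new Error(`not an email address: ${raw}`);
    }
    return new EmailAddress(raw);
  }
}

function sendReminderTo(recipient: EmailAddress): void {
  // code here can assume the address is at least plausibly well-formed
}
```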
Now, because I’m a Big Picture guy, I’d be itching to start chipping away at a big problem. For example, I’m a fan of Martin Fowler’s “Strangler Fig” pattern for rescuing legacy code. The idea is that you wrap a big chunk of decayed code with what starts as a thin layer of new code. As you add features to the product, you preferentially put the new code in the outer layer rather than modifying the inside. That outer layer can call into the legacy code, but the reverse is never allowed. That avoids the kind of tight coupling that is probably the problem with the legacy code. You will also – sometimes – judiciously pull out a little functionality from the legacy code and reimplement it (better) in the outer layer. Eventually – and for a big system we’re talking more years than months – most or all of the old functionality will have been sucked out of the legacy code into a code base you can work with. Some, granted, may remain behind as a core. If it works, and it’s not code that people ever have to change, why bother rewriting it?
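Here’s a minimal sketch of what the Strangler Fig’s one-way rule looks like in code. The module and function names are made up; the important part is that the outer layer may call down into the legacy code, but the legacy code never calls back out, so new coupling only accumulates on the side you can still work with.

```typescript
// The outer layer. New features live here, not inside the legacy module.
import * as legacy from "./legacy/billing"; // hypothetical legacy module

// Functionality not yet extracted is simply delegated to the old code.
export function invoiceTotal(customerId: string): number {
  return legacy.calculateTotal(customerId);
}

// New behavior is implemented out here, calling downward only when it must.
export function totalWithLoyaltyDiscount(customerId: string): number {
  return invoiceTotal(customerId) * 0.95;
}

// The rule that makes this a Strangler Fig: nothing under ./legacy/ is ever
// allowed to import from this outer layer. Dependencies point one way only.
```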
It might be tricky to know whether your commons has evolved to the place where Strangler Fig is a low-risk, low-cost – if lengthy – approach.
≤ short music ≥
At some point it might be good to have teams get together to talk about common problems and share solutions. That could be an opportunity to introduce the post-2010 management-friendly version of “communities of practice”. (See episode 22, titled “this is not an episode”.)
I’m sure a lot of people would want to jump to a grand structure that applied to all teams, perhaps headed by a Chief Quality Architect, but I’d push back against that. Remember that a lot of successful commons are “polycentric”, defined as “a social system that has many centers of decision making, each acting somewhat independently but under a common set of guiding principles.” Rather than leaping to a comprehensive organization, I bet it would be more successful for your team to talk to teams “adjacent” to yours – teams whose code frequently affects your code – and then come up with rules that work for you, pairwise.
My personal prejudice, after all these years, is to think that our tendency to look for abstractions or rules with universal applicability is sometimes useful but often a vice – and a trap.
≤ short music ≥
I usually like to end with more of a bang, but that’s what I’ve got. With luck, last episode’s pitch for collaboration between software teams and Indiana University’s center for commons studies will come to pass. *That’s* where valuable advice will come from, not from my half-informed speculation.
Thank you for listening.