Galison’s /Image and Logic/, Part 2: The Trading Zone

Galison uses the metaphor of cultures meeting to trade to describe how, say, experimentalists and theorists collaborate. He describes procedures, machines, and diagrams as akin to pidgin trading languages.

Welcome to oddly influenced, a podcast about how people have applied ideas from *outside* software *to* software. Episode 9: Galison’s /Image and Logic/, Part 2: The trading zone.

At the beginning of the 90 or so years Galison covers, a physicist might very well have been a theorist one week, made instruments another week, and run an experiment with those instruments in a third week. At the end of those 90 years, such a person would have been as scarce as hen’s teeth. There now existed distinct subcultures of particle physics, such as “theory”, “experiment”, and “instrument making”, each with their own journals, their own conferences, and their own typical career progressions. There’s still an overarching culture of physics, but it’s generated in part by how the subcultures interact.

Because they do have to interact. During World War II, places like Los Alamos and the MIT Radiation Lab pioneered the era of “Big Science” and the “Physics Factory”. As the subcultures drifted apart into their own special jargon, special techniques, and special interpretations of common language, it became both harder and more necessary that they work together. And they had to work with an even *more* foreign culture: that of engineering.

Galison’s question in the book is quote “how, given the extraordinary diversity of the participants in physics – cryogenic engineers, radio chemists, algebraic topologists, prototype tinkerers, computer wizards, quantum field theorists – did they speak to each other at all?” And, I’d add, how did they get anything done?

Galison describes his answer with a metaphor from anthropology: the trading zone and its associated trading language.

A trading zone is a place where two cultures meet to trade. One historical example is the Pomor trade, which took place between Russian speakers and Norwegian speakers up in the Arctic. The Norwegians had a problem, called “maggot time”. In July and August, the weather was warm enough that it was hard to preserve fish, and they had no place to sell it. Some enterprising Russians started bringing in rye and wheat to trade for fish, and that worked well for both sides.

They got around the language barrier by creating a *pidgin* language called Russenorsk. It had a grammar mostly derived from Norwegian, but very simplified. (For example, there was quote “no clear verb conjugation” according to Wikipedia. As someone who’s tried and failed to learn Portuguese, which has 11 different verb conjugations, I have to say I approve.) If I understand right, Russenorsk had only one preposition, “på”, which could be used to mean “in”, “on”, “to”, “of”, “before”, or “after”. There were 150 to 200 core words, with around 200 more that have only a single example in the records. The vocabulary was even more restricted than those numbers make it seem, because many concepts, like “halibut”, had two words for them, one derived from Norwegian, one derived from Russian. This was a language good for basically nothing except trading fish for grain. For that, it worked fine for 150 years, but it’s now extinct.

Not only are there few words, but Galison emphasizes that their meanings were narrowly restricted to what’s necessary for the task. As a made-up example, the Russenorsk word for “captain” might have brought connotations of “person with the legal power to do violence to the crew” to a Russian speaker, and it might mean “driver of a boat whose crew are independent contractors” to the Norwegians. But since such connotations never mattered in trade, they didn’t have to argue about which was right. Some lowest common denominator would do.

Pidgins are never a native tongue. They don’t cover enough topics, and they change too fast. But pidgins can complexify enough, and get stable enough, that people can grow up speaking them as their first language. They are then called creoles. You can write poetry in a creole. I’m not going to talk about how Galison uses creoles, just pidgins.

Galison frequently describes the laboratory as a trading zone where, for example, theorists and experimenters develop a pidgin through which they can work together without getting bogged down in their different interpretations of particle physics, machines, how the common goal advances their career goals, and so on. In this way, the pidgin plays something of the role of episode 1’s boundary objects, but what Galison describes is much more fluid and active than, say, using the object “California” to help align the interests of a board of trustees and the director of a museum. Galison is primarily concerned with the trading zone / laboratory as, quote “an intermediate domain in which *procedures* could be coordinated locally even where broader meanings clash.” To him, the trading zone is all about suppressing conflicts about words or concepts in favor of coordinating *practices* tailored to a particular collaboration.

A complication with Galison’s metaphor is that the laboratory’s pidgin doesn’t have a made-up vocabulary. The theorists and experimenters are all talking English or German or whatever. There is a certain redefinition of words, like a restriction of “electron” to only properties that matter to a particular instrument or experiment. But Galison doesn’t depend on meanings of words. What he mostly wants to do is to treat instruments, parts of instruments, documents like the 1940 /Atlas of Cloud Chamber Pictures/, ways of collecting data, and ways of interpreting data all as part of the vocabulary, a quote “materialized pidgin”.

Does Galison’s metaphor work? Or is it bogus? Remember, such questions are not the point of this podcast. The point is to see where a way of describing work takes us.

Technical explanations are coming up, so the usual disclaimer applies: I’ve probably botched the details, but the gist is still probably good enough.

Radar was invented by the British before and during WWII. The MIT Radiation Laboratory was America’s lab charged with developing radar devices that would be used in ships, on land, and in the air. The task required the collaboration of physicists adept in the theory of electromagnetism (that is, Maxwell’s equations for electric and magnetic fields), experimenters, and electrical engineers who often came from a background of radio or sound reproduction engineering.

One problem that radar posed was that it used microwaves, which are considerably shorter in wavelength than radio waves. A microwave’s wavelength was approximately the same size as a resistor or capacitor, which meant the usual theory of how radio waves behave in components didn’t work. So the first task of the theorists was to find rules that would let engineers do their normal calculations and build their circuits.

A second problem was the existence of waveguides. Waveguides are hollow channels that serve to focus the microwave radiation into a beam. Engineers did not want to calculate properties of real waveguides. Instead, they wanted to repeat an old trick. Suppose you want to design an electrical circuit that contains a loudspeaker. That’s a physical component with both electrical and mechanical properties. So the trick is to convert the loudspeaker, analytically, into an *equivalent circuit*; that is, a description of a purely electrical circuit with the same input/output properties as the loudspeaker. Thereafter, you just plug the equivalent circuit into the larger circuit and treat everything as a network of idealized electrical components.

What the theorists needed to do was calculate those equivalent circuits for the engineers to use. And they did. This was no simple matter – it involved the theorist Schwinger inventing a style of approximation that he later built on to do the quantum theoretical work that made him a co-winner of the Nobel Prize. But in the interests of time, I’m leaving out that story.

The Radiation Lab’s /Waveguide Handbook/ described the result as, quote “casting the results of field calculations in a conventional engineering mold from which information can be derived by standard engineering calculations”.

The way Galison wants us to look at this experience is that the pre-microwave way of calculating networks formed a pidgin in which electrical components formed the words. The syntax – or combining rules of the language – was the existing theory of how a particular network of resistors and capacitors and equivalent circuits and parallel and serial connections was simplified down to the input/output values that let it be treated as a black box for engineering work.

The theorists provided a new or updated syntax (calculation rules used like the old ones but derived from Maxwellian field theory) plus a lot of new vocabulary words in the form of equivalent circuits for differently shaped waveguides.

The next example is quantum chromodynamics, or QCD. That’s a theory of how quarks combine to form particles like protons and neutrons, collectively called hadrons. In the time Galison covers, the early 1980s, it was a relatively new theory, and it didn’t have a lot of strong experimental evidence behind it. Per Galison, quote “For theorists, [QCD] drew its strength not from quantitative links to specific experiments, but from a combination of qualitative explanations and the intratheoretic links it provided between different domains of phenomena.” And QCD, in turn, couldn’t do much to guide experimentation. Galison says, quote “QCD cannot predict how quarks [combine to form] hadrons”, which seems kind of a problem considering that’s its whole point. I can’t tell from Galison whether the calculation was in principle impossible or just impossibly time-consuming. I think the former.

In any case, that left experimenters in something of a pickle. They had these very expensive devices for smashing beams of particles together. They had results that showed collisions producing “jets” of hadrons (more than one per collision, each jet’s hadrons heading in roughly the same direction and strongly interacting with each other). But that provided little guidance in setting up further experiments, and no particular reason to believe that any particular experiment you might be planning would have anything to say about QCD.

Enter groups of theorists intent on helping by way of simplifying, or restricting, or pidginizing. First out of the gate were Feynman and Field, who developed their “independent fragmentation model”. This was explicitly *not* a candidate for being a true theory. They wrote, quote “We think of our jet model, not as an interesting theory to be checked by experiment, but rather as a possibly reliable guide as to what general properties might be expected experimentally” and, quote “The predictions of the model are reasonable enough physically that we expect it may be close enough to reality to be useful in designing future experiments, and to serve as a reasonable approximation to compare to data. We do not think of the model as a sound physical theory.”

What it was mainly for was to guide the discovery of more data about jets. Feynman and Field, again, quote “We thought it might prove useful to have some easy-to-analyze ‘standard’ jet structure to compare to. Thus, a hadron experiment could say ‘the real jets differ from the ‘standard’ in such and such a way.’”

Galison calls this an ‘intra-experimental pidgin’. It gave experimenters bashing protons into anti-protons ways to compare their jets with the jets other experimenters got bashing electrons into positrons. It was enthusiastically adopted.

A group in Sweden approached the QCD vs. experiment problem differently. They simplified QCD by, for example, assuming the connection between two quarks was a straightforward spring-like “string” rather than a more diffuse and hard-to-calculate field. What this group was attempting was somewhat more ambitious than Feynman and Fields: they wanted a model that could (1) suggest experiments, (2) suggest how experimental results would feed back into the theorists’ understanding of QCD, and (3) suggest improvements to the simplified model itself. In Galison’s terms, they, quote “took a highly simplified physical model and looked for results that this model held in common with QCD on one side and with experiment on the other”. That’s reminiscent of Russenorsk providing ways for Russians and Norwegians to achieve their different goals.

There was another theory, also intermediate between experiment and QCD, that I’ll skip to save time and because Galison’s explanation makes absolutely no sense to me.

As far as I know, all the theories (or models) were successful in guiding experiment. People produced useful results. At some point, last episode’s Time Projection Chamber got involved. In a lengthy four-way discussion, the theorists and experimenters settled on particular experiments that would suggest which of the three theories was closest to nature. As it turned out, the string model won.

For my purpose, the most important thing is how much *work* was required to get to a point where people agreed on what counted as evidence. Galison says, quote “From the theorists’ side, people like Wolfram, Fox, Gottschalk, and Andersson had to plunge themselves into the details of event simulations and specific experimental conditions that took them a long way from the clear ethereal world of renormalization, chiral symmetries, and grand unification. From the experimenter’s side, it meant that [experimenters] Werner Hoffmann and Charles Buchanan began coauthoring articles on the underlying basis of the string model.”

Both the Radiation Lab and QCD examples have the deliberate restriction of complexity that’s characteristic of pidgins. They also exhibit three other characteristics Galison emphasizes.

First, they are *local*. I think of them as belonging to specific places. That’s literal in the case of the Radiation Lab. It’s perhaps less literal in the QCD case because the string people were in Sweden. Although they did visit California for important discussions, it’s hard to say that everyone involved was in a single *place*. However. Humans are really good at conceiving of non-spatial things as places. Think of “going to a website to check the weather”, “storing data in the cloud”, or saying there’s a band that’s in the neo-gothic post-punk hair-metal *scene*.

The second characteristic of pidgins is that they are *temporal*. They change a lot faster than non-pidgin languages. In both of the QCD examples, the pidgins were *designed* to change in response to experimental results. Pidgins are also ephemeral, readily abandoned in a way full-fledged languages aren’t.

Third, pidgins are *origin-sensitive*. Pidgins are tools that arise to solve a particular problem, or to handle certain recurrent situations. You can’t really understand a pidgin independently of understanding what it’s *for*.

So, what does all that say to us? I confess I haven’t consciously used the idea of trading zones in my own work. I’m hoping to have some guests from the Domain-Driven Design tradition, where I think its ideas are most likely to resonate. Or, if this resonates with *your* practice, send me a note so that I can interview you.

That said, let me speculate.

It strikes me that software itself is a pidgin between people who use it and people who build it. Thinking of it as such might help in product design. For one thing, it might help us not be so *serious* about what our software is. It is not some sort of representation of some sort of world. It does not contain a “model” of the “domain”. It’s something to facilitate our users’ practices and procedures, which must lie at least partly outside the software. They, the users, trade us money for help with *some* of their goals. Thinking of our software as a radically restricted, changeable, partial tool might keep us humble – and so help us design software that isn’t so overbearing.

In my analogy, product owners are akin to those string theorists who simplified quantum chromodynamics and became hybrid enough – part theorist, part experimentalist – that they could guide the development of a pidgin.

But here’s a hope: they (and maybe not just they) might be more than that. All of the success stories Galison cites had their fair share of bobbles and near-disasters along the way as the participants stumbled toward a workable organization, workable practices, a workable pidgin. Would they have stumbled less if they’d had better models – like Galison’s trading zone model – for what they were doing?

Would product development go better if we thought of ourselves as doing the sort of history Galison is doing? He says, quote “the historian is not simply concerned with the interpretations of meanings but rather with defining the ambiguities of the symbolic world, the plurality of possible interpretations of it, and the struggle which takes place over symbolic as much as over material resources.”

Keeping ambiguities in mind, rather than trying to definitively resolve them, might produce software – and teams – that are better prepared to handle the rapid change inevitable in pidginized situations.

That’s all I’ve got. Thank you for listening.
