E7: Imre Lakatos on what persuades scientists to risk their careers

Imre Lakatos intended to give rules for when scientists would be *rational* to switch to a new research program. At this, he probably failed, but I think he provides good heuristics for how to *persuade* scientist-like people to make a bet on something new.

Welcome to oddly influenced, a podcast about how people have applied ideas from *outside* software *to* software. Episode 7: Imre Lakatos on what persuades scientists to risk their careers.

Around the early 1970s, the Hungarian-born philosopher Imre Lakatos had a problem. Three problems, actually. The first was named Immanuel Velikovsky, the second was named Isaac Newton, and the third was the Soviet Union. From them and others, he’d develop a theory of what makes it *rational* for a scientist to either devote their career to a new quote “research programme” or to abandon an old one.

Now, it seems to be the consensus that Lakatos failed. He didn’t provide something a scientist could plug into a reasoning machine and use to make career decisions.

He did however, I think, capture factors that can *persuade* – if not in a strictly rational way – scientists and science-inclined people to make big bets on radical new approaches. Since a lot of people in software are science-inclined, I think his key ideas apply to us too. Throughout, I’ll use Agile as my modern example, because I recognize Lakatos’s factors apply to it nicely. They’re pretty descriptive of what persuaded me, at least, to make my bet that what wasn’t yet called Agile was going places.

If the people *you’re* trying to persuade are like me, Lakatos’s guidelines may help you explain what you’re doing in a more convincing way.

Immanuel Velikovsky published /Worlds in Collision/ in 1950. It was, roughly speaking, a work of comparative mythology. Velikovsky’s thesis was that in early recorded history, a lot of cultures recorded a lot of weird events, which he wanted to take seriously. The preface says, quote “The historical-cosmological story of this book is based in the evidence of historical texts of many people around the globe, on classical literature, on epics of the northern races, on sacred books of the peoples of the Orient and Occident, on traditions and folklore of primitive peoples, on old astronomical inscriptions and charts, on archaeological finds, and also on geological and paleontological material.”

Velikovsky took all those reports as factual. That is, when the Israelite leader Joshua fought the Amorites, the sun really did stand still in the sky, and the majority of the Amorites really were killed by large hailstones. However, Velikovsky didn’t think the explanation was “God answered Joshua’s prayer”; rather, he came up with a physical theory that went something like this:

Around 3,500 years ago, a comet was ejected from Jupiter and passed by the earth twice within a century, causing excitement such as temporarily stopping the earth’s rotation when Joshua was fighting the Amorites. It also dislodged Mars from its orbit, and around 800 years later that planet also passed close to the earth, causing other catastrophic events.

Eventually things calmed down. The comet settled into a more circular orbit around the sun, and we today call it Venus. Mars resumed a regular orbit.

Scientists were not… overly impressed. Rather embarrassingly, they tried to get the book cancelled by its publisher. It all came to a head in 1974 when the American Association for the Advancement of Science held a conference on the book, which Velikovsky attended. The scientists went on and on about things like conservation of angular momentum, conservation of energy, and Newton’s laws. Velikovsky defended himself by saying that electromagnetism played a larger role in celestial mechanics than those scientists allowed: the sun, Venus, and so on were highly charged. After all, the notable scientist Robert Millikan famously measured the charge of an electron by suspending charged oil drops in an electric field, and nobody accuses *him* of violating the law of gravity.

Although Lakatos thought Velikovsky was obviously a crackpot, he nevertheless took the quote “Velikovsky affair” seriously. A big problem, as he saw it, was that Velikovsky was playing by the rules of science. He took reported observations. He developed a theory to explain them. And – and this is important – *he made testable predictions.* For example, Velikovsky had an explanation for the biblical account of Israelites eating manna from heaven during their 40 years in the desert. In his telling, Venus came so close to the earth that their atmospheres mixed. Hydrocarbons from Venus mixed with oxygen from the earth to form edible carbohydrates that then sifted down onto the desert. Based on that, Velikovsky made a specific, testable prediction that Venus was rich in petroleum and had an atmosphere heavy in hydrocarbons. And he made other testable predictions.

Things get worse because, while Velikovsky played by the rules, unquestionably great scientists *don’t*. Isaac Newton, for example, broke rules that Velikovsky followed. (We’ll see examples later.)

So Lakatos’s goal was to find new rules of science that would *include* Newton in the category “scientist” and *exclude* Velikovsky, instead putting him in the category “crackpot”. Lakatos called his new rules “the methodology of scientific research programmes”. The name is a bit unfortunate for us, because he didn’t mean “methodology” to be anything like “a series of steps to follow to do science”. He meant it more like “a description of how science tends to progress from worse theories to better theories.”

With that in mind, let’s take a look.

Prior to Lakatos (at least according to him) people paid too much attention to isolated scientific theories, which are to be either refuted (by the discovery of experimental evidence that contradicts what the theory predicts) or provisionally accepted as “not yet refuted”. This is a philosophy of science most associated with Karl Popper, though arguably Lakatos is treating him as something of a straw man.

Lakatos argued that “refutation” is too far removed from what scientists actually do. Hardly anybody gives up a theory because it’s refuted by experiment. For example, Newton's theory did not correctly predict the observed motion of the moon, but he did not discard it. When scientists discovered that the precession of Mercury's perihelion was faster than Newton predicted, people shrugged and waited for Einstein to explain it. And about that explanation, general relativity, Lakatos said, quote “Einstein's theory inherits 99% of [the anomalies found in Newtonian mechanics and] eliminates only 1%... As a matter of fact, Einstein's theory increases the number of anomalies; the sea of anomalies does not diminish, it only shifts."

Instead, according to Lakatos, scientists organize in cooperative networks to work on a particular research program that is organized around a *hard core* of two, three, four, or at most five postulates or statements of fact. The research program is all about exploring the consequences of the hard core. Lakatos considered Newton’s three laws of dynamics and his law of gravitation to be an excellent example of a hard core.

The Agile Manifesto, you’ll note, has four postulates. I’ll rewrite them like this:

* If you are having problems in your team, look to solutions requiring individuals and interactions before leaning on processes and tools.

* If you are having difficulties satisfying your customers, try delivering working software at frequent intervals before you ask for comprehensive documentation of what they really want.

* Don’t let the customer defend themselves against you by insisting on a rigid, transactional, quote “you do this, we’ll pay you that” contract-like relationship. Instead, invite them to collaborate throughout the project on figuring out what to ask for next.

* Accept the reality of change. Favor training yourself (and your code) to handle change gracefully rather than trying to plan harder in the hopes of seeing less change.

I rewrote the Agile postulates to make their function as a hard core more obvious: they are tools for growth. In software, that growth is in your ability to solve today’s problems, to be more prepared for tomorrow’s problems, and to perform more competently in the steady state between problems.

What if you can’t solve some problems with the hard core? For example, Agile was clearly developed in the context of small teams. The preface to the first edition of /Extreme Programming Explained/ says explicitly, quote, “XP is designed to work with projects that can be built by teams of two to ten programmers, that aren't sharply constrained by the existing computing environment, and where a reasonable job of executing tests can be done in a fraction of a day." You’re not on such a project, you say? I guess you’ll have to do something else then. Right now, we can’t help you. That’s in keeping with Lakatos’s observation that, quote, "theories grow in a sea of anomalies, and counterexamples are merrily ignored.”

The hard core can even contradict things everybody knows are true. Rutherford's model of the atom (mostly empty space, electrons orbiting a nucleus) violated Maxwell's equations, which were believed to be rock solid. Those equations were certainly much more compelling than the new evidence Rutherford's model was intended to explain. But Rutherford's programme essentially said, "We'll figure out how to reconcile with Maxwell later. Meanwhile, we’ll use this model to suggest new experiments.” (The solution, by the way, was quantized orbits - the so-called “Bohr atom".)

So far this is all negative: defenses people use to protect a research programme, which is not the same thing as reasons to join up with it. What convinces scientists to join?

The first thing is *novel confirmations*. What convinced scientists of Newton's theory of gravitation? According to Lakatos, it was Edmund Halley's successful prediction (to within a minute) of the return date of the comet that now bears his name. What "tipped" scientific opinion toward Einstein's theory of general relativity? The famous experiment in which the bending of light was observed during a solar eclipse.

It seems to me that “novel” here is in two senses. Early in the research program, it means both new *and* dramatic. The prediction has to be both new – derived from the hard core – and unexpected. Later on, it can mean mostly just new: the theorist predicts, and the experimentalist determines the prediction was correct. But there still need to be occasional surprises, to keep everyone enthused.

To my mind, a key novel prediction of Agile was: “work doesn’t have to suck”. A lot of early Agile took place in a context where people just assumed the majority of projects would be quote “death marches”, where a project a year or two in length might just naturally cause a few divorces due to overwork.

In contrast, I remember in 2001 or 2002, I dropped in on a well-established Scrum team, just to learn how they did things. I remember talking to their product owner who said something like, “My job is more work than it used to be, but it’s great. I can’t imagine going back to the old way.” That, my friends, is a novel confirmation of a prediction of joy at work.

In contrast, the moment I decided to get out of Agile consulting was maybe 7 years later, when I was brought into a regional bank that had been quote “rolling out Scrum across the organization”. I was talking to a programmer who said, I remember vividly, quote “At least my job doesn’t suck as much as it used to.” That was a confirmation that Scrum was better, I guess, but it’s hardly the sort of dramatic confirmation that attracted me to Agile.

The second way that research programmes become appealing is a little complicated to explain, so let me start by discussing how research programmes respond to counterexamples or anomalies that can’t just be ignored. Here’s where I get back to Newton.

According to Lakatos, after Newton published his book on orbital dynamics, the Astronomer Royal of England wrote to him saying, regretfully, that they had many many years of data on the orbit of the moon and that, oops, those facts didn’t fit Newton’s theory of gravitation. Newton wrote back with words to the effect that “I enclose my new theory of refraction. If you use this to adjust your observations, you’ll find that they match my earlier theory.” (I should note here that Lakatos sometimes exaggerates for effect. He even wrote a paper suggesting that adjusting the history to match what he called a quote “rational reconstruction” was OK. Yeah, I’m uncomfortable with that, too, but I don’t think he really distorts Newton’s history to make it match his point.)

Lakatos claims it is common for the hard core of a research programme to be surrounded by a *protective belt* of auxiliary hypotheses that are used to handle telling counterexamples. The theory of refraction is one such.

(It’s perhaps also worth noting that Newton’s theory of refraction didn’t actually solve the whole problem, because the moon’s center of mass isn’t at its geometric center, something that I think wasn’t understood until the era of space flight. But the remaining anomaly was put into the category of something that could be ignored.)
Lakatos thinks protective belts fall into two different categories.
In Newton’s case, he did not develop the theory in response to new data. (If I remember right, the Astronomer Royal offered to send him the contradictory measurements, but Newton said, “don’t bother, just use this new theory.”) And his *progressive* theory – his protective belt – made its own novel predictions, which were later confirmed.
Lakatos contrasts this productive progressive belt with a different kind. To explain that, let me take a little historical digression.
Imre Lakatos grew up in a Hungary dominated by the Soviet Union. Around the time of World War Two, he became a committed Stalinist. (Having your country be invaded by Germany will do that for you.) However, his views shifted around 1956. (Having your country be invaded by the Soviet Union will do that for you.) He turned against state socialism. So he fled, ending up at the London School of Economics.
Out of his disillusionment, he cites the Soviet Union as using the quote “ad hoc” style of protective belt. It goes back to the Soviet Union being a substantial problem for Marxism. Marxism, intended to be scientific about the laws of history, had predicted that communist revolution would come first in the most industrialized nations: the ones with an industrial proletariat, an established middle class or bourgeoisie with some political rights, and a somewhat reduced upper class. Awkwardly, the Russian Revolution happened in the most feudal of the European powers, the one with lots of peasants, not much of an industrial proletariat, and a pretty powerless middle class. This really bothered the early Bolsheviks, according to my reading. Their theorists, including Lenin and Stalin, were true believers, so they had to reconcile the reality on the ground with the theory. My understanding is their first approach was to work *really* hard to create the Russia that *should have* existed before the revolution. That accounts for why they were so intent on building up Russia’s industrial base – you can hardly have an industrial proletariat without any industry – and so casual about destroying the peasantry. They were supposed to have been out of the picture long before.
They also expected – or hoped – or predicted – the more industrial nations would quickly have their own revolutions, the ones Marx predicted. After all, the time was clearly ripe. In a sense, I guess, they created a potentially productive protective belt – had their predictions come true.
My impression is that by the time Lakatos was growing up, communist leaders had given up on reconciling reality with theory and were intent on explaining it away: Marx was right, of course, but yadda yadda yadda, excuse excuse excuse.
Lakatos objected to such theories because (1) they were obviously motivated by explaining away unfortunate facts, and (2) – more importantly – they either couldn’t make predictions for the future, or made predictions that were not confirmed. As he puts it: quote “[Some proposed counterexample] was never discussed before, but now you can account for this case too by introducing an ad hoc auxiliary hypothesis. Everything can be explained in this way, but there is never any prediction of a corroborated novel fact."
The practical import of this is that you can *never* take down a rival research programme by pointing to its failures. They can always be defended. Instead, your approach to that other programme should be “What have you done for me lately? Surprise me. Delight me.”

Scientists join research programmes that have clear hard cores, that continue to make novel predictions which get confirmed, and whose protective belts do the same (or which drop protective theories that stop doing so). The protective belt can be changed; the hard core never is.
Once the research programme stops doing all that, scientists begin to peel off. Even hard cores with excellent auxiliary theories can run out of steam. As Lakatos puts it: quote “[A programme] is degenerating if ... (1) it does not lead to stunning new predictions (at least occasionally...); (2) if all its bold predictions are falsified; and (3) if it does not grow in steps which follow the spirit of the programme." That last is kind of vague, but I think it means no longer using the hard core: something like “well, we’re doing Agile, but really we need to institute more codified processes because remote work is hard.” In a progressive programme, you’d apply the Agile Manifesto hard core and its associated theories when faced with a novelty like remote work.
My opinion is that Lakatos provides a way for Agilists – and others – to think about what we’re doing. Specifically, whether we’re actually working in the spirit of the hard core or just accumulating whatever kludges can be used to explain away failures. Lakatos notes that research programmes can degenerate for a time and then regain their progressive character. Let’s hope that happens to Agile, not that it contents itself with making work suck a little less.

I next want to discuss Galison’s /Image and Logic/. After rereading 844 pages with *way* too much detail about the history of particle physics detectors – bubble chambers and the like – I’m finding it a hard book to summarize, and the woman I affectionately call Pterano-Dawn wants to go on a road trip. So there will likely be a gap in the episodes, which I never really expected to be weekly in the first place. Sorry, and thank you for listening.
