E49: Metaphors and the predictive brain

Welcome to Oddly Influenced, a podcast for people who want to apply ideas from *outside* software *to* software. Episode 49: Metaphors and the predictive brain

Homeostasis is out; allostasis is in. In this episode, I’ll briefly discuss Lisa Feldman Barrett’s theory of the brain and use it to launch another critique of metaphors in programming. But first let’s talk about me.

≤ music ≥

I used to fly the sort of gliders where you start on the ground, attached to a powered airplane – the “tow plane” – by a surprisingly thin tow rope. The tow plane drags the glider up to (in our club’s case) some 700 meters, whereupon you, the glider pilot, pull a knob that lets go of your end of the rope. The tow plane turns left and dives; you turn right and soar.

Tow ropes are stronger than they look, but they do sometimes break. This is essentially the same as starting your flight way lower than normal. In glider training, you practice calling out key stages of the ascent in the form of heights. Below a certain height, your only viable option is to land straight ahead. Then there’s a range of heights where your best bet is to do a U-turn and land “downwind,” with the wind blowing behind you, which is harder than landing with the wind blowing at you. Then there’s a height at which it’s advisable to make three left turns to trace out three sides of a rectangle and land upwind in the normal way.

Just learning to call out the heights isn’t enough to get certified to fly solo. You have to be tested.

So, picture me, sitting in our club’s venerable Schweizer 2-33 training glider. The instructor is sitting invisibly behind me, in his own seat with duplicates of all of my controls – including the “release the tow rope” knob. I was in the middle of a routine ascent when he pulled that knob. I have a fairly vivid memory of the “bang” of the releasing rope – doubtless exaggerated – and of watching from the cockpit as the tow rope flew away.

I *instantly* and without any conscious thought turned to land downwind. Smoothly and correctly. I felt rather pleased with myself.

So hold that example in mind while I talk definitions.

Homeostasis is the ability of the body to keep certain measurements within narrow bounds. For example, under homeostasis, your blood pressure is steady, your pulse rate is steady, and your internal temperature is steady.

However, sometimes the brain does something else. When you’re infected by a virus, your brain (using mainly the hypothalamus) finds it useful to abandon homeostasis and raise your body temperature. That’s because viruses like it cool, so a fever helps your body fight infection. That change away from homeostasis is called allostasis.

I learned of allostasis in Barrett’s 2017 paper “The theory of constructed emotion: an active inference account of interoception and categorization.” She starts with what the brain is *for*:

“All brains accomplish the same core task: to efficiently ensure resources for physiological systems within an animal’s body (i.e. its internal milieu) so that an animal can grow, survive and reproduce.”

Allostasis is the means to that end:

“Allostasis is not a condition of the body, but a process for how the brain regulates the body according to costs and benefits […] Whatever else your brain is doing—thinking, feeling, perceiving, emoting—it is also regulating your autonomic nervous system, your immune system and your endocrine system as resources are spent in seeking and securing more resources.”

Let’s say you’re sitting in a rocking chair and decide to stand up. Before you start moving, your brain needs to increase your heart rate and blood pressure. If it didn’t, you’d get that “I stood up too fast” feeling of dizziness. It’s the process of adjusting those values that’s called allostasis.

Once the heart rate and blood pressure are appropriate, the brain sends signals to motor neurons that start the choreographed process of standing up. Your brain has learned from a vast amount of experience that there are various possible futures at this point. In one, you’re starting to stand up and all is going well. In another, you mistimed the rocking of the chair and you’re off balance.

The brain has nothing to work with but perceptions, the most important of which in this case come from what are called “proprioceptors,” internal sensors that send signals based on the positions and movements of parts of the body.

But the brain doesn’t passively receive those signals. It’s apparently both more efficient and effective for the brain to be proactive. We can imagine it making predictions of what signals it will receive in the two cases. “If I get *this* pattern of signals in the next little while, all is well – continue; if I get *that* pattern, my body is out of balance.”

Associated with each prediction are actions. When one of the predictions matches, the corresponding action is taken. If it turns out you’re off balance, the action might be to instruct thigh muscles to relax and plop you back down, or, alternatively, to grab the arm of your chair to steady yourself. Which does it do?

Let’s get back to my glider analogy. I hypothesize that, upon takeoff, my brain made the prediction that I would shortly see the tow rope dwindling away in front of me (the same thing I saw on every flight, just way too early). Were my brain to perceive that, the associated action would be to put the plane in the right orientation for final approach, because I was going to land straight ahead in the cornfield past the end of the runway. That action means an arm movement and setting a new prediction for the view out of the cockpit that indicates success – which triggers another action to pull back a little and keep the plane on the glide path.

But that prediction didn’t come true. When I hit the first callout point (70 meters, I think), either the sight of the altimeter or the sound of myself saying “70” caused that first prediction to be “canceled,” with a new prediction/action pair activated. The prediction would be the same (tow rope visibly dwindling), but the action would be to invoke the habit of banking and turning.

I have a similar theory for standing up. We humans spend a good amount of time in early childhood learning how to control our bodies. That aimless waving about of limbs that infants do? They’re actually at work learning the “statistical regularities” connecting motor movements to their proprioceptive consequences and other perceptions. That’s not built in, not instinctive. As Andy Clark says,

“One infant, Gabriel, was very active by nature, generating fast flapping motions with his arms. For him, the task was to convert the flapping motions into directed reaching. To do so, he needed to learn to contract muscles once the arm was in the vicinity of a target so as to dampen the flapping and allow proper contact.

“Hannah, in contrast, was motorically quiescent. Such movements as she did produce exhibited low hand speeds and low torque. Her problem was not to control flapping, but to generate enough lift to overcome gravity.”

Babies learn from experience how to make their own limbs do what they want.

Later in life, children learn how to grab things to steady themselves when they’re off balance. I imagine that part of the process of deciding to stand up is to attend to things around you that you might grab onto. (Maybe you’ve been keeping track of them all along, or maybe you check at the moment of decision – I don’t know.) The result is a prediction about whether the appropriate reaction to the perceptions associated with being off balance is to slacken the quadriceps muscles or to start reaching out with the left hand. Only one of those is chosen to be the action assigned to the prediction that the proprioceptors will send the “we’re off balance” signal.

≤ music ≥

My understanding is that the model of the brain as a prediction engine is most solidly established for relatively quote/unquote “simple” behaviors like moving arms and legs. Barrett’s research programme is to extend the theory to emotions. I’m going to go farther. It’s always dodgy to extrapolate brain research, but I’m going to do that anyway and claim that conversation is heavily predictive, and metaphor plays into that.

My argument is as follows: any time we’re engaged in conversation, we are predicting our interlocutor’s possible reactions and the appropriate resulting actions. Let’s take an example:

A first-year college student, call him Stu, approaches his Resident Advisor or RA, call her Rachel. (An RA is an upperclassman or graduate student who lives in a college dorm and is paid to provide advice to the dorm’s less experienced and younger students.) Stu hems and haws a bit, being one of those people taught to avoid showing vulnerability, something Rachel knows from past experience. Since it’s the weekend before final exams, Rachel can guess what’s happening, and conveys to Stu some truth-valued propositions for him to evaluate:

“It is nearly time for final exams. You, Stu, appear to me to be nervous. Students in your position are often nervous that finals will have bad effects on them and are regretting their choice of college. Those concerns are almost always exaggerated. Finals are a near-term cost that will be exceeded by the medium-term reward of this semester of education. And the costs of semesters of schooling are outweighed by the long-term benefits of a college education.”

No, of course she doesn’t say that. How would that be reassuring? She’s not trying to convey truths Stu already knows, albeit only intellectually; she’s trying to manipulate his emotions or feelings. Her brain, like everyone’s, acts to change its environment. Because we are social animals, our survival depends a *lot* on the emotions of people around us, meaning they are a critical part of our environment. They are so often so salient that the brain surely automatically attends to them, just as it automatically attends to the risk of being off-balance when standing up from a chair. Rachel is perceiving Stu’s likely emotion, and she intends to change it.

So Rachel actually says something like “You know, everyone’s nervous with finals bearing down, but people get through it. Try to remember that, in two weeks, you’ll be one semester closer to graduation.”

I constructed those sentences to reinforce last episode’s message. Notice that they use two separate metaphors for the future. One is the metaphor system `A future event is an object moving toward you`. In this case, the use of “finals bearing down” picks out the association with danger or threat – of objects moving toward you fast. The other is the metaphor system `Time is a medium through which we move`. Here, Rachel is portraying the near future as some viscous medium that people “get through,” with acknowledged difficulty. Note also that life is being portrayed as a journey with a destination, with some parts being slower and more difficult than others: time is not portrayed as advancing by one second per second.

As always, the brain gleefully mixes and matches even contradictory metaphors in the service of activating associations.

But – and this is the point of this episode – the process of Rachel’s brain formulating and saying, “You know, everyone’s nervous…” includes as one prediction that Stu will make perceptible bodily and facial movements compatible with an upcoming declaration that no, he’s not nervous at all, this is something else (even if he actually is nervous and just isn’t admitting he’s seeking reassurance).

If Stu did that, it’d be the equivalent of the tow rope breaking. Rachel would shift smoothly and mostly unconsciously away from the planned path to one appropriate to the developing situation.

≤ short music ≥

So sentences are also social animals; they don’t live in isolation. And sentences are formed in a predictive process that takes into account the listener’s various predicted reactions. Or the reader’s. Some writers have a conscious picture of a specific person they’re writing for. Others, like me, have a more generic person whose vaguely-predicted reactions tell me where my argument is going astray. Plus one specific person, Dawn, who unaccountably listens to episodes on her morning walk. Still others probably don’t believe they’re doing the predicting-the-reaction thing at all, but I suspect they’re fooling themselves. Speech and writing are so closely related that I don’t see how massive experience with speech can fail to be adapted for writing in the normal way the brain kludges up new abilities by piggybacking on old ones.

And that’s a problem with metaphors that appear in program text: there’s *generally* no anticipated conversation. Oh, we say things like “I think the code is trying to tell us something,” but really? In my case, when that happens, I’m not having a conversation with the code, I’m finally giving in to increasing discomfort – usually it’s well after I should have started paying attention to, say, tension in my muscles, the way I’m striking the keys on the keyboard harder, and so on. If we want to use a conversational metaphor for this process, I had my fingers stuck in my ears, saying “La-la-la, I can’t hear you” to my body, and then decided to stop doing that.

That’s not to say there aren’t *some* parallels between coding and writing for an absent reader. For example, many years ago, I wrote a code coverage tool for Clojure. For some functions, it used a particular sort of variable that was generally out of favor, partly because it interacted poorly with multithreading. I signaled that to the reader by (1) putting the relevant code in a file called `thread_safe_var_nesting.clj`, whose name reveals I know about the problem, and (2) adding a docstring (a kind of widely-visible comment) that read, “Used instead of `with-bindings` because bindings are thread-local and require specially declared vars.”
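
For listeners who read Clojure, here’s a minimal sketch of what that kind of signaling can look like. It is not the actual tool’s code: the namespace matches the file name and the docstring is the one I just quoted, but the function name and body are my own invention for illustration.

```clojure
;; Hypothetical sketch – not the real coverage tool's source.
(ns thread-safe-var-nesting)

(defn set-var-root-value!
  "Used instead of `with-bindings` because bindings are thread-local
  and require specially declared vars."
  [the-var new-value]
  ;; alter-var-root changes the var's root binding, which every thread
  ;; sees – unlike a binding established by `with-bindings`, which is
  ;; visible only to the current thread.
  (alter-var-root the-var (constantly new-value)))
```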

And anticipating the reader probably does happen within most routines or methods. The small scope allows for tractable Barrett-esque predictions about the reader’s reactions. But at the larger scale, the writer has no control over the order of reading, and that makes it hard to predict *where* some reader might have some reaction and *when* that reader might read your counter to their anticipated reaction.

So conversation – a powerful metaphor for investigating how metaphors work – is very limited in code. It’s limited to advice like “if you have trouble finding a method, edit the code to move it to the place you looked first,” which implements a sort of conversation between the original coder and the new one. Or to reviews of code changes, which I venture to guess are more often read linearly, from top to bottom, making the reviewer’s reactions more predictable.

But coming back specifically to metaphor, I’ve tried to demonstrate in previous episodes that the most interesting uses of metaphors are (1) revealing an association the reader hadn’t yet made (or strengthening or weakening one which they had made) and (2) helping you solve problems.

But code is presenting a solution, not a problem, and it’s not clear what new associations are evoked by seeing the nth use of the classname `Invoice`. I’ll defer saying more about that until the next episode.

Instead, I’ll close by noting again that among the associations commonly evoked by metaphors are feelings and emotions. I know, I know, “facts don’t care about your feelings,” but let me just point out that statement is intended to be both factual *and* to hurt the feelings of the listener or the reader of the tee shirt at the Trump rally.

I think you won’t go far wrong if you think *all* writing is persuasive writing. Even the scientific paper – perhaps the paradigm of “just the facts, ma’am” – assumes its reader is taking a particular interested but skeptical stance toward it. I’d count that under “feelings,” and note that the stereotyped structure of such a paper rewards that reader and discourages other types. For example, it’s conventional to put experimental Results in a separate section placed before the Discussion of those results. That rewards skeptical readers who want to form their own opinions before reading the authors’. Indeed, authors who want to break away from the conventional form would do well to add some text to explain why that’s OK, actually. To assuage the readers’ feelings.

I don’t know how persuasive writing fits with code. Persuade people of *what*? That the code is correct? That the problem is understood – and how it’s understood? That this particular hunk of code is coherent and cohesive, so changes should be made elsewhere? As an estimated 80% of my wife’s scientific papers say, “More research is needed.” I’m fairly sure that, to the extent code is persuasive, metaphorical names in programs won’t be deployed the way they are in conversation. They’re too lightweight to take the load.

Two down, one to go, in my series sniping at programmatic metaphors. Thank you for listening. I’m off on a road trip. Road trip! Road trip!
