E45: The offloaded brain, part 5: I propose a software design style

Welcome to Oddly Influenced, a podcast for people who want to apply ideas from *outside* software *to* software. Episode 45: The offloaded brain, part 5: I propose a software design style

I’ve been playing a little game in previous episodes. I’ve been using the word “model” where cognitive science people tend to use the word “representation”. There’s a subtle difference.

First, “model”. The statistician George Box famously said “all models are wrong, but some are useful”. Representations are judged by a different standard, where the highest praise is that a representation is *faithful*. That is, representations are oriented towards *truth*. Steve Doubleday suggested a way to explain the difference, and it really resonated with me because…

I grew up in semi-rural central Illinois, and I still live there. Illinois is in the US Midwest, and most of it is very flat. As a result, roads tend to be aligned with the major points of the compass. Back when I used to fly gliders out of “Monticello International Airport” – a grass strip for small aircraft – every trip I took there started on County Road 1000, which lies between County Road 1100 (a bit north of it) and County Road 900 (an equal distance south of it). Technically, all those road names are suffixed with “north” to, um, indicate that they run east/west, but most people wouldn’t use that when giving directions. Instead, they’d say “head west on county road 1000”. Their directions would also tell you to turn south (or maybe they’d use the word “left”) onto Road 1150.

My point is that directions in Illinois can be *reasoned about* or *extrapolated from*. If I find myself somehow going west on County Road 900, I’ll know how I can get to County Road 1000: head north. If I find myself crossing a north-south road labeled 700, I’ll know I’ve overshot road 1150, where I should have turned south.

It’s *easy* to translate from a representation of rural Illinois to the world outside your car windows. These road designations are, thus, faithful representations of an aspect of reality.

In contrast, Dawn grew up in New Hampshire, a state in the part of the US called New England. She didn’t learn maps so much as she learned *routes*. Dawn learned the sequence of landmarks and turns to get from point A to point B, but that knowledge – by itself – doesn’t allow much in the way of unplanned inferences. As I know to my sorrow, once you’re lost in New England, you’re *really lost*. The landscape – hilly, forested, with roads that curve all over the place – just doesn’t lend itself to the simple one-to-one spatial representations we can use so easily in Illinois.

In Illinois, our models get to be both useful *and* true. In New Hampshire, the environment forces people to concentrate on just “useful”. People like Andy Clark (in his book /Being There/) claim that our evolutionary environments and history have forced our brains to – mostly – do the same.

And what *I’m* going to claim is that accepted methods of software design have a preference for abstraction and representation “baked in” so that it seems the most natural thing in the world, just as it feels obvious to me that I should be uncomfortable when I don’t know what direction north is, a discomfort Dawn didn’t grow up sharing. In fact, let’s call that current style of software “Illinois style”. It wants representations that are both useful and true.

That implies there could be a “New Hampshire style” of design that differs by placing much less emphasis on whether the code has a faithful representation of the business domain. Using the terminology of domain-driven design, New Hampshire style, for example, would be far less interested in a “ubiquitous language”. It’d be content, mostly, with routes that can’t be reasoned about or extrapolated from, so long as the app gets where it’s going.

I aim to explore New Hampshire style by writing an app to help me write podcast scripts. In my retirement, I like to think I’m upholding one of the great traditions of programming: spending mere months of pleasurable work to save entire hours of time and frustration. This episode is about what I’ve learned as I’ve meditated on how the app will work.

It’s very tentative because I haven’t even started writing the app yet, nor even the first of my prototypes. I thought getting started would interfere with getting this episode out in a timely way. As it turned out, I’d have learned quicker if I’d coded rather than just cogitated.

≤ music ≥

I’m going to say New Hampshire style is not about development process. My process is going to remain what it’s been for a couple of decades or so: I’ll code one feature (or behavior) at a time. I’ll obey the high level heuristics “you aren’t going to need it” and “do the simplest thing that could possibly work”. I’m supposing, in short, that iterative and incremental development work pretty much the same in both Illinois and New Hampshire style. The differences will be in building blocks used and decisions made *during* the process, not in the process itself.

When justifying building blocks and decisions, I’m going to use the guideline of “biological plausibility”. For example, suppose I’m implementing a new feature and my attention is drawn to some chunk of code, call it Thing One. I might have the choice of changing Thing One’s behavior so that it keeps doing what it used to do but also does something new that helps with the feature I’m working on. But I might instead have the option of leaving Thing One alone and adding a Thing Two that uses Thing One’s existing behavior to create new, feature-specific behavior.

In Illinois style, I’d very likely change Thing One, especially if Thing One’s real name was derived from what people usually call the “domain” or “business domain” but I’ll call “the environment” – the world the app must operate in. In Illinois style, I’d try to make the change not only useful, but also a more faithful representation of whatever in the environment Thing One represents.

However, adding on Thing Two is more biologically plausible, in that – if I’m allowed to personify evolution – adding on seems to be its default choice.

For example, consider the monkey’s paw. Monkeys, like us, can move individual digits independently. It used to be thought that each digit had a set of neurons devoted to it, but that’s not the case (according to Clark). Rather, the base case is a process that causes the whole paw to close into a fist. Layered *on top of that* are other processes that modulate the base behavior to allow control of individual fingers. Moving a single finger involves – at least in part – *suppressing* the natural whole-paw clenching motion made by the other fingers. So “curl your index finger” is more like “clench your fist, except turn off the non-index fingers.”

That makes evolutionary sense. A creature that swings through trees is primarily concerned with grabbing branches: a whole-paw behavior. Finer control over individual fingers evolved later, and in a not-at-all uncommon way: by tacking on new structures or processes, rather than by reworking old ones.
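
(If you’d rather see that move in code: here’s a minimal sketch of the Thing One / Thing Two choice, monkey’s-paw style. All the names are made up for illustration – this isn’t code from the app. Thing One, the whole-paw clench, is left alone; Thing Two layers on top and gets finger-level control by *suppressing* most of what Thing One does.)

```swift
// A sketch with hypothetical names, not code from the app. Thing One (the
// whole-paw clench) is left alone; Thing Two is layered on top and produces
// single-finger control by suppressing most of what Thing One does.

enum Finger: CaseIterable { case thumb, index, middle, ring, little }

// Thing One: the base behavior. All it knows how to do is clench everything.
struct WholePawClench {
    func activate() -> Set<Finger> { Set(Finger.allCases) }
}

// Thing Two: added on, not edited in. It modulates Thing One's output by
// suppressing the fingers we don't want to move.
struct SingleFingerCurl {
    let base = WholePawClench()
    func curl(_ finger: Finger) -> Set<Finger> {
        base.activate().subtracting(Finger.allCases.filter { $0 != finger })
    }
}

// SingleFingerCurl().curl(.index) == [.index]
```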

For experienced software builders, this approach is just *wrong*: it’s piling kludge on top of kludge on top of kludge, and we all know how that ends up.
And those of us trained in iterative Illinois-style design know how you avoid the mess: you continually refactor toward a design that *feels right*, where “feels right” is some (I’d argue) semi-intuitive, semi-learned combination of “elegance”; a resemblance to the abstractions used in fields we envy, particularly math and physics; a “shape” that seems likely to support making more changes like the ones the app has already undergone; and – frequently – some kind of faithful mapping to part of the app’s environment. Good representations *feel* good.

(For those not up on the jargon, a “refactoring” is a change to the code that gives it a different structure or text but keeps its observable external behavior the same. Refactoring might be done in order to make a particular feature easier to implement, or just because the code looks icky.)

Evolution is clearly able to deal with more complexity than the human brain can. It piles kludge upon kludge and still produces bodies that survive long enough to reproduce. But – Clark argues – even evolution still does do a certain amount of refactoring, and it sometimes does refactor toward what we recognize as representations.

A quote’s coming up, but first I have to make two definitions. What Clark calls “action-oriented” models or encodings contain just enough data to make one specific action happen. Way back in the unfortunately unnumbered episode “Concepts without Categories” – the one just before episode 41 – I described how, once a rabbit is taught a carrot is tasty, there’s a set of neurons specifically devoted to the smell of carrot. Those neurons are action-oriented: they’re all about making the rabbit act to eat a nearby carrot.

So-called “action-neutral models” can support multiple activities. A rat has a map of its environment it stores in its hippocampus, one that supports all of route-finding, hiding, climbing, choice of nesting site, and so on.

Here’s Clark:

“If a creature needs to use the same body of information to drive multiple or open-ended types of activity, it will often be economical to deploy a more action-neutral encoding which can then act as input to a whole variety of more specific computational routines. For example, if knowledge about an object’s location is to be used for a multitude of different purposes, it may be most efficient to generate a single, action-independent inner map that can be accessed by multiple, more special-purpose routines.”

That’s *something* like the old programming adages of “don’t repeat yourself” and “eliminate duplication”, but those are about reducing programmer effort, especially *future* programmer effort. That’s not what Clark means by “efficiency”. To evolution, what matters is cutting down on the amount of glucose the animal needs to metabolize to run that expensive brain.

So my job when coding my app is to find biologically-plausible refactorings that produce more maintainable code. That is, my underlying motive might be: “I need to simplify what I’m looking at because I predict my future brain will just not be big enough to deal with it alongside everything else”, but – if I’m playing fair – I also need to come up with some argument that the change is compatible with saving the *program’s* effort (the analogue of glucose), not just my own.

I should note that Clark says evolution *has* produced representations, but he doesn’t claim it *must*. A model that supports many actions without being a faithful representation would be fine. Creating such models is what interests me. I know how to use Illinois-style refactorings to arrive at representation-style designs. But what kind of refactorings push toward non-representational designs that are *just as good*?

This is going to be a fair amount of work because Clark describes endpoints, not a process. He talks a good deal about simple action-oriented models and also about more complex action-neutral models, but not so much about how you get from a soup of the first to a single one of the second. In a way, it’s like that famous Sidney Harris cartoon that shows two scientists standing in front of a blackboard that holds some mathematical argument meant to be read left to right. There’s some math-like text on the left, and more math-like text on the right, but in the middle it says “Then a miracle occurs”. One scientist is pointing at that sentence and saying to the other “I think you should be more explicit here in step two.” I’m not blaming Clark for the gap, mind you – he’s proposing a research programme, not describing a solved problem. But it leaves me with more work to do.

I should note that I’m not egotistical enough to expect my work to cast any light on how *evolution* developed abstractions – the steps I wish Clark had been able to be more explicit about. How would I know if any of what I learn applies to the brain? I’m not a neurologist nor a cognitive scientist, just a guy who’s read some books.

But it’s high time to get down to specifics.

≤ music ≥

First, architecture, or the fundamental building blocks. I’m going to go with what the Erlang language calls processes. Other languages call them “actors”, but I prefer “processes” so that’s what I’ll use here. Erlang processes seem a good match for how a brain does a lot of its work.

Remember the neurons that a rabbit uses to smell carrots? They’re linked up to chemical receptors in the rabbit’s nose. When those receptors detect their chemicals, the neural signals they send will reach the carrot bundle. That starts the bundle firing in a circular, self-reinforcing pattern. If the carrot stays put, later sniffs of the rabbit’s nose will send reinforcing signals that keep the pattern going and even increase its strength. But if there are no more signals from the nose, the pattern will die out.

It’s those patterns of neuronal firing that I’m analogizing to an Erlang process. Like the pattern, a process is created, can persist, and might eventually go away or die out.

Like a pattern receiving reinforcing signals, an Erlang process can receive messages that might make it change its internal state. Neuronal messages tend to be simple: “do that thing you do”, “don’t do that thing you do”, “slow down”, and so on. Erlang messages aren’t generally *that* simple, but because passing data in messages is relatively expensive, there’s some pressure toward simplicity.

An Erlang process can also send out messages to other processes that it’s connected to. This is much like how the rabbit’s neural bundle sends out a signal when the reinforcement reaches some threshold: the smell of the carrot is certain enough that it’s worth telling other processes to start looking around for a snack.

While the rabbit is smelling carrot, it’s also doing other things like listening for predators and turning its ears to face the most suspicious sounds. It has many patterns running simultaneously and independently. It’s easy to do the same with Erlang processes. In fact, Erlang processes are sufficiently lightweight that having a hundred thousand of them running at the same time is just another day at the office. That encourages lots of little processes rather than fewer, bigger ones, which seems a good match for the brain’s workings.

(As an aside, I’m not actually going to use Erlang processes for anything other than prototyping. Because this is a Mac app and perhaps an iPadOS app, I’ll be using Swift and its variant of Erlang processes. They’re not so lightweight as the Erlang version, but I plan to pretend they are until forced to obey constraints.)
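
(For the code-minded: here’s a minimal Swift sketch of the kind of process I have in mind – hypothetical names, with an ordinary Swift actor standing in for an Erlang process. It gets created, gets reinforced by incoming messages, fires a simple message downstream once reinforcement crosses a threshold, and fades away when the reinforcement stops.)

```swift
// A sketch with hypothetical names, not real app code: an activation
// pattern as a Swift actor. It's created, reinforced by messages, fires
// downstream when reinforcement crosses a threshold, and decays toward
// death when the reinforcement stops.

actor SmellPattern {
    private var activation = 0.0
    private let threshold = 3.0
    private let downstream: @Sendable (String) async -> Void

    init(downstream: @escaping @Sendable (String) async -> Void) {
        self.downstream = downstream
    }

    // Each sniff that detects carrot reinforces the pattern.
    func reinforce() async {
        activation += 1.0
        if activation >= threshold {
            await downstream("carrot nearby: worth looking for a snack")
        }
    }

    // Called periodically. With no reinforcing sniffs, the pattern dies out.
    func decay() {
        activation = max(0.0, activation - 0.5)
    }
}
```

The details don’t matter; what matters is the shape: a little blob of state that’s cheap to start, cheap to kill, and that talks to the rest of the system only through simple messages.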

So the app I’m to develop will be a big stew of processes. But it can’t be just that because… it’s a Mac app. Apple has quite definite ideas about how programs should present their user interfaces, and Swift has a lot of libraries to support those ideas. I’ll certainly be using those for things like editing and structuring text; drawing, dragging, and dropping little images of notecards or paragraphs of text; annotating the edges of windows; and so on.

That raises the question of architectural layering or, really, the question of: if I’m modeling the app after an animal, what’s that animal’s environment?

I intend the basic user experience to be one of collaboration between me and what I’m now going to call – cringe-inducingly – the “app-animal”. I expect normal usage to be me editing away – making changes to an environment shared with the app-animal. It’ll be watching and will sometimes react to my changes by making its own changes in response. I might react to those in turn, or I might not. All this will be asynchronous.

So: what’s the environment? From my point of view, that’s pretty simple: It’s what I see on the screen plus, I suppose, the keyboard and trackpad I’m using.

I was initially tempted to make that be the app-animal’s environment as well, just “from behind”. That is, it would perceive its world by querying the same data structures that MacOS uses to paint the screen, and it would react by directly changing those structures.

But that would be hard, and no fun. And it misses the entire point of the app, which is to edit a complex document composed of structured text for a script plus a set of organized notes plus etc. etc. It’s really that in-memory document that I and the app-animal will treat as our shared environment.

Information about this environment will flow in very different ways to me and to the app-animal. That’s not unusual. Here’s a quote from fictional author Dr. Ha Nguyen, in her fictional book /How Oceans Think/:

“What matters to the blind, deaf tick is the presence of butyric acid. For the black ghost knifefish, it’s electrical fields. For the bat, what matters is air-compression waves. This is the animal’s umwelt: that portion of the world their sensory apparatus and nervous system allow them to sense. It is the only portion of the world that ‘matters’ to them.

“The human umwelt is structured according to our species’ sensory apparatus and nervous system as well. But the octopus will have an umwelt nothing like ours. In a sense (and I use that word purposefully) we will not exist in the same world.”

For me, the human, information flows to me through a collection of code bundles with names like NSTextView and NSScrollBar that I, as programmer, have orchestrated in ways that make the document visible on a laptop screen. While all that fairly mundane code is part of the app, it’s not part of the *app-animal*. If anything, it’s part of *my* “sensory apparatus”.

The app-animal perceives the document via a completely different route, one made of special-purpose Erlang processes, not MacOS built-ins.

(Note: The imaginary book and author are from the real Ray Nayler’s real novel /The Mountain in the Sea/, which turns out to be oddly relevant to my project. A real synchronicity that I reread it while writing this episode.)

≤ short music ≥

I’m going to follow the brain’s rough layering into a perceptual system (or systems), calculational or control systems, and motor systems – although, like the brain, I’ll be sloppy about separation of concerns. As an example of the brain’s sloppiness, consider the game Tetris. Skilled Tetris players obey certain heuristics. However, there’s evidence that the game moves too fast for the control system to actually enforce those heuristics. So the control system instead tweaks the visual system to bias it in a way that has it deliver only inputs compatible with the heuristics. The control system doesn’t have to worry about enforcing heuristics because it will never get inputs that tempt it to violate them.

When it comes to the perceptual system, I’ll be borrowing the idea of focused (or indexical) attention from last episode’s Pengi system. For example, when I create a new note, certain processes will spring into life and watch the text I type into it. When I’m moving around in the script proper, the particular paragraph the cursor is in will be watched carefully. (I’m going to make the cursor position part of the document; that is, part of the environment. It’s some of the movement the app-animal will be watching.)

What these attention-paying processes are watching for are affordances. For example, it’s extremely common for me to start revising a paragraph by moving the cursor to the middle and splitting it in two, making two chunks of text with whitespace between. That split is, in itself, an affordance, signaling the app-animal to focus specially on the two half-paragraphs and, most of all, on that whitespace.

You see, when I edit such a split paragraph, the space in between tends to fill up with sentence fragments, each recognizable by the blank lines before and after it. For example, an abandoned paragraph edit for this very script contained these interior fragments, in order:

“Now, even if I’m lucky and find an open source library that implements dragging of notes” (That ends abruptly, with no final period to make it a sentence.)

and

“For the app” (no period)

and

“I want to model my app on animals with perception and behavior. For example, it is extremely common for me to begin” (no period)

I don’t want to throw away those fragments, because sometimes I use them. But they pile up, and frequently push the end chunk of the paragraph below the bottom of the screen, which helps me lose track of what I’m doing. So I want the app-animal to sweep them to the side, out of the main flow of the text, but still visible while I’m in the paragraph, working.

The way I expect to implement this really needs pictures or, better, 3x5 cards I can slide around on a table top to explain. So I must warn you that, in the words of the summary sentence of the only paragraph-long comment in the circa-1975 Unix kernel: “you are not expected to understand this”. I include it to show what I think this kind of design will *feel like*. Let the description, uh, flow over you, noting if it feels different from what you’re used to.

So, that said, the way I’m imagining this working is that a process that watches the cursor will notice that I’ve split a paragraph. It’ll create a new perceptual process that’s focused on the start and end chunks and the space between them. It will also start a control process to deal with the affordance of a split paragraph. Right now, I see that control process as doing nothing but starting a motor process that attaches an empty “fragment holder” to the paragraph. That is, it *puts something in the environment* rather than maintaining internal data *about* the environment. Both of those processes will exit after doing their jobs. I’ll explain why in a bit.

After a while, the process that’s watching paragraph editing may detect a new affordance: a sentence fragment has appeared. It will then start up a new control process to handle that. That control process will start a motor process that moves the new fragment into the paragraph’s fragment holder. And then those control and motor processes will also go away.

For practicality’s sake, the motor process has probably signaled the Brian-facing side of the app that there’s a fragment-moving change it should display – I’m thinking of something like the Observer pattern here. That code will do whatever MacOS requires to make the new fragment swoop over to the side, letting the end chunk move up to fill in the now-vacated space.
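
For anyone who’d rather see the shape of that in code than in prose, here’s a sketch. Everything in it is hypothetical – the names, the message shapes, and the use of plain Swift actors and tasks as stand-ins for Erlang processes – but it shows the proliferation of small, short-lived processes I’m describing.

```swift
import Foundation

// A sketch, not the real app. The shared environment is the in-memory
// document; perceptual, control, and motor processes are all short-lived.

actor Document {
    // Each split paragraph gets a fragment holder: a place to sweep
    // abandoned sentence fragments, off to the side but still visible.
    var fragmentHolders: [UUID: [String]] = [:]

    func attachFragmentHolder(toParagraph id: UUID) {
        fragmentHolders[id] = []
    }
    func move(fragment: String, intoHolderFor id: UUID) {
        fragmentHolders[id, default: []].append(fragment)
    }
}

// Motor processes: they change the environment, notify the MacOS-facing
// UI code (Observer-style), and exit.
func attachHolderMotor(document: Document, paragraph: UUID) async {
    await document.attachFragmentHolder(toParagraph: paragraph)
}

func sweepFragmentMotor(document: Document, paragraph: UUID, fragment: String) async {
    await document.move(fragment: fragment, intoHolderFor: paragraph)
}

// Control processes: right now they do nothing but start a motor process,
// then go away.
func splitParagraphControl(document: Document, paragraph: UUID) async {
    await attachHolderMotor(document: document, paragraph: paragraph)
}

func newFragmentControl(document: Document, paragraph: UUID, fragment: String) async {
    await sweepFragmentMotor(document: document, paragraph: paragraph, fragment: fragment)
}

// A perceptual process focused on one split paragraph. Each time it notices
// the "a sentence fragment has appeared" affordance, it spins up a fresh
// control process and forgets about it.
func watchSplitParagraph(document: Document, paragraph: UUID,
                         fragments: AsyncStream<String>) async {
    for await fragment in fragments {
        Task {
            await newFragmentControl(document: document,
                                     paragraph: paragraph,
                                     fragment: fragment)
        }
    }
}
```

A cursor-watching process (not shown) would call `splitParagraphControl` and start `watchSplitParagraph` when it noticed the split.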

≤ short music ≥

What I want to emphasize here is the proliferation of small ephemeral processes. There’s not a single control process that lives during the entirety of paragraph editing and handles each new fragment in turn. Instead, a new control process is started for each fragment, handles it, and then goes away. That seems in keeping with the brain’s essential stinginess: the brain doesn’t want to keep a circular activation pattern going when it doesn’t need to. It’s cheaper, in terms of glucose, to stop and later restart a pattern than it is to maintain it. The opposite is probably true for Erlang processes, and even more likely true for more heavyweight Swift actors, but I’m going to stick with how the brain does things, insofar as I understand it.

You may have noticed that the control processes that live between the perceptual processes and the motor processes don’t actually do anything more than start the motor processes. The perceptual processes really could just directly start the motor processes. I’m not doing that because I don’t know if that sort of thing is biologically plausible.

In Illinois style design, I’d probably also have such empty control processes, if only to enforce layering in the way I’ve been taught to do. But I’d also be using them as the seeds from which richer, more representational models would grow along with the app. That is, I’d likely make the control processes persist, right from the start. Over time, as new features were added to the program, those processes would tend to get new behavior and new state, and so the path of least mental effort would be to refactor toward representations. But, in New Hampshire style, I don’t want to have that bias. I want to be *forced* toward action-neutral models.

Which raises the question: what *is* the trajectory toward the more complicated, more persistent, more representation-like models that Clark says the brain is sometimes willing to pay for? What kind of refactoring will lead to more capable models that can be shared by multiple action-oriented processes?

My thinking about that starts with how neuronal activation patterns can be nested. That is, one circular self-reinforcing pattern might be part of a larger, also self-reinforcing pattern. Wheels within wheels.

Now, it turns out that sometimes an activation pattern can be embedded within more than one larger activation pattern. How might that happen? Well, I don’t know. I’ve never read anything that speculates on history rather than noting the reality. But I’ll note that evolution is absolutely shameless about making use of what’s to hand, without caring a whit about our ideas of encapsulation.

So, shamelessly personifying again, I can imagine evolution “thinking” that to solve its current problem, that little activation pattern *over there* would come in handy, neither noting nor caring that the handy pattern is already contained within some other larger activation pattern. So it just grabs it. (I think I’ve read that the same set of neurons can be *simultaneously* participating in different activation patterns, which just makes my brain hurt.)

Is the smaller pattern now “part of” two larger patterns? Or is it “used by” two client patterns? As far as I can tell, such distinctions don’t really make sense for the brain.

That’s fine for the brain, but programming languages are much more rigid about containment vs. use. So how can I use this metaphor when coding my soup of Erlang processes?

Here’s my thinking. I’m adding on a new Erlang process to my existing system. If the fates want to be easy on me, the new process can be self-contained, not needing any already-existing code from anywhere. Or I might see that I can get the behavior I want by having the new process send messages to an old process. That’d be more efficient than adding duplicative code to my new process: efficient in the brain sense, I mean, because new code or new data is the equivalent of new, expensive neurons.

Sending messages to another process gains my new process access to existing code or data, but those might be awkwardly embedded. My new process might need to do some elaborate dance to make use of the old process’s behavior. That’s bad! Dancing burns glucose.

Evolution would, at this point, give me a puzzled look and say “just reach into the other process and control the embedded code directly, what’s the big deal?”, which is not something that really works for us humans or our programming languages. We can’t really deal with a process being contained simultaneously inside two other processes. Instead, I’ll have to convert containment into use.

By that I mean I’ll extract a subset of the old process’s code+data into a new helper process to be used by both the old process and the process I’m writing. At this point, an Illinois-style programmer might say, “Haven’t you just given a roundabout explanation of a plain old Extract Class or Method Object refactoring?”, and, well, I can’t really argue with that, at least not yet. It *feels* different. With luck, examples will make the feeling more concrete. For now, I’m going to dodge the question.
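
So that there’s something concrete to point at, here’s what the extraction might look like in Swift – made-up names, and very much a sketch. The landmark-remembering code and data used to live inside the old process; now it’s a helper process that both the old process and the new one *use* rather than contain.

```swift
// A sketch with made-up names. The landmark-remembering code+data has been
// extracted from the old process into a helper process. Both the old process
// and the new one now use the helper by sending it messages; neither
// contains it.

actor LandmarkMemory {
    private var sightings: [String] = []
    func record(_ landmark: String) { sightings.append(landmark) }
    func recent(_ n: Int) -> [String] { Array(sightings.suffix(n)) }
}

actor OldProcess {
    private let memory: LandmarkMemory
    init(memory: LandmarkMemory) { self.memory = memory }
    func noticed(_ landmark: String) async {
        await memory.record(landmark)       // behavior it always had, now delegated
    }
}

actor NewProcess {
    private let memory: LandmarkMemory
    init(memory: LandmarkMemory) { self.memory = memory }
    func anythingFamiliarNearby() async -> Bool {
        !(await memory.recent(5).isEmpty)   // new behavior, reusing the old data
    }
}
```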

The next question is what causes that new helper process to be *persistent* rather than ephemeral? Rats don’t keep recreating their map of their environment; rather, their map is basically a long-running process that’s queried often. It seems to me you’ll never get to larger, more powerful action-neutral models without persistence.

I think economics is, again, the answer. As an action-neutral model gets used by more and more processes, it’ll come to contain more data. Each time the action-neutral model is started, it’ll have to metabolize glucose to initialize that data. So, as the number of client processes grows, it will become more efficient just to keep the action-neutral model running than to keep restarting it.

Once a persistent process exists, I want design forces of some sort to push me towards evolving its contained data, perhaps towards something that an outside observer would say is a representation. Maybe a bit more concretely: Let’s say that individual action-oriented processes each contained a route. Now I’ve piled those routes together into an action-neutral process. Fine. But how do I get from that collection of routes to a map? What refactorings do I use? What drives my choice to use them?

I don’t have any really concrete ideas, but I’m inclined to think about it the same way Andy Clark thinks about concepts. Recall from the Concepts Without Categories episode that he thinks that concepts come from words, rather than that words label concepts. The word is injected into the brain from outside – from society – and serves as the seed around which associations grow. The analogy I used was the way a single snag in a river can collect debris to form a larger logjam.

I haven’t talked about associations in this episode – even though the brain is highly associative – because I don’t really know how to think about them yet. But I bet they’re important for turning routes into maps.

One idea I have comes from Nayler’s /The Mountain in the Sea/. One of his characters, Rustem, is something of an old-school cyberpunk hacker cracking systems that are more advanced versions of today’s neural networks. As such, they, like the ChatGPT of today, don’t have an evident internal representation of the world.

Rustem’s great talent is that he can *produce* a conceptual representation of a computer system that doesn’t inherently have one: a map that lets him do whatever he’s being paid to do. In the following quote, he’s talking to his sometime lover about how he’s making progress. He’s analogizing his work to how a foreigner learns to get around a new city.

block quote

“It is as if the city had been described to me by someone who lived there. You know—how someone will lead you through the city they live in more by landmarks than by street names. There is a yellow house, with a boarded-up window, at this corner. You turn left there. If you see a billboard with an advertisement for private dronecopter trips to the Canaries, you’ve gone too far. It’s a bit like that: The map has been accented by these signposts, these recognitions of place and pattern. And now I’ve left the outskirts behind. Now I am drawing close to the center.”

Aynur exhaled a ring of vapor. “How nice for you. Too bad you got me killed.”

end quote

(It turns out Rustem’s clients didn’t like him impressing lovers by talking about his work and expressed their displeasure forcefully, so this is a conversation Rustem is imagining.)

What I’m thinking – and this is super-vague – is that it’s the landmarks from which the map grows. When a new route is added, it might have a landmark that is present in another route. Suddenly that landmark is more interesting. Let’s say that all landmarks present on two or more routes are interesting. So now we invert our attention: instead of caring about routes that include landmarks, we care about landmarks connected by routes. Handwaving wildly here, removing redundancy from such connections might converge – in a series of small, individually justifiable steps – into something that looks more like a map of Illinois county roads than a sheaf of a thousand New Hampshire routes, organized by starting and ending destinations.
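
To show the flavor – and this is purely illustrative, not a design – here’s what that inversion might look like in Swift: routes are just ordered lists of landmarks; landmarks that show up on two or more routes get promoted to “interesting”; and then we flip to recording which interesting landmarks are directly connected, forgetting which route each connection came from.

```swift
// Purely illustrative: from a sheaf of routes toward something map-like.

typealias Route = [String]   // a route is an ordered list of landmarks

// Landmarks that appear on two or more routes become "interesting".
func interestingLandmarks(in routes: [Route]) -> Set<String> {
    var counts: [String: Int] = [:]
    for route in routes {
        for landmark in Set(route) { counts[landmark, default: 0] += 1 }
    }
    return Set(counts.filter { $0.value >= 2 }.keys)
}

// Invert the attention: instead of routes that contain landmarks, record
// which interesting landmarks are directly connected, forgetting which
// route each connection came from.
func mapFragment(from routes: [Route]) -> [String: Set<String>] {
    let interesting = interestingLandmarks(in: routes)
    var neighbors: [String: Set<String>] = [:]
    for route in routes {
        let waypoints = route.filter { interesting.contains($0) }
        for (a, b) in zip(waypoints, waypoints.dropFirst()) {
            neighbors[a, default: []].insert(b)
            neighbors[b, default: []].insert(a)
        }
    }
    return neighbors
}
```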

≤ music ≥

OK. I’m aware that I sound perilously close to all those people who handwave about quantum mechanics to argue about consciousness or truth or morality or whatever. How am I not (just) some crackpot?

Well, let me quote last episode’s David Chapman about what made his and Phil Agre’s work special:

“I think we were very much part of a tradition. What was novel was that we wrote code.”

As the boxer Mike Tyson said, “Everybody has a plan until they get punched in the mouth.” Grand software conceptions – be they crackpot or actually insightful – are cheap until demonstrated in working code. So that’s what I’m going to try to do.

Clearly, I’m going to have to do more reading in neurobiology as a way to generate ideas. (For example, spreading activation networks probably have something to do with coalescing routes into maps, but I don’t know anything about them.)

Naturally, I want to share those ideas, especially when I express them in the form of code, but I want to split that work off from this podcast. If you’re interested – well, even if you’re not – that work will appear at nh.oddly-influenced.dev, a blog, as well as on GitHub. The “nh” is short for “New Hampshire”. You can find the link in the show notes.

Thus endeth this series. Next time, something completely different. Thank you for listening.
