E44: The offloaded brain, part 4: an interview with David Chapman


Welcome to Oddly Influenced, a podcast for people who want to apply ideas from *outside* software *to* software. Episode 44: An interview with David Chapman about the offloaded brain.

I’ve been working on the promised episode about what we writers of just-plain-apps might draw from ecological and embodied cognition. Along the way, I noticed that my authors cite work done by David Chapman and Phil Agre in the ’80s, including their work on a program, Pengi, that played a video game while being as ignorant of a “big picture” representation of its environment as, say, a diving gannet or a fly jumping to activate its wings or a rabbit that seems to make no connection between the smell of a carrot and any concept of carrot.

Since I know David slightly, I thought I would ask him about what provoked him and Agre to go ecological before ecological cognition was cool, and how Pengi worked. He agreed, and thus this episode. They took a more aggressively minimalist approach than I’ve been thinking of, but I think it makes a fine introduction to the ideas of Andy Clark that I’ll be explaining in more detail in the next episode.

It’s hard to introduce David Chapman. He’s a true character, the sort of polymath I aspire to be. His sprawling… book?… blog?… epic work?… meaningness.com has given me ideas I’ve happily exploited over the years – with due credit, mostly. People who like this podcast will likely like that site. The show notes link to a page with his “greatest hits”.

People interested in Buddhism, especially undomesticated, non-Westernized Buddhism, will also find much interesting material. I’m not a Buddhist myself, but I found the material fascinating.

≪ music ≫

Brian Marick
40 years ago, my association with AI was porting CMU Common Lisp to Gould PowerNode supermini computers. And you were at that same time at the beating heart of classical AI. That is: the MIT AI Lab. At that time, the MIT AI Lab was doing the kind of cognitive science that the people in the books I've been talking about have been saying, "no, no, no, that's wrong!" You and your collaborator Phil Agre were saying the same thing: you were tearing down the temple from inside. It seems that the people who write these books consider you and Phil Agre as something of progenitors or early pioneers in this area. So that's the reason I thought I would like to talk with you.

Brian Marick
Maybe the best way to start is by looking at the influences you were building on, and the approach you were using to thinking about AI, because it seems somewhat different from what my various people are doing. In particular, you mentioned to me over email that you were building on phenomenology and ethnomethodology. Some of the more philosophically inclined authors I've been reading do mention phenomenology, but I haven't mentioned it at all. And they certainly do not mention ethnomethodology. So explain, if you would, two vast fields in a few minutes. And what they meant to you at the time.

David Chapman
[While] it's flattering to be called progenitors, I think we were very much part of a tradition. What was novel was that we wrote code.

David Chapman
The ideas we had go back decades and arguably many centuries. The difference was that we were taking that and putting it into an AI context instead of a philosophical context or a cognitive science context.

David Chapman
There's a lineage within cognitive science that you've probably discussed [I have not], which is early cybernetics, which had a lot of the same general ideas, but they didn't have sufficient computational resources or sophistication to make models that would really do very much. We had the advantage of several decades of improvement in computation – hardware and software – so that we could make more sophisticated models.

David Chapman
Phenomenology... the relevance for us was that [it said] if you want to understand something, you just look. One of the problems with cognitive science as a discipline is that it's supposed to be about what's going on inside people's heads. But for most of its history, we couldn't look inside people's heads. Now there's fMRI, but it's a very crude tool that's mostly misleading. So [cognitive science] mostly still can't look inside people's heads. So cognitive science is basically taking philosophical ideas and assuming that must be what's going on inside people's heads. But there are various reasons for thinking that's wrong.

David Chapman
If we want to know something about people or other creatures, just looking at what we do, instead of making hypotheses about what's inside the head (that can't be tested) gives a huge amount of insight.

David Chapman
Ethnomethodology is a discipline that grew out of phenomenology. Phenomenology was a branch of philosophy that was very abstract and theoretical. [But] phenomenology was the idea that you actually look and see instead of making up stories. Ethnomethodology made that very much more concrete.

David Chapman
This is going back to the 60s. They looked at, first, audio recordings of conversations, and then video recordings of people interacting. [They] would microscopically dissect moment-by-moment what was happening – and [so] be able to have a deep understanding (without reference to supposed entities inside people's heads) about the structure of how people interact and how people get practical work done.

David Chapman
In our work, we basically were taking that insight and making computational models that tried to exhibit the same kinds of patterns of interaction – with absolutely minimal machinery. [Back then,] the whole thing in AI was making complicated, cool software that had all kinds of complexity in order to model theories about what happened inside people's heads. And we were like: "Okay, what is the absolute minimum amount of complexity we can implement that will get interesting behavior?"

Brian Marick
So you were examining your own actions, right? Do you have any anecdotes about that? What sort of things did you look at?

David Chapman
Breakfast is my favorite example. If you watch yourself making breakfast, just noticing things that happen, you can see patterns: you didn't realize you were doing something. [You wouldn't have noticed] the effect of what you're doing unless you took this ethnomethodological attitude of micro-dissection. An example is if you are a graduate student, and none of your china matches because it's been in a graduate student household for a long time. There are a lot of miscellaneous sets of china, with only one or two plates or bowls each. [You notice] you'll end up with this stack of bowls, and all the good ones are near the top. And that's not because anybody deliberately put the good bowls (or, in other words, the ones that are most functional) at the top. It's that you pull bowls out and, when you clean them, [you] put them into the dishwasher; and when you put them back on the stack, you put them on the top. And so an automatic sorting routine occurs, according to goodness of bowls, without anybody having deliberately decided to sort the bowls. And without there being any representation in anybody's head of this. It's a dynamic that occurs as a result of: "okay, I'll grab a bowl that looks plausible, have my cereal in it, put it in the dishwasher, and then put it back at the top of the stack."

Brian Marick
Was there an emphasis on "routine"?

David Chapman
We were talking – just before we started recording – about planning. Cognitive science is part of the rationalist tradition, which goes back to the ancient Greeks. And the rationalist tradition basically says: you are or should be a rational animal, which means that you figure things out from first principles using logic. And that guarantees that what you believe and what you do is correct.

David Chapman
The cognitive science theory of action was a rationalist theory, which is that you have a full representation of the situation you are in, and you have some goals, and you make a logical proof that taking a specific series of actions will result in your desired goals. This theory has an enormous number of different defects. [Being] computationally utterly intractable is one of them. The combinatorics are horrendous. It's not something that is realistically feasible without a whole lot of idealizations. [On] hardware in the 1980s, when we were working, it was just ludicrously impossible.

David Chapman
But we also know that it can't be what people do, because we often react within a small fraction of a second to emerging events. And we just couldn't be proving theorems about how logically taking such and such an action will be the correct way of dealing with this thing that just happened. The brain is slow, and that couldn't be the right story. Just reasoning from basic facts of the brain being slow, the first observation is that what we must be doing is having quite simple ways of responding to aspects of the situation, which you might call affordances. (I think you've talked about those before in this podcast series.) So affordances are things you can perceive that tell you how to act, or at least tell you a possible way you can act which is going to have a particular effect. And the way that we actually get stuff done is primarily by registering these affordances and taking the actions that the world is recommending to us, essentially.

David Chapman
And so the planning view is that action is orderly and structured, or sequences of actions are orderly and structured because you reason from first principles about how to accomplish your goals. The mystery in the view that our actions are mostly just taking advantage of affordances is: why does that result in apparently structured patterns of behavior? [Agre] describes a routine as a pattern of interaction. And it's a pattern that may or may not be mentally represented. The bowl sorting routine is one that [no one had] mentally represented. At some point, I noticed this thing that happens: [the] routine of putting the bowl back at the top of the stack [has the effect of] producing a structured outcome, which is that the good bowls are at the top, which is convenient, because that's where you pull them off from. And then sometimes you have a party and all the bowls get scrambled, and they go back unsorted. And then the routine kicks in. And gradually the bowls get sorted again, but nobody intended that – nobody represented that – there's no plan for that. Agre pointed out that this structure occurs despite the absence of representation. So his research question is: that's a very, very specific example (the bowls getting sorted), [so] what can we generalize from this? He and I observed, phenomenologically, or ethnomethodologically, a lot of different patterns in interaction like this, and then tried to make some general stories about them.

Brian Marick
Another key term, and I do not know if I'm pronouncing this correctly, is deictic [dee-ec-tic].

David Chapman
Deictic [dike-tic] I think. I wouldn't swear that I've ever heard it pronounced properly by somebody who knows how to say it. In the Pengi paper, we talk about "indexical functional representations," which is a terrible term – the word "representation" is misleading here. "Indexical functional" is kind of a mouthful, but it does point to two important aspects. I'll come back to that in a moment. "Deictic" is a term from linguistics. In linguistics, [it] is essentially synonymous with indexical. But I suggested that we use it as a kind of condensation of "indexical functional", and Phil agreed, and we went with that.

David Chapman
So what is indexical? It's a term from linguistics that the ethnomethodological tradition picked up. An indexical expression is one whose meaning depends on the circumstances. So the classic example is news announcers. When you're about to have a commercial break, they would say, "and now this!" This is an extremely indexical expression, because what it refers to depends entirely on what the advertisement is that's coming up, and the moment in which it is said. So "now" is, just by itself, an indexical term; what "now" means is different every single time it is used. Also what "this" means is dependent on context.

David Chapman
So indexicality is the phenomenon that [the meaning of] things people say depends on the occasion in which they say them. What the ethnomethodologists observed is that: first of all, linguistic indexicality is pervasive. It's not restricted to a handful of words like "I" and "now" and "this". The meaning of basically anything you say can be completely changed if you imagine a different context.

David Chapman
This also applies to all of the rest of action, not just linguistic speech. So what you are doing, and what effect it has, and what meaning it has, depends entirely on the situation in which you're doing that thing, whatever it is.

David Chapman
Let's go back for a moment. Coming out of the rationalist tradition, cognitive science imagined that inside your head, you had mental representations of a very large number of propositions: facts and generalizations about the world. For cognitive science, those were taken to be non-indexical. So your mental representations were not supposed to depend on anything, they were just true facts, independent of circumstances. And in addition, they were supposed to be purpose-independent. So: "grass is green." This is a fact that might be useful in lots of different ways. But you just represent the fact that grass is green. And it's not okay for something you believe to only be true relative to what you're up to.

David Chapman
For a large number of reasons, this whole idea is completely wrong. But one reason is that it is computationally intractable. It is not feasible to reason with those kinds of purpose-independent and context-independent knowledge. People are still trying to do this, but it just doesn't work. It can't work. [There's] a very long list of reasons why it can't work. It's been explained very thoroughly. And the rationalist tradition and cognitive science continue to ignore this.

David Chapman
So in Pengi, we set out to demonstrate that if you... we said "represent", [although] I think that's kind of misleading. But if you "represent" the world in ways that are indexical, and functional (meaning purpose-laden) then "reasoning" becomes radically more tractable. Pengi was a program that played a video game called Pengo. Playing Pengo is something that was utterly beyond the state of the art in artificial intelligence at the time that we did this work, because the situation was much more complex than it was feasible to represent in a non-indexical, non-purpose laden, rationalist way with computers [of] the time. Furthermore, [the game board] was very rapidly changing and partially random. And this makes planning essentially impossible because you just don't have time, given that things are going to be constantly changing out from under you.

Brian Marick
It's probably a good place for me to describe Pengo. Let me describe it, and then you can correct me where I've got it wrong. It's a two-dimensional grid consisting of three kinds of objects. One object is a penguin. And another set of objects are bees, because as we all know, there's a long history of conflict between bees and penguins, faithfully captured by this game. It represents a truth about nature – similar to "the grass is green" – which is that bees try to kill penguins and penguins try to kill bees. The player of the game controls the penguin. The bees move around randomly or semi-randomly?

David Chapman
They are purposive. They're actively trying to kill the penguin. But they also randomly choose different ways they could go, and sometimes they just sort of turn around for no particular reason.

Brian Marick
Okay, and bees can kill penguins in one of two ways. If they sidle up to the penguin, if they get next to the penguin, the penguin dies. So you [as player] have to [make the penguin] run away. The other way is the bees can use the third kind of object, which are ice cubes (blocks that fill a square of the grid): the bees can push or kick [one] at the penguin. And if it hits the penguin, the penguin dies and you lose the game.

Brian Marick
The way that the penguin can kill the bees is by kicking a block so that it hits the bees. So blocks will kill either bees or penguins. And that's it. Right? If you manage to kill all the bees, you win.

David Chapman
There was some more complexity to the game, but that's about as much as we actually got implemented. This is an arcade game from 1982 or something that I recreated from memory. It was no longer around when I did the implementation, and I recreated it from memory rather imperfectly. [But] because it's the future, you can now go to YouTube and see a Pengo game being played. My recreation was somewhat inaccurate.

Brian Marick
I'll have to look that up and put a link in the show notes. Okay, so you implemented this game, and wrote a program called Pengi, which pretends to be the player. The program is not the penguin, the program is the controller of the penguin.

Brian Marick
I'd like to try to work through it. You start out [on] frame number one. (The game has a clock cycle that it goes through.) On frame number one, you start. The penguin is somewhere randomly on the board. There are bees and blocks randomly on the board. What happens then?

David Chapman
Pengi is playing the game Pengo in the same way that a human does, meaning you have visual access to the whole game board. It has a simulated visual system. It looks to see affordances. (In the paper, we call them "indexical functional aspects", but affordances is what they are.) [It looks for] affordances that are relevant. So, they're purpose-relative: they're relevant to the purpose of killing bees, or escaping from bees. Affordances are particular configurations of blocks, penguin, and bees such that various tactics can be deployed to either escape if threatened by a bee or to kill bees. The tactics may require several steps. There's a routine where you can set up a trap for the bee that you're currently concentrating on.

David Chapman
The simulated visual system was inspired by fairly in-depth reading of what was understood about the human visual system at the time, primarily from what's called psychophysics, somewhat from neurophysiology. Psychophysics is a branch of experimental psychology that figures out what's going on inside the head by looking at reaction time data (primarily). And it turns out, there's just absolutely brilliant work in the 70s and 80s, initiated by scientist Anne Treisman, which figured out the mechanisms of visual attention.

David Chapman
Again, the old rationalist story was [that] somehow your brain takes the input as if it was from a camera, and constructs from that a full logical representation of the scene. This is not true, and it's not feasible. In fact, you visually attend to only typically one or a handful of areas within your visual field, [those] which are salient in terms of the current context and purposes. So your visual activity is indexical and functional – or deictic. And that is the way that you find affordances, or that affordances become apparent to you. And so Pengi has a fairly detailed model of what was understood about that process as of the mid 80s.

Brian Marick
So when the game starts, you might be in a situation where you, a block, and a bee are in a straight line...

David Chapman
Right. This is a dangerous situation, because the bees and the penguin move at the same speed. If the bee is closer to the block, then it can get up to the block, kick it in your direction, and smush you before you get to the block and kick it to smush the bee. So there's two things that need to be registered here. First is that we're in this situation of the penguin, the block, and the bee being collinear. And then the second thing we register the bee as... we can call it "the enemy bee". It's the one that is salient for the current situation. And then there's an aspect of this entity, the enemy bee, which is one of two things: either the enemy bee is closer to the block or the enemy bee is further from the block. And then having first identified the enemy bee and then secondly registered who's closer, you can take action: if the bee is closer you need to somehow get out of the way. If you are closer, then you want to run towards the block and kick it at the bee.
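
Later Brian
To make that check concrete, here's a tiny sketch of my own in Python. Pengi was Lisp, and this isn't its code – it's just the geometry David described, ignoring details like whether the block actually lies between the penguin and the bee.

```python
def collinear(penguin, block, bee) -> bool:
    """True when all three occupy the same row or the same column of the grid.
    Positions are (x, y) pairs."""
    xs = {penguin[0], block[0], bee[0]}
    ys = {penguin[1], block[1], bee[1]}
    return len(xs) == 1 or len(ys) == 1

def enemy_bee_is_closer(penguin, block, bee) -> bool:
    """Who wins the race to the block? Penguin and bee move at the same
    speed, so just compare grid distances."""
    def dist(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])
    return dist(bee, block) < dist(penguin, block)

# Penguin at (2, 5), block at (6, 5), bee at (9, 5): all in one row, and the
# bee is closer to the block, so the right move is to get out of the way.
p, blk, b = (2, 5), (6, 5), (9, 5)
print(collinear(p, blk, b), enemy_bee_is_closer(p, blk, b))   # True True
```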

Brian Marick
So, in those situations, the same bee – we could say Bee 12 – in one situation that bee would be pointed at as the-bee-the-penguin-is-going-to-kill. And in the other case, it would be the-bee-that-is-trying-to-kill-me. And those are two very different things from the point of view of the penguin.

Brian Marick
Similarly, the block is going to have a pointer to it as the-block-I-am-going-to-kick or the-block-to-avoid. And those different natures feed into the computation that determines what the penguin should do in the next moment. We've got markers, labeling things that are of interest, and then those would feed into what – if I understand it correctly – is just a combinational network (ANDs, ORs, and NOTs) that feeds out the next thing the penguin will do, which is kick or move in a direction.

David Chapman
That's right. The thing that's interesting here is that you don't know, and you don't care, the objective identity of which block this is. It could be Block 12, it could be Block 217. The block-I'm-about-to-kick might be one of those at [different times], and block 12 could either be the-block-that-I-need-to-avoid [or] the-block-that-I'm-going-to-kick-at-the-bee. Again, in the rationalist/objective worldview, you have to be keeping track of the absolute, objective identity of every object that you know about. And this is just an enormous amount of completely unnecessary work.

Brian Marick
Because you don't care about bees that are far away from you.

David Chapman
Yeah, and you don't care which bee is which. What you care about is what their relationship is with you and what that implies about what you have to do.
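
Later Brian
Here's a sketch of the "deictic pointer" idea – again my own Python, with my own names, not anything from the paper. The point is that the only names the decision-making machinery ever sees are role names like "the-enemy-bee", re-bound from scratch every tick; no objective object identity is tracked.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Thing:
    kind: str   # "bee", "block", or "penguin"
    x: int
    y: int

def manhattan(a: Thing, b: Thing) -> int:
    return abs(a.x - b.x) + abs(a.y - b.y)

def register_aspects(penguin: Thing, scene: list[Thing]) -> dict[str, Thing]:
    """Bind deictic roles for this tick only. Which physical bee or block
    fills a role can change from tick to tick; the roles are all that the
    central machinery ever refers to."""
    bees = [t for t in scene if t.kind == "bee"]
    blocks = [t for t in scene if t.kind == "block"]
    roles: dict[str, Thing] = {}
    if bees:
        roles["the-enemy-bee"] = min(bees, key=lambda t: manhattan(penguin, t))
    if blocks:
        # Some purpose-relative choice; here, just the nearest block
        # (a stand-in heuristic of mine, not Pengi's).
        roles["the-block-i-might-kick"] = min(blocks, key=lambda t: manhattan(penguin, t))
    return roles

scene = [Thing("bee", 9, 5), Thing("bee", 1, 1), Thing("block", 6, 5)]
me = Thing("penguin", 2, 5)
print(register_aspects(me, scene))
# The nearest bee is "the-enemy-bee" for this tick; next tick the binding
# is redone, and a different bee may fill the role.
```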

Brian Marick
Why did you choose to use such simple combinatorial logic? Was it showing off: see how much we can do with how little?

David Chapman
Yeah, basically [we were] going in the opposite extreme of the AI of the day. The AI of the day basically rewarded people for coming up with baroque software architectures. And we were like, "Okay, what is the absolute stupidest, most minimal thing?" It's feed-forward logic gates, no state: there's no flip-flops in there. Just absolutely the simplest possible thing.

Brian Marick
Is the identity, the deictic identity of a bee recalculated every frame, or is it somehow remembered from frame to frame?

David Chapman
The simulated visual system can keep track of a small fixed number of objects. There was some psychophysics at the time that said that we could track... I don't know... six or something [objects]. And the visual system puts markers in places the program has determined are interesting. So, for example, the enemy bee gets a marker on it. That state is in the visual system rather than in the central system. As things move, the visual system moves the marker. So that is done frame by frame. The visual system also returns measurements of various things. So for example, the distance from one marker to another. [Later note: my understanding is that distances are reported to the combinatorial logic as booleans: "is-close-to" and the like.] And so it is recalculating on every frame, what the comparative distances are between the penguin, the block, and the bee.

David Chapman
What we call "the central system," which is deciding how to act, doesn't have any memory. Every tick, the world reappears to it as a set of boolean values output by the visual system. Those flow forward through the combinatorial logic network, and [it] outputs the boolean values that say which direction to move in and whether or not to be kicking.

Brian Marick
Okay, so the interesting thing is that all the memory is just in the visual system. And it's not something that is anything like what we would call intelligent. It's the same sort of principle that if a bird flies behind a tree trunk, when it comes out the other side, we know it's the same bird. The claim of Pengi is that there's no *reasoning* going on that says "this object [now] moving on the left hand side of the tree is the same object that moved toward the tree from the right hand side." It's all just kind of automatically registered.
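
Later Brian
A sketch of that per-tick flow, in the same spirit (mine, not Pengi's actual code, which had a much richer visual system): the visual system boils the marked objects down to a handful of booleans, and the central system is a pure function of those booleans – feed-forward logic with no memory from one tick to the next.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Percepts:
    """Booleans the simulated visual system reports about the marked objects."""
    collinear_with_enemy_bee_and_block: bool
    enemy_bee_closer_to_block: bool
    adjacent_to_block: bool

@dataclass(frozen=True)
class Action:
    move_toward_block: bool
    flee: bool
    kick: bool

def central_system(p: Percepts) -> Action:
    """Feed-forward combinational logic: ANDs, ORs, NOTs, no stored state.
    The world 'reappears' every tick as a fresh Percepts value."""
    threatened = p.collinear_with_enemy_bee_and_block and p.enemy_bee_closer_to_block
    can_attack = p.collinear_with_enemy_bee_and_block and not p.enemy_bee_closer_to_block
    return Action(
        move_toward_block=can_attack and not p.adjacent_to_block,
        flee=threatened,
        kick=can_attack and p.adjacent_to_block,
    )

# Two successive ticks. Nothing is remembered between the calls; the only
# persistence in the real program lives in the visual system's markers.
print(central_system(Percepts(True, False, True)))   # next to the block: kick it
print(central_system(Percepts(True, True, False)))   # the bee will win the race: flee
```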

Brian Marick
Now, if it is the case that... This is going to be very hard to describe purely audibly, but there's a bee kind of up and to the right, there is a block that's kind of below the bee, and there's another block that's in line with that block horizontally. So if you kick that block, it'll slide over until it hits the other block and stops. (Blocks just stop when they hit things; there's no momentum or anything.) So the block will slide over and stop vertically below the bee. While it's sliding, you can be creeping down below it. So you've now created a situation where the bee is vertically above the block, [and] farther away from the block than you are from the block. So now you've established the situation where you can kick the block to hit the bee, unless, of course, the bee has moved, which it seems like it would. But let's suppose the bee hasn't moved.

Brian Marick
Now, what I've just described is a plan: it's something I might think, "given the situation, here's how I could do it." But in Pengi, there is no plan. So can you say something about how that would work?

David Chapman
This is an affordance, the particular configuration of blocks that you described. Which is probably difficult for listeners to visualize, but there is a particular geometrical configuration of blocks, such that a series of actions – basically three of them: first kicking the projectile you're going to use to kill the bee into place, [then] you chase after that ice cube or block, and then you kick it at the bee. That [starting] geometrical configuration is one that the central system can instruct the visual system to notice. And when the visual system notices this configuration, it... I'm sliding over a lot of detail here. But it essentially informs the central system that "oh, this affordance is available." And then the central system says "Oh, so now I can do this thing."

Brian Marick
And it's doing that thing not as part of a plan.

David Chapman
Right. The visual system says, "hey, you've got this affordance." And the central system just says, "Oh, in this configuration, I do the thing that *we* could call the first step of the plan." And then there's a different configuration, which corresponds to the last step of the plan, which is the one where the block is in line with the bee so that the bee is vulnerable. But in that initial configuration, the central system only needs to know to do what we could call the first step of the plan. But there isn't a plan. It just knows "when this affordance is active or available, then I do this thing." And there's an advantage to this, which is that – as you said – the bee's not going to cooperate a lot of the time. It's going to move out of the way partway through this routine. (It's not a plan. It's a routine. It's a pattern.) Partway through the pattern, the pattern disintegrates, because the bee has randomly moved off in some direction, [and] that means the affordance no longer applies. [The second configuration never appears.] And as soon as that happens, the visual system ceases saying, "hey, this configuration is available." And so the central system at that point just drops the whole thing. It doesn't need to say "oh, you know, my plan failed. I need to come up with a new plan." ...

Brian Marick
[Interrupts] It doesn't "drop the whole thing" because it never held the thing in the first place.

David Chapman
Right. There never was a plan.
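
Later Brian
One way to picture a routine that isn't a plan (my own schematic, not the paper's): each step is an independent rule keyed to its own affordance. Nothing records "I'm on step two of three"; the appearance of a multi-step plan comes from the world, because taking one action tends to create the next affordance. And when the bee wanders off, the later affordances never show up, so there's nothing to cancel.

```python
# (affordance the visual system can report, action to take when it's visible)
# In the real game these configurations are geometrically distinct, so at
# most one of them is visible at a time.
RULES = [
    ("projectile-can-be-kicked-into-line-with-enemy-bee", "kick the projectile block sideways"),
    ("i-am-chasing-the-projectile-block",                 "move toward that block"),
    ("block-in-line-with-enemy-bee-and-i-am-closer",      "kick the block at the bee"),
]

def act(visible_affordances: set[str]):
    for affordance, action in RULES:
        if affordance in visible_affordances:
            return action
    return None   # no relevant affordance: nothing special to do this tick

print(act({"projectile-can-be-kicked-into-line-with-enemy-bee"}))  # step one fires
print(act(set()))  # bee moved away; the "routine" just evaporates -> None
```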

Brian Marick
Here's an interesting – well, I think it's interesting – an interesting example of something very similar that happens in nature. There is one of those disgusting parasitic wasps called a Bee Wolf. And what they do is they capture, sting, [and] paralyze bees of various species, and they lay their eggs on these bees, and then the larvae eat the paralyzed bee and grow up to be more wasps that can kill more bees.

Later Brian
This is post-interview Brian interrupting. I completely botched the explanation here, especially by saying "wasp" when I meant "bee" and vice versa, which made everything confusing. So here I'll say what I should have said.

Later Brian
These wasps don't put paralyzed bees just anywhere before laying their eggs. Instead, the mommy wasp digs a tunnel, up to a meter long, with a number of side passages. Those passages lead to chambers. Each chamber is the home to (I think) a single larva and up to six paralyzed bees.

Later Brian
The wasp seems to be following a plan: when it's paralyzed a bee, it carries it back to the mouth of the main tunnel. It leaves the bee next to the entrance hole while it goes down to prepare the chamber. Then the wasp comes up, grabs the bee, and drags it down into the chamber.

Later Brian
Some biologists with nothing better to do with their lives than torment wasps do this game where they wait for the wasp to be down working on the chamber, and then grab the bee and move it away from the entrance. So the wasp comes out, and there's no bee there.

Later Brian
At this point, it seems best to resume with what I originally recorded, starting just after the wasp fails to find the bee. Be aware that I confuse bees and wasps twice more, but we'll have a special guest appearance by Dawn to make corrections.

Brian Marick
That triggers behavior where it searches for the bee, finds the paralyzed bee, [and] moves it back to the hole. And then... it goes back down and prepares the chamber that it already prepared!

Brian Marick
Because "bee next to hole" is putting into the environment the affordance for the next step of the "plan," which is to prepare the chamber. So the bee looks like it has a step-by-step plan.

Dawn
No, dear, it's the wasp that looks like it has the step-by-step plan.

Brian Marick
But in fact, all it's doing is: at every step of the way, it installs in its environment an affordance to cause the next step...

David Chapman
Yes, that's very similar.

Brian Marick
... which sounds dumb of the wasp. But realistically, evolutionarily, how often do wasps run into biologists who like to move their bees? It must happen... it must happen sometime because [wasps] do have this pattern of "I have prepared, I come up, no bee, institute a search pattern to find the bee." So maybe, I guess, bees maybe get blown by the wind or something. And so they have the affordance of: "come up, find wasp...

Dawn
Bee!

Brian Marick
... causes the direct action: drag wasp...

Dawn
Bee!

Brian Marick
down." Whereas the affordance "come up, no wasp... come up, no *bee*" has the affordance: "institute a search routine."

Brian Marick
And so your claim and Phil Agre's claim, and the claim of various people like Andy Clark, is that a lot of our seemingly intelligent behavior is like that. It's not planned. It's dropping affordances in our environment, or in our bodily configuration, that prompt the next activity. And as you say, that's convenient, because you don't have to check to see if the plan is going according to plan, you just do the next thing, whatever the next thing the environment tells you to do is.

David Chapman
Yep.

Brian Marick
I don't actually have any more questions about Pengi... Oh, I actually do. How good was Pengi?

David Chapman
Um, it was superhumanly good in some ways, and terrible in others. It was superhumanly good because it could track more moving objects, and track them better, than people can. So it was unrealistically visually apt. It was much less good than a reasonably competent human player. We did not implement a very large number of these affordances and the actions to take based on them. We were expecting that, after the Pengi paper was published, we would do a lot more work to flesh those out. For various reasons that I can't fully remember, that didn't happen. I think the main thing was Phil was under time pressure to complete a PhD. And I wanted to take the research in a slightly different direction. And so we abandoned Pengi without its having a very large repertoire of routines.

Brian Marick
Another question: How easy was Pengi to maintain, to extend, to work with, compared to your average program and perhaps compared to some of these overly elaborate conventional AI programs?

David Chapman
Ah... so the culture of AI at the time was that basically you threw together code that minimally illustrated whatever point you were trying to make, and principles of software engineering – to the extent that they were understood at the time – were utterly ignored. So it's a bit difficult to answer that question because maintenance was a total non-thing. And the code was... it would be covered in comments, you know, of the "TODO: make this not be an utter kludge." [It was] a big pile of Lisp code that was quite unlike anything anybody had ever written before. So it was a big pile of kludges.

Brian Marick
Okay. Do you have any other comments for the listenership?

David Chapman
No, I am... I guess I can say I didn't realize you had a podcast until [you] contacted me a week or two ago. And I've listened to several episodes and really enjoyed them, so I can recommend to anyone who's listening just to this episode that they check out other ones.

Brian Marick
I will definitely leave that in! Okay, well, thank you for talking to me.

David Chapman
Sure. It's been fun.
