Thursday, December 27, 2018

A little bit of history repeating

Wikimedia Commons, Hubert Berberich
So a few weeks ago, I was looking for something on my "old stuff" hard drive, and I ran into some essays I wrote around 1988. That was back when I had my IBM portable PC. It weighed thirty pounds and had a little orange-text screen, and it was a pretty good heater if you sat in bed with it on your lap (and legs).

Anyway, as I said, instead of looking for whatever I was supposed to be looking for, I started reading these old essays. And I noticed something strange about them. They were written around the time I discovered complexity theory and roughly a decade before I learned anything about stories. At the time I thought I'd be an ethologist (animal behaviorist) forever, and I gave little thought to my own species. But here's the strange thing: those old essays sound a lot like the things I've been writing lately about participatory narrative inquiry. I think you might be interested in hearing about that.

I've always thought PNI started during my two years at IBM Research (first during my explorations of questions about stories, and then when Neal Keller and I created what we called the "Narrative Insight Method"), then developed further through my research and project work with Dave Snowden and Sharon Darwent. But I wrote those essays ten years before IBM. I wasn't thinking about stories, or even people, in 1988. I was thinking, however, about how organisms of every species look back on their experiences and make decisions.

I'll show you some of the writing so you can judge for yourself, but if this connection is real, it means that at least some of the roots of PNI go back not twenty but thirty years, to the days when complexity theory changed the way I thought about social behavior. And if that's true, it raises the possibility that PNI is the inevitable result of taking complexity into account when considering the behavior of social species such as our own.

On hierarchy as help and hindrance

This is the first of the three essays I think you might like to read. It came directly out of a feverish run through some books and articles about complexity and chaos.


Multiplicity. Even the word is too long. Have you ever sat very still and thought about how many there are of everything? Try it for a while — but only for a little while, because it's dangerous. You can go in either direction; the confusion is marvelous in both the infinite and the infinitesimal. Think big: towns, nations, worlds, galaxies. Think small: bodies, molecules, electrons, empty whizzing space. Space in either direction.

It's a paradoxical result: the contemplation of complexity leads to the homogeneity of the void. Everything there is turns out to be only a small part of everything there isn't. If the universe were made of numbers, most of them would be zero.

So here we are, a bundle of neurons and some other cells, in the middle of this complex void. We, among all the animals, have the ability to see outside our native scale to other measures of time and space. How do we cope? How do we read the mail or shop for food without suddenly, paralyzingly, confronting the enormity of it all?

The answer lies in a special feature of the human mind that seems to have evolved specifically to deal with the burden of awareness: hierarchy. We organize things. We divide time into centuries, years, seconds. We divide space into light years, kilometers, microns. Think about anything we experience, and we will have arranged it hierarchically. What is a child's first reaction to a number of blocks? To pile them up. To make, not a group of equal components, but a smaller number of nested units composed of those components. In hierarchy lies safety.

It is precisely for this reason that it is necessary, at times, to put away the crutch of hierarchy and try to stand unaided on the shifting sands of complexity. Maintaining an awareness of other-than-categorical connections among elements of disparate origin requires that we — sometimes, temporarily — place them all on the same level. To discover similarity in the shape of a leaf, a differential equation, and the swoop of a flute, we must suspend our hierarchical definitions and allow new connections to leap up from a flat sea of perception.

As a visual image, I like to shape each piece of information into a tiny sand grain in a flat wide desert. All are equal; all contain only the crucial property of being observed. Then experience, intuition, and thought, like a warm wind, catch up these grains and form them into new and ephemeral patterns of truth.

Letting the mind loose in this way, by consciously breaking down some of the barriers that subdivide our experience, allows our integrative genius to work on the raw material of reality and produce exciting results.



Grand Canyon, Wikimedia Commons, Pescaiolo
I remember the image that was foremost in my mind when I wrote that essay: the Grand Canyon (in the Western US). I spent a lot of time in those years thinking about making sense of complexity, and I kept going back in my mind to the times I had visited the Grand Canyon and had been stopped in my tracks by its complexity.

How is it possible, I wondered, to live in full awareness of the complexity in the universe? In its enormity, its detail, its mesmerizing intricacy, its worlds within worlds? Must we become numb and stupid to carry on in the face of such wonder? Can we?

And I remember how I solved the conundrum — or rather, how the solution came about, because it was more of a reception than a creation. One day, in the midst of this dilemma, I was eating a sandwich while contemplating the blades of grass in a field (another Grand Canyon) when the answer suddenly came to me: The elements. The alphabet. The types and categories of things. In the Bible, Adam gives names to the types and categories of animals. Why does he do that? Because he has to figure out some way to live in a sea of complexity. So do we.

We cannot cope with an inconceivable number of things, but we can cope with an inconceivable number of combinations of a conceivable number of things. Focusing on the classes instead of the instantiations makes it possible to live life without being overcome with awe. The hierarchies we create are the fictions we need to stop our over-developed awareness from damaging our sanity. From this perspective, what Plato was after was not truth itself, but fiction whose purpose is to help us cope with truth.

Just look at how our hierarchies help us. The alphabet shapes the wild sounds we make and hear into neat, predictable groupings. The periodic table (and the types and categories of stones) makes the Grand Canyon not only bearable, but enjoyable. Biological nomenclature corrals the countless hordes of beasts and vermin into compact species, nested inside genera, families, orders, phyla, and kingdoms. The laws of physics transform the shocking realities of physical life — rushing, falling, colliding — into manageable formulas. Wherever we find unpredictable complexity, we build predictable, complicated maps to help us make our way through it. Without those maps we would be lost.

But the solution of complication comes with a price, and the price is amnesia. At the start, our maps are conscious creations, and we discuss and experiment as we refine them to suit our needs. But eventually, inevitably, we forget that our structures are fictions and our conditions are choices, and our maps become our prisons. Every map we build becomes the territory it once represented, and only in the places where it has worn bare can you see the reality that still lies beneath it.

How this idea influenced PNI

The fingerprints of this idea are all over participatory narrative inquiry. To begin with, all PNI projects start by suspending assumptions about "the way things are" and preparing to listen to the way things really are, in the minds of the people who have lived through whatever it is we want to think about. This is nothing less than the deliberate destruction of hierarchy — temporarily, thoughtfully, and for a reason. We roll up the map and put it aside, and we walk unaided on the ground.

I have said before that when you listen to someone telling you a story, you have to listen past what you thought they were going to say, past what you want them to say, and past what they ought to say, until you get to what they are actually saying. In practice, this means that in PNI we don't address research questions or gather information or question informants or apply instruments. We start conversations, and we listen. We let people choose the experiences they want to tell us about, and we invite them to reflect on those experiences with us. The way we set up the context for the conversation, the questions we ask, and the way we ask them — all of these things work together to push past the structures of our lives to the reality that lies beneath them.

We are not, of course, so deluded as to believe that we succeed in this entirely. Every PNI project both succeeds and fails at getting to ground truth. But we try and we learn. I learn something new about engaging participants and helping them draw insights from their experiences on every project I work on; and so do all of us who are doing PNI work.

The idea of temporarily and purposefully dissolving structure comes up again in PNI's technique of narrative catalysis, where we look at patterns in the questions people answered about their stories. One of the rules of catalysis is to generate and consider deliberately competing interpretations of each pattern we find. As a result, catalysis never generates answers or findings, but it always generates questions, food for thought, material for making sense of the map in relation to what lies beneath it.

Sensemaking is the place in PNI where the map and the land come into the strongest interaction. It is in sensemaking that the map is rolled out again, but (to extend the metaphor) with a group of engaged people standing under it, actively mediating between the map and the land it represents, negotiating, adjusting, rewriting. When PNI works well, the map emerges from sensemaking new-made, better, more capable of supporting life — until the next time it needs updating.

So you could say that PNI is a solution to the solution of life in a complex world. It's that little spot of yin in the yang that makes the yang survivable.

Is PNI unique in this? Of course not. Lots of methods and approaches do similar things for similar reasons. All the same, I find it fascinating to realize that the roots of PNI stretch further back than I thought they did, and further out than social science or psychology or, really, anything human. I knew nothing about sensemaking (in the way Weick and Dervin wrote about it) back then; but coming from the study of social behavior in a variety of species, I arrived at a similar place. That's just . . . cool.

On optimality and incomplete information

Here's the second essay. This one was from a little later, when I was over having my mind blown by complexity theory and was starting to use it to hammer away at foraging theory (the particular part of ethology I found most interesting).



When biologists speak of the use of information by animals, they usually consider the question of what an animal should optimally do given that its information is less than perfect. In my opinion, the study of "imperfect information," as it is called, has been marred by two problems.

First, information has always been assumed to be about the environment. But if one considers the totality of information that could possibly be used to make decisions, it also includes information about the internal state of the individual making a decision and information about how the environment affects the future internal state of the individual.

Second, studies of imperfect information have a hidden assumption of awareness that may or may not be realistic. They ask the question of what an animal should do based on its knowledge that its knowledge is incomplete. For example, Stephens and Krebs (1986) ask, "How should a long-term rate maximizer treat recognizable types when it knows that they are divided into indistinguishable subtypes?"

Do we have any proof that animals are at all aware that the information they hold is incomplete? Is not the knowledge of the inadequacy of one's knowledge a type of information in itself, a type of information that we cannot assume animals have access to? I would hold that animals always act as if they had complete information, since they cannot know that their information is incomplete. The question then becomes one of constrained optimization within the information base available.

More interestingly, the behavior of animals acting optimally with incomplete information is then removed from its promise of being optimal in the overall sense, in the sense that the animal always performs the correct behavior for the conditions at hand. This should more closely approximate real behavior than theories that assume knowledge of ignorance. In other words, knowing that you know nothing is knowing something, and this is something that we cannot assume animals know.

If you look at incomplete information in this way, it is a lot simpler. Optimization just becomes optimization under a blanket of uncertainty, and is no longer especially correct or adaptive. Maximally optimal organisms might still make wrong decisions based on incomplete information, because optimality and infallibility might not always be perfectly linked. This means that we should watch not what should evolve, but what does evolve given the amount of information available (including information about what information is available).

Which leads into my next point: that the value of increasing information is not necessarily monotonically increasing. And that there are types of information we don't consider, such as internal information (where I am coming from) and relational information (how it all fits together).

It is a point of constraints. Evolution optimizes behavior inside of the constraints of what an animal can possibly know. But natural selection doesn't know that animals don't know everything. Obviously any animals that are aware of their inadequacy will win out over others that always think they are right; caution should win. But how does caution evolve? If there is a population of animals eating two prey items which they cannot distinguish (say noxious models and good mimics), and one organism evolves that knows that it cannot know which are models and which are mimics, then by definition it knows that models and mimics exist, which is distinguishing between models and mimics. Right?

Or if a population exists which samples from a distribution of prey energy amounts, and one individual evolves that knows that its sample is not completely representative of the universe of prey types, then does it not know something about the universe of prey types (if only that it is or is not adequately represented by the sample) that it by definition cannot know?

In statistics, we take a sample of a universe of data and hold it to be representative. We know that it should be representative because we have some idea of the larger universe from which we selected it. The point is that we have selected the sample. I don't think animals select a sample. I think they only have access to a sample.

Animals live local lives. They cannot know that the prey types they encounter are only one percent of the prey in a particular forest, or 0.00009% of all the animals of that species. They can only see what is given to them. Therefore they are not aware that any more exists. To them, the sample is the universe, and they base their decisions on it. They may have some uncertainty, but they cannot quantify it as we do when we know that our sample is 9% of the universe. What way of telling the size of the universe do animals have? None. Perhaps they have a rough idea that 90 bugs is not a good sample, but does not the number of bugs change all the time?



That second essay ends a little abruptly, doesn't it? I don't remember why. Anyway, that idea grew into my master's thesis, which would have grown into a Ph.D. dissertation if the department I was in at the time had been willing to consider simulation modeling a legitimate form of research. It was not, and I left science in a huff. (But I have written about the idea a few times over the years, and that makes me happy, so I'm good.)

In case my primary argument in that essay was not clear, I'll put it more simply: Never assume anyone knows what they don't know. That sounds obvious, but it's a hard habit to break.

Funny story: around the time I wrote that second essay, at a reception after a talk, I had the opportunity to ask John Krebs (of Stephens and Krebs foraging theory fame) a question about foraging theory. I have spent decades puzzling over the conversation, which went like this:
Me: What do you think about the idea that foraging theory anthropomorphizes animal knowledge and information use? I think there might be things we're not seeing because we don't think like other species do. I wonder what would happen if we approached information from a different point of view, from their point of view, as if we thought the way they think.
Krebs: How long have you been in graduate school?
Me: Two years.
Krebs: You'll get over it.
I still can't make out what he meant by that. Did he mean that ethologists don't anthropomorphize animal knowledge and information use? Or that they do, but they can't do anything about it? Or that nobody cares? Or that I should shut up and do as I was told? I still don't know.

But I wasn't the only one thinking about the issue. In the years after that, I attended several lectures on research that suspended assumptions about the way animals thought and, as a result, discovered some surprising things. In the study I remember best, researchers took birds of a species that was famous for having no courtship ritual whatsoever, filmed them interacting, and slowed down the film, only to find an elaborate courtship ritual playing out so quickly that the human brain cannot see it happening. I remember being so excited during that talk that I could barely sit still, because it confirmed what I had been thinking about the way we went about studying animals and making claims about their behavior.

Another study I remember proved the now-well-known fact that putting colored bands on birds' legs and studying their social relations is a bad idea, because having a colored band on your leg changes your social standing. That seems obvious now, but it was quite a revelation at the time. Another study revealed that some male fish mate by pretending to be female fish. This pattern was hidden in plain sight for decades, because everyone who saw it assumed it must be a misunderstanding or a fluke. Then it was elephants communicating at frequencies we can't hear, and plants sending messages in wavelengths we can't see, and the surprises just kept coming. I haven't exactly kept up with new developments in the field of ethology, but the little I have seen has given me hope that researchers are continuing to explore animal behavior in new and creative ways.

How these ideas influenced PNI

What does this essay have to do with participatory narrative inquiry? Lots. I can see influences on the development of PNI that came from each of the three points I made (about the limits of knowledge, the types of knowledge, and the value of information).

PNI and the limits of knowledge

You've probably heard about a thing in psychology called the Dunning-Kruger effect, where people become over-confident in an area because they are unaware of their ignorance. Back when I wrote that second essay, I was trying to express my feeling that ethologists had developed two simultaneous manifestations of a relational Dunning-Kruger effect, thus:
  1. The normal, self-reflective version, in which they overestimated their own knowledge about the knowledge of their study subjects, plus
  2. A vicarious version, in which they attributed knowledge of the limits of knowledge to their study subjects, when their study subjects had no such knowledge about the limits of their own knowledge.
What I didn't realize until recently is that (a) people don't just do this with respect to animals; and (b) I've never stopped thinking about the problem.

Let's think about animals for a second. Animals almost certainly don't sit around worrying all day about how much they know and how much they don't know. They know what they know, and they assume that's all there is to know. As far as we can tell, we are the only organisms that think about how much we don't know. So any random human being is likely to know more about the limits of their knowledge than any random dog or cat. But that doesn't mean we all know a lot about what we know and don't know; it just means we all know something about it.

I would guess that there is a normal distribution of awareness about knowledge limits. Some small number of people are probably aware of their ignorance to the point that they can take it into account in their decision making. The majority of us are dimly aware of the boundaries of our understanding, to the extent that we can apply rules of thumb and margins of error when we feel vaguely under-confident. And another small number of people are probably almost as clueless about the limits of their knowledge as any random dog or cat.

Figuring out how much any given person knows about how much they know is not an easy task, even when you can talk to them. How do you ask someone how much they don't know about something? You can test them to find out how much they know, but if you want them to estimate how much they don't know, don't you have to tell them the scope of the topic before they can make an estimate? And then don't they know more than they did? And then do you have to describe what's beyond that so they can make a new estimate? It's like trying to count the weeds in a pond when your only method of counting makes more weeds grow.

So I'm not sure the question is that much easier to answer with people than it is with animals. But that doesn't mean we don't need to keep trying to answer the question; in fact, we need to answer it even more urgently with respect to each other. As social animals, we spend a lot of mental energy trying to figure out what other people need and how they will respond to the things we do and say. Everybody needs to do that in daily life, but when we are in a position to help people, we need to do it even more. If we think people know more about their needs and their limitations than they actually do, we are apt to predict their needs and responses wrongly, and we might end up hurting people instead of helping them.

Sometimes I think people give up trying and simply pretend they know what other people know about the limits of their knowledge. And then when someone asks them how they know that, they say things like "You'll get over it." Not getting over it — by actively pursuing answers to that question — is one of the goals of participatory narrative inquiry. In a sense, PNI came out of thirty years of my not getting over my original desire to make sense of perspectives that are different from my own.

Ignaz Semmelweis, Wikimedia Commons, Eugen Doby
A tragic example of what happens when you make erroneous assumptions about other people's knowledge of their limitations can be found in the story of Ignaz Semmelweis, the nineteenth-century doctor who famously tried (and failed) to convince other doctors to wash their hands after dissecting corpses and before treating pregnant women. (Actually, doctors were washing their hands, but with ordinary soap, which did not kill enough streptococcal bacteria to prevent subsequent infection.)

According to Wikipedia,
Semmelweis described desperate women begging on their knees not to be admitted to the First Clinic [where physicians also examined cadavers; in the Second Clinic, midwives did not, and the death rate was much lower]. Some women even preferred to give birth in the streets, pretending to have given sudden birth en route to the hospital (a practice known as street births), which meant they would still qualify for the child care benefits without having been admitted to the clinic. (Wikipedia)
Semmelweis wrote a series of articles advancing the theory that "cadaverous particles" were the sole cause of patient infections. His theory was attacked on many grounds, some reasonable, some questionable, and some simply prejudiced (such as the belief that his theory arose solely from his Catholic faith). He did not react well to these criticisms, becoming more and more combative, drinking heavily, and calling doctors who refused to change their practices "murderers." At the age of 47, Semmelweis was tricked into entering an insane asylum, held there against his will, and severely beaten, dying weeks later from his injuries. Only with the discovery of germ theory two decades later was he proven right — not as to his explanation of his findings, but as to his belief that lives could be saved by the measures he tried to promote.

The widespread rejection among Semmelweis' contemporaries of what today seems like common-sense advice has often been used as an example of blind perseverance in the face of contradictory evidence. But I'm not as interested in how other doctors reacted to Semmelweis' advice as I am in his failure to understand and adapt to their needs and limitations.

Ignaz Semmelweis was a man who cared deeply about his patients. He was "severely troubled" by the high incidence of puerperal fever in the wards he administered, writing that it "made me so miserable that life seemed worthless." These strong feelings set him apart from many doctors of the time; and later, his unique experiences set him even further apart. The death of a close friend and colleague, Jakob Kolletschka, forcibly and painfully challenged Semmelweis' views on infections and autopsies. He recounts the incident thus:
I was immediately overwhelmed by the sad news that Professor Kolletschka, whom I greatly admired, had died. . . . Kolletschka, Professor of Forensic Medicine, often conducted autopsies for legal purposes in the company of students. During one such exercise, his finger was pricked by a student with the same knife that was being used in the autopsy. . . . [H]e died of bilateral pleurisy, pericarditis, peritonitis, and meningitis. . . . Day and night I was haunted by the image of Kolletschka's disease and was forced to recognize, ever more decisively, that the disease from which Kolletschka died was identical to that from which so many maternity patients died. (Wikipedia)
Notice the words Semmelweis uses here. He was forced to recognize the connection, and ever more decisively, meaning that he must have revisited the tragedy over and over, as we do when someone close to us dies. Even his choice of the word haunted implies repetition: a place haunted by a ghost is said to be "frequented," that is, visited frequently. In this light, Semmelweis seems less a visionary than a man tormented by the consequences of his limited vision. If he had never experienced such a deep despair over his inability to make sense of the patterns he saw, he might have been as reluctant to examine the limits of his knowledge as the doctors he tried to convince.

It seems to me that Semmelweis' failure might have sprung in part from his inability to understand the impact of this experience on his awareness — and the impact of the lack of such an experience in the careers of his contemporaries. Consider the fact that one doctor Semmelweis did convince had a similar experience to his own:
Professor Gustav Adolf Michaelis from a maternity institution in Kiel replied positively to Semmelweis' suggestions — eventually he committed suicide, however, because he felt responsible for the death of his own cousin, whom he had examined after she gave birth. (Wikipedia)
Semmelweis seems to have assumed that other doctors were as haunted by their ignorance as he was; but it sounds like most of them were not. The theory of the four humours was in full force at that time, and most doctors probably felt no need to venture past its readily available explanations. They were satisfied with the state of their knowledge, saw no gulf beyond it, and were content to carry on as they had always done.

I wonder if Semmelweis would have gained more traction if, for example, he had refrained from advancing any theory at all and had suggested changes to practice solely on the basis of the evidence he had collected. After all, he could have proposed his changes without attacking the predominant medical theories of the day. Neither he nor anyone else at the time could explain why the washing of hands with a chlorinated lime solution greatly reduced the incidence of infection in maternity wards; but the fact that it did reduce the incidence of infection was not in dispute.

As I said above, such an inability to imagine the experiences and mindsets of other people, based on erroneous assumptions about the nature and limitations of their knowledge, is something we directly seek to address and correct when we carry out projects in participatory narrative inquiry.

How do we do this? We ask people to tell us what happened to them, and we ask them questions about their knowledge and awareness during the events of the story. We ask questions like these:
  • How predictable was the outcome of this story? Did you know what was going to happen next?
  • What in this story surprised you? What do you think would surprise other people about it?
  • If this story had happened ten years ago, how do you think it would have come out then? What about fifty years ago? What about in another location?
  • What could have changed the outcome of this story? What makes you say that?
  • What did the people in this story need? Did they get it? Who or what helped them get it? Who or what hindered them?
  • Does this sort of thing happen to you all the time, or is it rare? What about to other people you know? What about to people you don't know? Can you guess?
  • If you could go back in time to the start of this story, what would you say or do to help it turn out differently? What would you avoid changing?
The answers to these questions help us understand not only what happened to people but also what they know and don't know about it. Sometimes the most illuminating answer is "I don't know." And we sometimes ask follow-up questions, like:
  • Why did you say "I don't know"? 
  • What does that mean to you, that you didn't know?
  • What would you like to know?
  • How do you think you could find out? 
People facing situations like the one Ignaz Semmelweis faced can ask questions like these to understand (as much as anyone can) the perspectives, needs, and limitations of those they are trying to help.

PNI and the not-always-increasing value of increasing information

Now let's get back to the second point in the second essay: the value of increasing information. When I wrote that essay, I was concerned about an assumption I found distributed throughout the scientific literature on foraging theory: that the value of increasing information increased monotonically. In all of the models and theoretical frameworks I read on information use, more information was assumed to be better than less information. I didn't see why that should always be the case. In particular, I thought the assumption might be problematic in situations where individual choices are interlinked in a complex network of mutual influence.

So I wrote a computer simulation to find out whether "smarter" individuals with somewhat better information about density-dependent resources would always out-compete "dumber" individuals with less information. ("Density-dependent resources" are resources whose value to each individual depends on the number of other individuals drawing from it, like a bird feeder that holds the same amount of food whether five or fifty birds visit it.)

According to foraging theory, there was no point in writing such a simulation, because the outcome could be predicted in advance; but I wrote it anyway, because I was curious. Surprisingly, the "smarter" simulated allele did not go to fixation (exclude all other alleles) in the population. Rather, the two alleles kept returning to a roughly 75/25 ratio, representing (for that simulated situation) a "mixed evolutionarily stable strategy," that is, one in which a mixture of strategies outperforms any single pure strategy.

A lot of birds, Wikimedia Commons, NASA
It took me a while to figure out why this was happening. After I spent some time watching my simulated organisms make their decisions, I realized that what I was seeing made perfect sense. The smart individuals would find out exactly where the best food sources were and rush to them, only to find all the other smart individuals there dividing up the food. The stupid individuals would wander aimlessly from place to place. Most of the time they'd get nothing but the crumbs left over, but sometimes they'd find themselves feasting at a "bad" food site that was nevertheless better than the "good" sites the smart crowd was picking to pieces. After a while, I couldn't get the joke "nobody goes there anymore, it's too crowded" out of my mind.

The result I got was counter-intuitive from the standpoint of foraging theory because there was an inconvenient trough in the value of increasing information. The smart organisms knew that a food source was better, which was more than the stupid organisms knew; but they didn't know what all the other organisms were about to do. Their intermediate level of information was thus sometimes better and sometimes worse, such that the net value of the increase was not enough to eliminate the relative value of stupidity. So the greatest fitness, at the population level, lay in a mixture of strategies, including some that had no obvious value on their own. (I should mention that the idea of an optimal mixture of strategies goes all the way back to Cournot's 1838 concept of a duopoly; but still, the idea was not commonly applied to foraging theory at the time I was thinking about it.)
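
In case you're wondering what such a simulation might look like, here is a minimal sketch of the general idea in Python — not my original model, which is long gone, but a toy reconstruction. Every number in it (patch count, food values, rounds, population size) is a placeholder assumption, and the ratio it settles on depends entirely on those choices; the point is only that neither strategy needs to drive the other out.

import random

N_PATCHES = 10     # number of food sites
POP_SIZE = 100     # constant population size
ROUNDS = 50        # foraging bouts per generation
GENERATIONS = 200

def run():
    population = ["smart"] * 50 + ["dumb"] * 50  # start at a 50/50 mix
    for gen in range(GENERATIONS):
        energy = [0.0] * POP_SIZE
        for _ in range(ROUNDS):
            # Patch quality varies from round to round.
            patch_food = [random.uniform(1.0, 10.0) for _ in range(N_PATCHES)]
            best = patch_food.index(max(patch_food))
            # "Smart" foragers all converge on the richest patch;
            # "dumb" foragers wander to a random patch.
            choices = [best if s == "smart" else random.randrange(N_PATCHES)
                       for s in population]
            # Density dependence: each patch's food is split among its visitors.
            counts = [choices.count(p) for p in range(N_PATCHES)]
            for i, p in enumerate(choices):
                energy[i] += patch_food[p] / counts[p]
        # Reproduction weighted by energy gathered; population size stays constant.
        population = random.choices(population, weights=energy, k=POP_SIZE)
        if gen % 40 == 0:
            print(gen, population.count("smart") / POP_SIZE)

run()

If you run something like this, you can watch the mechanism play out: because the smart foragers split the best patch so many ways, a lucky wanderer alone at a second-rate patch sometimes does better — the "nobody goes there anymore, it's too crowded" effect in miniature.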

Now let's come back to participatory narrative inquiry. Situations in which complex interactions influence the options, choices, and behaviors of everyone involved are also situations in which PNI works best — and I now realize that this is probably not an accident. PNI is at its most useful at times when it seems like you know enough to come up with a viable solution, but you have been stymied by missing information you can't guess at. In fact, most PNI projects start from a situation in which, even though "everyone knows" what the problem is, prior attempts at solutions have shown the current level of knowledge to be insufficient. You could even say that the whole reason PNI exists is to compensate for troughs in the value of intermediate levels of information in complex situations.

That's why surprise is such an important part of PNI. I've noticed that on every PNI project, somebody is surprised by something. An assumption is overturned, a trough turns into a peak, and new options open up as a result. I've always found this to be profoundly satisfying, and now I know why.

PNI and types of information

The third thing that bothered me about foraging theory when I wrote that essay was how researchers used the word "information." Whenever people gave examples of information in the papers and books I read, it was nearly always about facts external to the organism: where food could be found, how much energy could be found in the food, weather conditions, and so on.

But that can't be the only information an organism needs, I thought. There must also be internal information, such as the organism's hunger or satiety, its health, its age, its reproductive state, and so on. An animal with excellent knowledge about its internal state should out-compete an animal with poor internal knowledge, right? But nobody seemed to be studying internal information, or even acknowledging that it existed.

Bird on branch, Wikimedia Commons, Mathew Schwartz
And there must also be another type of information, I thought: some idea of how all the other pieces of information fit together. I called this relational information. For example, if I am a tiny bird perched on a branch in mid-winter, I must know that I am in danger if I don't obtain enough food to replenish my fat stores to a certain extent. Such information may only be "known" at the level of an instinctual urge, but it should exist in some way, because it must stand behind the decisions animals make about how much energy to expend on foraging. Should I stay on the branch and conserve my heat, or should I swoop down in search of food? Without internal and relational information it's hard to make such a decision.

So I wondered why researchers never seemed to pay attention to either internal or relational information, even in theoretical considerations of animal behavior. My guess was that these types of information were so much harder to observe and control that people tended to ignore them. It's easy to vary the values and distributions of food sources and then watch what animals do in those situations, especially when you can see them evaluating the obvious differences between the food sources. Trying to figure out what animals know about their internal states and how the world works is a more daunting challenge. But that doesn't mean those types of information don't exist or don't matter.

Now let's think about how this applies to participatory narrative inquiry, because, of course, it does. Just like the researchers whose papers I was reading back then, we all theorize about the mental and emotional states of the people whose needs, limitations, and probable responses are important to us. We do this individually every day, and we do it collectively when we embark on a project to solve problems or improve conditions in our communities and organizations. And like those researchers, we have an easier time thinking about external information than internal or relational information.

That's something I have noticed when I talk to people who are just starting out doing PNI work. If you visualize all the questions you could possibly ask about a story as a set of concentric spheres around it, people always seem to start out in the outermost sphere. They ask questions about the basic facts of the story, like:
  • Where and when did this take place?
  • Who was involved?
  • What was the topic or theme of the story?
  • What problem was solved? Who solved it? What was the solution?
After they've gotten more practice thinking about projects, people start moving inward, inside the bubble of their participants' experiences, to where internal information is important. They start asking questions only a story's teller can answer, like:
  • How did you feel when this happened?
  • What surprised you about it? What did you learn from it?
  • What do you wish had happened?
  • What helped you in the story? What held you back?
Finally, experienced PNI practitioners move into the center, where relational information (that is, beliefs and values) can be found. They start asking questions about what the storyteller thinks the story means about the way the world works, like:
  • Why do you think this happened?
  • Does this happen often? Should it?
  • What would have changed the outcome of the story? Would that be better or worse?
  • Who needs to hear this story? What would they say if they heard it? What would happen to you if they heard it?
Another thing I've noticed is that the closer PNI moves to the center of these concentric spheres, the more it deviates from other modes of inquiry. When a PNI project asks questions anyone could answer about a story, it's hard to distinguish from any other kind of survey-based research (and it's hard to make a case for its use). In such a situation, the story is just another data point, and it's not all that critical of a data point either. You could ask people questions at the outermost level with or without a story, and the answers would not be that different. For example, you could ask people to give you a list of all the problems they solved in the past year, and you wouldn't get much of a different picture than if you asked them to tell you a story about a problem they solved.

When a PNI project asks questions closer to the center of experience, however, the story becomes much more than a data point. It becomes a vehicle by which participants can make sense of their experiences, drawing forth internal and relational information they didn't realize they had (or cared about). As a result, when PNI works well, by the end of the project, everyone learns something about themselves and each other.

So in a way, you could say that my work on PNI has been a continuation of my earlier attempts to get people to "move inward," closer to the center of the experiences and perspectives of those they seek to understand.

On experiment and reality

I have one more old essay to show you. It's an appendix to a paper I wrote for a graduate class, apparently in the sociology department, about an experiment on social interactions among fish. At first I didn't remember the project described in the paper, but as I read I began to remember bits of it. What I remember most is that I did the project in the "fish room" of the biology building basement. The light switch in that room was wired badly, and two or three times I got an electrical shock when I flipped the switch with wet hands. That's a thing you remember.

Most of the experiments I did in my early days as a wannabe-ethologist had to do with social interactions: dominance hierarchies, how kin find each other, tit-for-tat balances, methods of communication, social signaling, intention movements, and so on. I was intensely curious about the evolution of social interactions, because . . . well, I still can't understand how anybody could not be intensely curious about that.

Pumpkinseed sunfish, Wikimedia Commons, Kafziel
The experiment went this way. I netted 150 pumpkinseed sunfish from a pond and put them in a tank. (Or somebody netted them. It says "for use in another experiment.") From those 150 fish I picked out ten groups of three fish of roughly equal size (because any big-fish-little-fish contest is a foregone conclusion).

For each of the ten groups of three fish, I followed this procedure:
  1. Isolation: I put all three fish in tanks by themselves for five days.
  2. Establishment of dominance: I put two of the three fish together and watched them until I could see which one was dominant. (They peck at each other, like chickens.)
  3. Re-isolation: I isolated the loser of the dominance contest for another day. (The winner got to go back into the big tank.)
  4. Test: I put the loser from the previous encounter together with the third (still isolated) fish and watched what happened.
What was supposed to happen, according to prior research, was that the losers in the first contests would remember their low status and lose in the second contest as well. What did happen was that eight of the ten losers won the second time around. As I explained in the paper, this could have meant a wide variety of things, but it could not really be said to mean anything, because the sample size was so tiny. I knew that going in, and so did my professor. It was just a practice project to write a practice paper.

None of that is interesting. What is interesting (to me, now) is that I wrote an appendix to the paper, and that appendix, even though it's mostly a jokey thing I wrote to myself, connects with participatory narrative inquiry. I can't say whether I actually submitted the appendix with the paper or just kept a copy for my own amusement. In any case, here's what I wrote.



Appendix: The Poorly Understood and Sorely Neglected Behavior of Pumpkinseed Sunfish in Laboratory Tests.

As I reviewed the literature for this experiment, and again as I watched the fish setting up dominance relationships, it occurred to me that although many descriptions had been published of the social behavior of the pumpkinseed sunfish and other species, never had anyone attempted to describe the peculiar suite of behaviors that is shown when fish are placed in a testing tank and observed. I will now endeavor to present an extended ethogram of the experimental behavior of that species, with due attention to the fish-human interaction.

In the course of my work, I soon realized that I could divide the entire behavior of the fish in the test situation into a series of discrete stages that occur repeatedly and in a predictable sequence.

1) Disbelief (D). When a fish is first placed in a strange tank, or the partition dividing a tank is removed, or some other equally amazing thing happens, the fish's first thought is — "I am dead." This has some basis in nature; when a fish is suddenly caught up and thrown into a new body of water, it is most probably in (a) another fish or (b) a net. Thus the fish upon entering the test arena spends some time in what others may call shock but which I prefer to call disbelief (mostly because it is a longer word, and it simply doesn't do to have scientists running around using small words). Now the state of this poor fish would be almost comical, if one were completely callous and cold-hearted (which I am not!); it lolls about on the bottom or in a corner, sometimes rocking gently, for a period of anywhere from ten seconds to half an hour.

2) Escape (E). At some point (as I have said, this is highly variable and begs further study) the fish suddenly realizes that it is alive. Its very next thought is — "If I'm alive, then . . . I'm trapped! I've got to get out of here!" It then proceeds to push its way out of what it assumes to be water but what most annoyingly turns out to be an invisible force field, or what we humans know better as glass. The fish, as any good Vulcan would do, assumes that there has to be a weak spot in the force field, "Somewhere where the ion magnifier exceeds its photon limit. It is only logical." With its mouth open and its gills flaring, it presses here and there and here and there and there and over in that place and down here and up there — you get the idea.

I can see another parallel for this behavior in nature. Surfaces in nature, be they pond bottoms or stream edges, are mostly made of stones; and stones often have fish-sized holes between them. So a fish trapped in, say, a small pool off a running stream, needs only to poke and prod until it finds a way out. The intensity of this behavior often gets quite high and varies substantially among individuals, due undoubtedly to some differences in susceptibility to claustrophobia.

3) Recognition (R). You may have noticed that so far I have not mentioned interactions between the two fish. Far from being unprofessional and unobservant, I reserved the recognition of another fish for its own stage. At some point one of the fish looks around and gasps — "Good God! There is another fish in here!" And it is from this realization that we get the data point "Attacked first," for that fish usually wants to get a good nip in before its fellow occupant itself reaches the R stage.

You may ask why the fish did not notice its companion before, especially when they both decided to poke at the same spot. Yes, this is another parallel in nature. Fish in the wild get bumped up quite a bit: things are always floating by, children will be throwing rocks, crayfish are scuttling around, outboard motors are making a ruckus. So even the most violent escape attempts by another fish are treated as the usual disturbance — get as far away from it as possible, but for heaven's sake don't stand there gawking at it! Thus it is only in a moment of lucid tranquility that the recognition stage arrives on the fish. To the nipped fish, the R stage is entered abruptly and assuredly, as nothing else feels quite like a pumpkinseed sunfish bite.

From this stage on begin the "normal" interactions we record on our data sheets and analyze, ignoring as good scientists the unseen (but standard! at any rate) behaviors described here.

Perturbations of the normal scheme of things are of two types: relapse and awareness. A relapse is caused by a large disturbance, such as the observer tripping over the blind or camera, dropping something loudly, or banging the testing tank with any number of things. (Not that any of these things has ever actually happened to me; I merely heard of them through other experimenters.) A relapse usually drives both occupants of the tank back to the disbelief stage, from which it is a long wait to realization of life, frantic escape, and back to aggression.

Awareness, the second type of perturbation, is often more devastating for the observer because of its psychological implications. This perturbation occurs when the observer is foolish enough to bump the blind or sit in such a way that a bit of his or her clothing shows (the observer who wears brightly colored clothing clearly knows nothing about fish), or cough (this has produced innumerable disasters to science). At this point the fish becomes aware of the fact that "Something . . . is out there . . . watching me." (Or us, if the R stage has been reached.) The fish assumes a position quite like that taken in the disbelief stage, with the exception that the fish faces the observer, glaring intensely this one thought: "I see you, you disgusting finless giant; I know what you're doing; and whatever it is you are waiting for me to do I will try my hardest to avoid." At this time the observer quite predictably mutters (inaudibly, of course, so as to prevent a relapse) several epithets that would not evoke full cooperation if heard and understood.

This concludes the extended ethogram of the true behavior of the pumpkinseed sunfish, adding precious insight to our scientific understanding of this interesting species.



That essay is a silly little thing, but I had something serious in mind when I wrote it, and I haven't stopped thinking about it in the years since. The more you read about the science of behavior in any species (including our own), the more obvious it becomes that a lot of the findings we rely on were derived in artificial contexts, just like my ridiculous project watching fish interact in empty tanks and pretending it meant anything at all about what their lives would be like in a natural setting. (It was a practice project, but the experiments it referenced and sought to replicate followed similar procedures and drew similar conclusions.)

The most obvious example of such blindness in human research is the much-discussed fact that almost all psychological and sociological research — research that tells us how "humans" think and feel — is done on WEIRD (Western, Educated, Industrialized, Rich, Democratic) university students. The WEIRD acronym comes from the instantly famous 2010 paper "The weirdest people in the world?" (in Behavioral and Brain Sciences, by Joseph Henrich, Steven J. Heine, and Ara Norenzayan). Other researchers brought up the issue before that paper (for example, the "carpentered world hypothesis" was first put forth in 1973), but the WEIRD name has given the discussion new energy.

As a 2010 New York Times article put it:
[A] randomly selected American undergraduate [is] 4,000 times likelier to be a subject [of psychological research] than a random non-Westerner. . . . Western psychologists routinely generalize about “human” traits from data on this slender subpopulation, and psychologists elsewhere cite these papers as evidence. . . . [R]elying on WEIRD subjects can make others feel alienated, with their ways of thinking framed as deviant, not different.
I'm not going to cite any of the studies that demonstrate the flaws of WEIRD research here — they're easy to find — but I would like to mention a few things I noticed in recent discussions that connect to participatory narrative inquiry.

In a blog post called "Psychology Secrets: Most Psychology Studies Are College Student Biased" (on the PsychCentral blog), John Grohol lists the reasons psychologists are still not widening their research populations. Using university students is convenient; it's cheap; it's the way things have always been done; and it's good enough for the time being. You'll get over it, basically.

Grohol then says this:
There’s little to be done about this state of affairs, unfortunately. Journals will continue to accept such studies (indeed, there are entire journals devoted to these kinds of studies). Authors of such studies will continue to fail to note this limitation when writing about their findings (few authors mention it, except in passing). We’ve simply become accustomed to a lower quality of research than we’d otherwise demand from a profession. 
Perhaps it’s because the findings of such research rarely result in anything much useful — what I call “actionable” behavior. These studies seem to offer snippets of insights into disjointed pieces of American behavior. Then someone publishes a book about them, pulling them all together, and suggesting there’s an overarching theme that can be followed. (If you dig into the research such books are based upon, they are nearly always lacking.) 
Don’t get me wrong — it can be very entertaining and often interesting to read such books and studies. But the contribution to our real understanding of human behavior is increasingly being called into question.
I have learned over the years that if I try to defend participatory narrative inquiry as being "scientifically valid" I will fail. PNI just doesn't hold up as a scientific endeavor. Its participants are given too much control over the process for PNI to prove anything conclusively. There's no control group. The sample is self-selected and non-representative. Interpretation is biased and variable. There are no inter-interpreter validation checks. Conclusions are idiosyncratic and local. Results cannot be replicated, not even later on the same day. What it all means depends on whom you ask, and when, and how.

This is what I mean when I say that PNI is not a science; it's a conversation. When you invite people to tell whatever stories they want to, interpret their stories however they like, talk about their stories in groups, and draw their own conclusions, "proof" isn't a very useful word. "Useful" is a useful word. Above all else, PNI aims to be useful.

In a way, PNI is the ultimate anti-WEIRD research paradigm, because it aims for a real understanding of human behavior — that is, an understanding that is contextually situated, internally relevant, externally meaningless (and happy to be so), and purposefully, aspirationally, hope-fully actionable.

Here's one more quote about WEIRD research, from a Slate article, that relates to PNI:
So the next time you see a study telling you that semen is an effective antidepressant, or that men are funnier than women, or whether penis size really matters, take a closer look. Is that study WEIRDly made up of college psychology students? And would that population maybe have something about it that makes their reactions drastically different from yours? If so, give the study the squinty eye of context. As we often add “… in bed” to our reading of the fortunes in fortune cookies, it’s well worth adding “… in a population of Westernized, educated, industrialized, rich, and democratic college students” to many of these studies. It can help explain many of the strange conclusions.
The purpose of PNI is, precisely, to apply the "squinty eye of context" to statements about what is normal, or real, or human, so that they can grow into insights we can use in our lives and communities.

The types and categories of research

As I said above, I take this connection across three decades to mean that PNI was in a sense fated to happen when complexity theory worked its way into the study of social behavior. As a nice side effect, it also means that my professional career has been a lot less rambling and accidental than I thought it was. At least I've rambled over some of the same spots, and that's a comfort.

I can't help but wonder, though, why it took me so long to realize that I was still working on the same issues. Why did I not see that my work on PNI was a continuation of "not getting over" my early concerns about hasty assumptions and unexamined perspectives in social behavior? I don't know. Maybe it was because I left science in a huff. Maybe the idea of "leaving science" was the problem in the first place. Maybe science, or research, shouldn't be so easy to leave.

I don't know if it's because of reaching the twenty-year mark or what, but I've noticed that I've been describing my career differently over the past year or two than I used to. People always ask how I got started doing story work, probably because I don't sound like any sociologists or anthropologists (or storytellers) they know. I used to say "it was an accident" and describe how I applied for a job at IBM Research because my husband was already working there and we could commute together, and I ended up getting hooked on "this story stuff" as a result. That's all true, but lately I've noticed myself saying, "I started out as an animal behaviorist, but after a while I switched species." That always gets a laugh, but probably the deeper reason I say it is that I'd like to have a more coherent story to tell myself. But it's not a fictional story; it's a real connection. So why didn't I see it?

Maybe it's not just me. Maybe it's the way we all think about research. Maybe it's too organized. Maybe it has too many types and categories. Maybe sometimes — temporarily, thoughtfully, and purposefully — we need to place everything on the same level and let new connections appear. Yes, we need more diversity in our research populations (both researcher and researched), but maybe we need new connections among some other things too:
  • sociology, psychology, anthropology, and ethology;
  • proof, utility, and action; 
  • participation, observation, and experimentation; 
  • contextual and universal conclusions; 
  • academia, business, government, and even some out-there independent scholars like me, who bounce around from one field to another, thinking they've crossed vast distances when they've really just been pacing the same small circles for decades.
Why don't we all walk around together finding out useful things? That sounds good. Let's do that.