Wednesday, September 4, 2019

Even more NarraFirma

I have just finished another NarraFirma release, again based on commissions, with more new features. Chiefly this:


I am calling it a "correlation map." It shows relationships among all the scale questions in your project in one graph. When you hover over a link you can see the scatterplot of combined values (as in this example). When you hover over a bubble you can see a histogram of values for that question.
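Under the hood, a correlation map is essentially a table of pairwise correlations among the scale questions, with one bubble per question and one link per correlated pair. Here is a minimal sketch of that idea (this is not NarraFirma's actual code, and the question names and data are made up):

```python
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Each scale question maps to one answer per story (hypothetical data).
answers = {
    "How did you feel?": [1, 3, 5, 7, 9],
    "Did you get what you needed?": [2, 3, 6, 6, 10],
    "How memorable was it?": [9, 7, 6, 3, 2],
}

# One link per pair of questions; a strong correlation (positive or
# negative) would be drawn as a heavier link in the map.
for a, b in combinations(answers, 2):
    r = pearson(answers[a], answers[b])
    print(f"{a!r} <-> {b!r}: r = {r:+.2f}")
```

The scatterplot you see when you hover over a link is just the raw (x, y) pairs behind one of these r values; the histogram behind a bubble is the distribution of one question's list.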

You can also see how relationships among scale questions appear for subsets of stories based on answers to choice questions, like this:

 
Also included in the new release are more improvements to the catalysis process based on user feedback. For details see the blog post on narrafirma.com.

About improvements to NarraFirma

If you are a NarraFirma user, you should know that (a) I am interested to hear about your experiences and suggestions; and (b) I offer an excellent rate for open-source development commissions. You might want to look over the NF commissions page to see some of the ideas I am considering for the future of NarraFirma.

About this blog

I have not been posting on the blog much this year. I am not sure if anybody is reading it. Also, when I can find time between consulting gigs, I am working on two (nested) book projects, and I want to save my precious writing time for those. I will continue to announce things on the blog when I have things to announce; but the days of "feeding the blog" are, I think, over. (Unless a really great idea pops up that demands to be written about. That's different.)

Friday, May 17, 2019

NarraFirma keeps getting better

Last month I received some more development commissions to improve NarraFirma, so I have been working on it again, and I released a new version today.

The focus of this release has been on improving the flexibility and usability of the catalysis process.
  • You can now easily set up multiple observations per pattern and multiple patterns per observation. 
  • Many new options make your graphs and reports look the way you want them to.
  • There are new fields and new report types. 
  • The interface has been improved.
You can review the full list of changes at the NarraFirma blog.

By the way, if there is anything you would like to see improved in NarraFirma, I offer a pretty low development rate (as long as the changes are open source and reasonably useful to everyone). I am happy to talk about any improvements you would like to see.

Tuesday, April 9, 2019

Mail bag: Stepping stones

Hello readers. I have been very busy in the past few months with paying projects. This has been a wonderful thing for my prospects of continuing to do this work! But it has led to the neglect of this blog, and of the book project. I have a few partially written blog posts in development, because I've been learning a lot lately, but I like to wait until they are ready to be of use before I post them. As for the book, I should be able to get back to it soon. At the moment I'm working my way through a new set of development commissions for NarraFirma, plus another ongoing story collection.

However, I am recovering from a stomach bug today, so I wanted to do something simple and fun. I started writing an email reply to a question somebody asked me last week. As I was writing, it occurred to me that some of you might like to read the question and my answer, and that my correspondent probably wouldn't mind if I posted their question here. This is an off-the-top-of-my-head answer, mind you, without any preparation, to take a break on a sick day, but you still might find it interesting.

The question was:
I have a question about how you relate sensemaking to ontology. I have heard the term multi-ontology used in reference to Cynefin, and I am wondering if Confluence as you use it is more useful for describing the environment. Do you have a working term around this? I am trying to connect this to the latest thinking in psychology around ontological pluralism of perception.
Here is the way I understand it. Ontology is the study of existence, of what-is. People used to talk about ontology as one thing, like there can only ever be one ontology to cover everything for everyone. Then some people realized that ontology wasn't really the study of what-is; it was the study of what-is-as-seen-by-those-who-get-to-say-what-is. And then the term began to be used in the plural, and people started to talk about ontological pluralism. (This is my vague sense of what happened. If I had the time and inclination I could look up all kinds of things and write pages about what exactly happened, but ... it was probably something like that. Now you see why it takes months for my blog posts to develop.)

My guess is that the term "multi-ontology" has been used to describe the Cynefin framework because it attempts to give people a way to represent multiple perspectives on what-is. That is, people can use Cynefin to talk about not only what-is in a general sense, but also with respect to various worldviews or mindsets on what-is. The Confluence framework also attempts to do this, but in a way that makes more sense to me.

In the white paper I wrote when I was on the verge of breaking Confluence away from Cynefin, I talked about visualizing "clouds" on Cynefin space (that is, on the "dimensional form" of Cynefin, which was my version of it, and which maybe no longer exists) to represent different perspectives, which could be called ontologies, or views of what-is. That paper represented most of the reason I moved Confluence away from Cynefin, because visualizing clouds drifting across a landscape requires a landscape, not a bunch of boxes. (Maybe the dimensional form of Cynefin was a thing, but it was not a thing people seemed to pay much attention to - and that's fine, but it didn't suit my needs.)
~ ~ ~  

Now, as to whether Confluence is "more useful for describing the environment," I would say that Confluence is more useful for describing the environment when a person is in the right state of mind to use it to do that. When they are not in the right state of mind, it is not more useful; it is less so.

What do I mean by the right state of mind? It's like when people learn about biology. First you learn about kingdoms and phyla and classes and orders and genera and species, and you learn about evolutionary eras, and you learn about stages of embryonic development, and you learn lots of other names for things. You get the most you can from all those boxes, and then they tell you (or you realize) that the boxes are made up, that the boundaries between cells and organs and bodies, between colonies and organisms, between species, between historical periods, between stages of development, are lines we drew, not lines we found.

After you cross that threshold, you begin to explore biology at a completely different level. You see intermingling and interaction you could never have imagined before. You see things that might or might not be alive. You see living things that might be organisms and might be parts of organisms. You see organisms that are both plants and animals (or maybe neither). You see trees that might be forests and forests that might be trees. You even begin to see yourself differently - as both a being and an assemblage of beings. The things you think about become more confusing, and a lot more fascinating.

~ ~ ~

You cannot reach the second level of understanding biology without going through the first level. If I tried to explain biology to you without using any artificial constructions, I couldn't do it. For example, every time I said "species" I would have to qualify the word with a dozen statements about how the species concept is flawed and controversial and challenged by many examples in nature, and how we really shouldn't be putting a lot of weight onto it, but that it has some benefit as a mental construct in spite of its limitations. Two quick quotes:
No term is more difficult to define than "species," and on no point are zoologists more divided than as to what should be understood by this word. -- H.A. Nicholson, 1872.

We show that although discrete phenotypic clusters exist in most [plant] genera (greater than 80%), the correspondence of taxonomic species to these clusters is poor (less than 60%). -- Rieseberg et al., 2006
I would have to explain all of these layers of nuance for every term I used, and we would make very slow progress.

This is exactly why people create categories - useful yet artificial constructions - even though they know that the utility of those categories will decline later on. Categories are both necessary and insufficient. You can find them in every sphere of knowledge, from board games to rocket science. The beginners know the spaces and the experts know the space.

This is why I like to think of boundaries that divide up the indivisible space of what-is as stepping stones to greater understanding.

What bothers me is when people step onto a stepping stone and become convinced that they have reached the other side. The best stepping stones wobble, that is, communicate their weaknesses to those who are in the right place to notice them. When a framework or model does not cause anyone to question it or want to reach beyond it, it stops people from making progress beyond it when it is time for them to do so.

~ ~ ~

The great majority of people start thinking about organization and self-organization by using one of the things-in-boxes frameworks, of which there are many. This is appropriate, because at that stage of thinking, you need boxes to be able to think about organization and self-organization. Attempting to understand those things on a continuous, blended landscape is too confusing to be useful - at that time.

Can you imagine explaining to someone who is trying to learn the various families of birds that the very species (and indeed families) whose names they are struggling to remember have shifted over time? What would they say if you told them that the categories they are learning have been influenced by accidents and personalities and disagreements and mistakes - and fashions and fads, of all things? What would they say if you told them that thirty years from now some of the names will have changed? No teacher would bring any of that messy stuff into the picture when their students were prepping for a test on bird identification. You don't learn about the messy stuff until years later, when you start exploring what's behind things like Linnaean taxonomy and the periodic table and other constructs we take for granted. Sometimes there are juicy stories about why we frame things the way we do, and how we almost framed them differently, but somebody said something to somebody at some party, and everything changed. Things are never as simple as they appear when you are first learning the names of things.

When you have got as much as you can out of thinking about organization and self-organization by using the things-in-boxes frameworks - and you can tell when you have reached that point, because you start asking more questions about the boxes than about the things inside them - it is time to move on to a more nuanced way of thinking. After you cross that threshold, you begin to explore organization and self-organization at a completely different level. You see patterns that are both complex and complicated - and usually, not in the way you expected them to come together. You see how these forces help and hinder and seed and destroy each other.

from flickr, by Alfred Grupstra
As you begin to explore the nuances of intermingling and interaction, you realize just how far down these things go - to the extent that simple words like "system" and "adaptive" start to seem ridiculous to you. The categories and boundaries and zones you once saw as canonical morph from certainties into contingencies. You see many new things: influxes, backflows, pockets, flourishes, flashes, disasters, revolutions, realignments, black holes, bright lights, peace, joy.

~ ~ ~

When you reach that second level of thinking about organization and self-organization, it's time to use the Confluence framework, or another of the continuous frameworks, such as ... such as ... I don't think there are any other continuous frameworks. Dee Hock's chaordic concept and Manuel DeLanda's ideas on meshwork and hierarchy talk about blending, but they don't have sensemaking frameworks, in the sense of diagrams and exercises people can use to think with them. I know that Ralph Stacey talks about blending (somewhere, I'm not sure where), but his Agreement-Certainty matrix describes a priori bounded spaces, and seems to be used in that way.

I wonder why I have never seen another continuous sensemaking framework for organization and self-organization. (I wonder why I have never noticed this before.) Either there is nothing like the Confluence framework, or something like it exists and I am unaware of it. If anybody knows of a sensemaking framework that helps people describe the intermingling and interaction of organization and self-organization in a space that has no boundaries marked out in advance, please tell me.

Because this is a problem. It is either a weakness in my awareness about what is available or a weakness in our collective support of sensemaking. There should be more stepping stones at this level than just one. The people who build sensemaking frameworks and the people who use them are like authors and readers of novels: every framework has its thinkers and every thinker has their frameworks. There should be a variety of frameworks available at the continuous level, just as there are at the discrete level. If you know of another such framework, please tell me. If you don't, somebody build one! We need more tools of this type.

To be clear, however, I do not believe that everyone - that anyone - needs to use the Confluence framework to make sense of the ways in which organization and self-organization intermingle and interact, either in general or with respect to any situation or topic. Nobody needs a sensemaking framework to make sense of things; they can just make sense of things. Frameworks are playthings, not gatekeepers. They are unnecessary and insufficient. Never let anybody tell you that you can't think without the thing they made. You don't need my framework; you don't need anybody's framework. You've already got what you need: that squishy thing in your head.

~ ~ ~

What do I have to say about the term "multi-ontology"? I'd say it is useful, but again, I don't think it's sufficient. If you think about ontologies that are commonly contrasted, such as Western and Eastern, the global North and South, or indigenous and non-indigenous peoples, seeing these worldviews as simply "different" is a simplistic way of looking at them. In reality, different ways of seeing the world are not so easily separated. They intermingle and interact. The term "multi-ontology" is itself a putting-things-in-boxes term, and as such, I think it is best used as a stepping stone to deeper exploration.

I spent a few minutes today looking up "the epistemology of ontology" and found some people talking about it. I particularly liked what Seth Miller said in a blog post: "the ontology of epistemology is the epistemology of ontology." Meaning, when you reach into the roots of ontology you find epistemology, and vice versa. The two phenomena intermingle and interact.

~ ~ ~

from flickr, by davidgsteadman
One of the reasons I like the metaphor of clouds, and the exercise of visualizing them on the space defined by the Confluence framework, is that clouds are useful metaphors for ontologies.

This is true for several reasons. A cloud is both an object and a process. Where its boundaries lie depends on how you look at them. Its behavior is partially observable and partially predictable, but never sure or certain. It can sometimes be placed into a category, but only temporarily and provisionally. It has a history: it is born, grows, and dies. A cloud might be two clouds for a while, and then one cloud again, but in a different way.

Another useful thing about clouds is that they are used to perceive and understand other forces - air masses - that lie unseen in and around them. In the same way, ontologies, or ways of seeing the world, can be used to perceive and understand people and the lives they lead. If I tell you some aspect of my understanding about the world, some element of my personal ontology, you have learned about me in the same way that a meteorologist looking at the clouds in the sky has learned about the masses of air moving overhead. 

When people represent perspectives as clouds on Confluence space, they can talk about how those clouds intermingle and interact. They can ask questions like:
  • How are these clouds distinguished? Where do they overlap? Where are they far apart? Where do they bleed into each other? Where is it hard to tell which is which? Where is it easy to tell? 
  • How do these clouds influence each other? Are there places where they cooperate? Compete? Do both at once? Do they have indirect influences? Through what? Where and when?
  • How have these clouds changed over time? What has their history been like, considered individually and together? How might they change in the future? What would that mean?
I'm sure I could write more such cloud-exploring questions given more time, but I would like to post this today. Maybe I'll come back to it later.

~ ~ ~

Now I'll ask the question I ask myself about everything I say and think and write: What if I'm wrong? What if nobody "needs" to move to "a higher level" in their thinking about organization and self-organization, with or without the aid of a framework? Nobody could possibly deny that these things intermingle and interact, but I can see someone saying that there is no point in exploring such nuances, that I'm asking people to waste their time counting angels on the heads of pins. Sure, I can see that perspective. It's not my perspective - my view is that there is so much in here, if you just open the door and come in. But I can definitely see the point of view that the things I like to think about - the intermingling and the interaction - are not that important, that the broad expanses of - No. There are no broad expanses. In human life there is no non-intermingled organization or self-organization. There just isn't. Whenever you find one thing you find the other, and either you pay attention to that or you don't. And you should. Eventually, when you are ready, you should.

Probably the best thing is to go by the 80/20 rule. You can probably achieve about eighty percent of what you would like to achieve in thinking about complexity by thinking about things in boxes. If you're already there and you want more, open the door and walk into the space where things are not so easily put into boxes, into the space of nebular ontology.

At my house right now, the air is warm in the daytime, but there's still quite a bit of snow on the ground, so every day we get these beautiful, serene ground fogs that thin out from bottom to top. I look for these ground fogs every year. I watch them as they wander through the trees, and I think about intermingling and interaction.

~ ~ ~

Now a few questions I think you might be asking. First, does my little come-to-the-clouds sermon mean that I think people should use things-in-boxes frameworks for a while, then abandon them? Absolutely not. I didn't stop calling red-winged blackbirds red-winged blackbirds after I explored the species concept more deeply. But I did see species differently than I did before. I stopped caring as much about whether I got every identification right, and I stopped thinking it mattered if other people did. The more I learned about all of the various categories in biology, the more they stopped being biology to me. They became things people built to study biology, and biology itself became something bigger, weirder, and harder to explain - but a lot more exciting. I didn't stop using the categories I had learned before, but I stopped using them blindly, in every situation, without thinking about what they meant in context.

I've seen a lot of people go through changes like that as they have learned more about a subject (including stories, and including complexity). Everyone seems to start out by repeating terms from the dominant categories, memorizing them, holding onto them like totems, applying them to everything reflexively, without knowing why. Then, at some point, if they keep learning, they start asking more questions about the categories than about the things in them, and they start looking for the next stepping stone. Later, when they come back to the same categories, they treat them as resources, not templates. They use them like master chefs use cookbooks - to dip into for ideas, to step into and out of, to mix and match. But never again do they see those categories as fully capable of representing the subject they have learned about.

So, if you like the stepping stone you're standing on right now, and you don't want to leave it, I can give you this bit of encouragement: When you go back to a stepping stone after you've gone beyond it, it's an even better tool, because now you can use it in ways you never could before. Now you know why you are using it, and that changes everything. Now you use it when it's useful, and you don't use it when it's not useful, and you know which is which. You might even know how (and when) to use parts or aspects of it blended with parts or aspects of other tools, in a sort of cloud-like assemblage of ideas that intermingle and interact.

~ ~ ~

Another question you might have is: Does the Confluence framework wobble? It does for me. I keep thinking that the process needs more support at the point where you discover features in your populated landscape. Some feature categories might be useful there. Also, I think the model might want to grow a bud, with more sub-frameworks in the area where organization and self-organization come together most strongly (that is, in addition to the mixed sub-framework).

Does the Confluence framework wobble for other people? I have no idea. That's because it isn't done growing yet. It's half a framework and half an idea for a framework. It needs more testing, and I need more feedback on it. This is partly my fault. I'm like one of those rock bands that refuse to play their old songs. If I don't have something new to say about something, you are not likely to hear from me about it. And I've always got lots of new ideas I don't have time to turn into real things.

Still, I have thought about introducing Confluence more formally, in a paper published in a peer-reviewed journal. I haven't done that yet because the framework is not ready for it, not without more testing. I'm not going to describe a thing and how to use it until I can truthfully say that people have used it and found it useful. Sure, lots of people have used the simplest form of the framework, because it was part of another framework for a while (though it's a fair question how many people actually ever knew about or used it in that form). But I want to see if Confluence works now, the way I want it to work now. And if it doesn't work, I want to make it better.

Here's an idea. If you can find at least six people who want to spend a few hours in a room using the Confluence framework to think together about some situation or problem, I'll coach you as you plan and prepare for the session (over Skype), as long as you promise to tell me what happened and allow me to tell the story of what you did in a paper. (I'll show you what I plan to write and give you the chance to change it.) You do have to use the whole framework, not just the first, simplest bit. Let's say that the first five people who send me a note about this (and then actually do it, like within the next year) will get my help doing this for free. You'll help me and I'll help you. Plus, it'll be fun.

Monday, January 14, 2019

More Better NarraFirma

NarraFirma has been getting some attention lately. I received development commissions from several NarraFirma users, so I've been working on the software since early November, and I've bundled together all the things people wanted into a new version, 1.2.0.

The blog post on the NarraFirma web site goes into great detail about all the new features and changes. I'll just give you a quick overview here.

Easier importing

You should now find it easier to work with NarraFirma alongside such survey systems as SurveyMonkey, LimeSurvey, Google Forms, and the Open Data Kit.

To begin with, you can import a wider variety of question formats, and there are many new import options. You can also specify all of your import options in NarraFirma itself, without writing a CSV specification file.


And there is a new pre-import check, so you can see exactly what is being read correctly and incorrectly.


I also wrote a new 18-page Guide to importing data, which explains everything I know about it.

More better graphs

One of NarraFirma's strengths is its automatic combination of questions. Every possible combination of this-versus-that is generated for you. So when a NarraFirma user suggested a combination I had never thought of before, I just had to build it. It's a choice by choice by scale graph, a contingency-histogram chart, which looks like this:



The little pseudo-histograms in the boxes help you compare the distribution of a scale question (like: Did the people in this story get what they needed?) among subsets formed by the conjunction of two choice questions (like: the people in the story needed respect, and the storyteller felt disappointed about the story). 
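The grouping behind this kind of chart can be sketched in a few lines. This is only an illustration of the idea, not NarraFirma's code, and the question names and story records here are invented:

```python
from collections import defaultdict

# Hypothetical story records: two choice questions and one scale question.
stories = [
    {"need": "respect", "feeling": "disappointed", "got_needed": 2},
    {"need": "respect", "feeling": "disappointed", "got_needed": 3},
    {"need": "respect", "feeling": "hopeful",      "got_needed": 8},
    {"need": "safety",  "feeling": "disappointed", "got_needed": 4},
    {"need": "safety",  "feeling": "hopeful",      "got_needed": 9},
    {"need": "safety",  "feeling": "hopeful",      "got_needed": 7},
]

# One cell per conjunction of the two choice answers; each cell collects
# the scale values whose distribution a mini-histogram would show.
cells = defaultdict(list)
for s in stories:
    cells[(s["need"], s["feeling"])].append(s["got_needed"])

for (need, feeling), values in sorted(cells.items()):
    bar = "#" * len(values)  # crude stand-in for the pseudo-histogram
    print(f"{need:8} x {feeling:12} {bar}  values={values}")
```

Each cell of the real chart is one of these (choice, choice) groups, with the scale values binned into a tiny histogram instead of listed.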

There's also a new "Story length" question, so you can see if people more disposed to verbosity had anything different to say.

Also, you can now write out the graphs in your catalysis reports using SVG format. SVG graphs come out sharper than the old PNG option (which is still available) because they are vector-based. You can also style SVG graphs using CSS classes (even after you have saved the HTML report -- that is, outside NarraFirma). 

Also, you can now print an observations-only catalysis report. I added this specifically to support people who want to do their catalysis in a group workshop.

Better ways to clean house

NarraFirma uses what is called a journaling datastore. We chose this option because we wanted to make sure your data would be stable and difficult to corrupt or destroy. However, a journaling datastore is a little cluttery, in the sense that it's not that easy to delete things when you want to. Fortunately, with a bit of arm-twisting, I got my husband (who wrote the lower-level part of NarraFirma) to help me write some code that will help you clean house. You can read about the new housecleaning options in the blog post.
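For the curious: a journaling datastore appends every change to a log instead of overwriting data in place, and reconstructs the current state by replaying the log. That is why old entries are safe from corruption but pile up until you deliberately compact them. A toy sketch of the idea (purely illustrative, nothing like NarraFirma's actual implementation):

```python
import json

class Journal:
    """Toy append-only datastore: writes append, reads replay the log."""

    def __init__(self):
        self.log = []  # on disk this would be a file of JSON lines

    def set(self, key, value):
        # Never overwrite; just append another entry.
        self.log.append(json.dumps({"key": key, "value": value}))

    def state(self):
        # Replay the whole log; later entries win.
        current = {}
        for line in self.log:
            entry = json.loads(line)
            current[entry["key"]] = entry["value"]
        return current

    def compact(self):
        # "Housecleaning": rewrite the log keeping only the latest entries.
        self.log = [json.dumps({"key": k, "value": v})
                    for k, v in self.state().items()]

j = Journal()
j.set("title", "draft")
j.set("title", "final")       # the old value stays in the log...
print(len(j.log), j.state())  # ...but replay sees only the latest: 2 entries
j.compact()
print(len(j.log), j.state())  # after compaction: 1 entry, same state
```

The trade-off is exactly the one described above: the append-only log makes accidental destruction hard, at the price of clutter that only an explicit compaction step can remove.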

Usability improvements

In general, while I worked on this update to NarraFirma, I remembered all the things people said to me about it over the past year or so. I remembered their confusions and gripes, and I tried to address them all. So you will find many little improvements to explanations and placements of things. 

Here's one example: "quick links" on the Home page to the major things you created in your project (questions, forms, reports). 


I've also reduced the irritation factor in several places. To give just a few examples:
  • You can set a story count lower limit for drawing graphs, so you don't have to wade through graphs that have barely any stories in them.
  • You can now sort questions on your story cards by the original survey order as well as alphabetically.
  • Buttons for import and export have been moved to better places (I noticed that people forgot where they were).
  • In the node.js version, site administrators now have a link to the site-admin page right on the Home page of every project.
And so on. There's lots more stuff like that. 

Finally, I would like to thank everybody who paid me to improve NarraFirma. We have all benefited from your generosity. Here's to the future development and growth of NarraFirma!

Thursday, December 27, 2018

A little bit of history repeating

Wikimedia Commons,  Hubert Berberich
So a few weeks ago, I was looking for something on my "old stuff" hard drive, and I ran into some essays I wrote around 1988. That was back when I had my IBM portable PC. It weighed thirty pounds and had a little orange-text screen, and it was a pretty good heater if you sat in bed with it on your lap (and legs).

Anyway, as I said, the other day, instead of looking for whatever I was supposed to be looking for, I started reading these old essays. And I noticed something strange about them. They were written around the time I discovered complexity theory and roughly a decade before I learned anything about stories. At the time I thought I'd be an ethologist (animal behaviorist) forever, and I gave little thought to my own species. But here's the strange thing. Those old essays sound a lot like the things I've been writing lately about participatory narrative inquiry. I think you might be interested in hearing about that.

I've always thought PNI started during my two years at IBM Research (first during my explorations of questions about stories, and then when Neal Keller and I created what we called the "Narrative Insight Method"), then developed further through my research and project work with Dave Snowden and Sharon Darwent. But I wrote those essays ten years before IBM. I wasn't thinking about stories, or even people, in 1988. I was thinking, however, about how organisms of every species look back on their experiences and make decisions.

I'll show you some of the writing so you can judge for yourself, but if this connection is real, it means that at least some of the roots of PNI go back not twenty but thirty years ago, to the days when complexity theory changed the way I thought about social behavior. And if that's true, it raises the possibility that PNI developed because it is the inevitable result of taking complexity into account when considering the behavior of social species such as our own.

On hierarchy as help and hindrance

This is the first of the three essays I think you might like to read. It came directly out of a feverish run through some books and articles about complexity and chaos.


Multiplicity. Even the word is too long. Have you ever sat very still and thought about how many there are of everything? Try it for a while — but only for a little while, because it's dangerous. You can go in either direction; the confusion is marvelous in both the infinite and the infinitesimal. Think big: towns, nations, worlds, galaxies. Think small: bodies, molecules, electrons, empty whizzing space. Space in either direction.

It's a paradoxical result: the contemplation of complexity leads to the homogeneity of the void. Everything there is turns out to be only a small part of everything there isn't. If the universe were made of numbers, most of them would be zero.

So here we are, a bundle of neurons and some other cells, in the middle of this complex void. We, among all the animals, have the ability to see outside our native scale to other measures of time and space. How do we cope? How do we read the mail or shop for food without suddenly, paralyzingly, confronting the enormity of it all?

The answer lies in a special feature of the human mind that seems to have evolved specifically to deal with the burden of awareness: hierarchy. We organize things. We divide time into centuries, years, seconds. We divide space into light years, kilometers, microns. Think about anything we experience, and we will have arranged it hierarchically. What is a child's first reaction to a number of blocks? To pile them up. To make, not a group of equal components, but a smaller number of nested units composed of those components. In hierarchy lies safety.

It is precisely for this reason that it is necessary, at times, to put away the crutch of hierarchy and try to stand unaided on the shifting sands of complexity. Maintaining an awareness of other-than-categorical connections among elements of disparate origin requires that we — sometimes, temporarily — place them all on the same level. To discover similarity in the shape of a leaf, a differential equation, and the swoop of a flute, we must suspend our hierarchical definitions and allow new connections to leap up from a flat sea of perception.

As a visual image, I like to shape each piece of information into a tiny sand grain in a flat wide desert. All are equal; all contain only the crucial property of being observed. Then experience, intuition, and thought, like a warm wind, catch up these grains and form them into new and ephemeral patterns of truth.

Letting the mind loose in this way, by consciously breaking down some of the barriers that subdivide our experience, allows our integrative genius to work on the raw material of reality and produce exciting results.



Grand Canyon, Wikimedia Commons, Pescaiolo
I remember the image that was foremost in my mind when I wrote that essay: the Grand Canyon (in the Western US). I spent a lot of time in those years thinking about making sense of complexity, and I kept going back in my mind to the times I had visited the Grand Canyon and had been stopped in my tracks by its complexity.

How is it possible, I wondered, to live in full awareness of the complexity in the universe? In its enormity, its detail, its mesmerizing intricacy, its worlds within worlds? Must we become numb and stupid to carry on in the face of such wonder? Can we?

And I remember how I solved the conundrum — or rather, how the solution came about, because it was more of a reception than a creation. One day, in the midst of this dilemma, I was eating a sandwich while contemplating the blades of grass in a field (another Grand Canyon) when the answer suddenly came to me: The elements. The alphabet. The types and categories of things. In the Bible, Adam gives names to the types and categories of animals. Why does he do that? Because he has to figure out some way to live in a sea of complexity. So do we.

We cannot cope with an inconceivable number of things, but we can cope with an inconceivable number of combinations of a conceivable number of things. Focusing on the classes instead of the instantiations makes it possible to live life without being overcome with awe. The hierarchies we create are the fictions we need to stop our over-developed awareness from damaging our sanity. From this perspective, what Plato was after was not truth itself, but fiction whose purpose is to help us cope with truth.

Just look at how our hierarchies help us. The alphabet shapes the wild sounds we make and hear into neat, predictable groupings. The periodic table (and the types and categories of stones) makes the Grand Canyon not only bearable, but enjoyable. Biological nomenclature corrals the countless hordes of beasts and vermin into compact species, nested inside genera, families, orders, phyla, and kingdoms. The laws of physics transform the shocking realities of physical life — rushing, falling, colliding — into manageable formulas. Wherever we find unpredictable complexity, we build predictable, complicated maps to help us make our way through it. Without those maps we would be lost.

But the solution of complication comes with a price, and the price is amnesia. At the start, our maps are conscious creations, and we discuss and experiment as we refine them to suit our needs. But eventually, inevitably, we forget that our structures are fictions and our conditions are choices, and our maps become our prisons. Every map we build becomes the territory it once represented, and only in the places where it has worn bare can you see the reality that still lies beneath it.

How this idea influenced PNI

The fingerprints of this idea are all over participatory narrative inquiry. To begin with, all PNI projects start by suspending assumptions about "the way things are" and preparing to listen to the way things really are, in the minds of the people who have lived through whatever it is we want to think about. This is nothing less than the deliberate destruction of hierarchy — temporarily, thoughtfully, and for a reason. We roll up the map and put it aside, and we walk unaided on the ground.

I have said before that when you listen to someone telling you a story, you have to listen past what you thought they were going to say, past what you want them to say, and past what they ought to say, until you get to what they are actually saying. In practice, this means that in PNI we don't address research questions or gather information or question informants or apply instruments. We start conversations, and we listen. We let people choose the experiences they want to tell us about, and we invite them to reflect on those experiences with us. The way we set up the context for the conversation, the questions we ask, and the way we ask them — all of these things work together to push past the structures of our lives to the reality that lies beneath them.

We are not, of course, so deluded as to believe that we succeed in this entirely. Every PNI project both succeeds and fails when it comes to getting to ground truth. But we try and we learn. I learn something new about engaging participants and helping them delve into the insights of their experiences on every project I work on; and so do all of us who are doing PNI work.

The idea of temporarily and purposefully dissolving structure comes up again in PNI's technique of narrative catalysis, where we look at patterns in the questions people answered about their stories. One of the rules of catalysis is to generate and consider deliberately competing interpretations of each pattern we find. As a result, catalysis never generates answers or findings, but it always generates questions, food for thought, material for making sense of the map in relation to what lies beneath it.

Sensemaking is the place in PNI where the map and the land come into the strongest interaction. It is in sensemaking that the map is rolled out again, but (to extend the metaphor) with a group of engaged people standing under it, actively mediating between the map and the land it represents, negotiating, adjusting, rewriting. When PNI works well, the map emerges from sensemaking new-made, better, more capable of supporting life — until the next time it needs updating.

So you could say that PNI is a solution to the solution of life in a complex world. It's that little spot of yin in the yang that makes the yang survivable.

Is PNI unique in this? Of course not. Lots of methods and approaches do similar things for similar reasons. All the same, I find it fascinating to realize that the roots of PNI stretch further back than I thought they did, and further out than social science or psychology or, really, anything human. I knew nothing about sensemaking (in the way Weick and Dervin wrote about it) back then; but coming from the study of social behavior in a variety of species, I arrived at a similar place. That's just . . . cool.

On optimality and incomplete information

Here's the second essay. This one was from a little later, when I was over having my mind blown by complexity theory and was starting to use it to hammer away at foraging theory (the particular part of ethology I found most interesting).



When biologists speak of the use of information by animals, they usually consider the question of what an animal should optimally do given that its information is less than perfect. In my opinion, the study of "imperfect information," as it is called, has been marred by two problems.

First, information has always been assumed to be about the environment. But if one considers the totality of information that could possibly be used to make decisions, it also includes information about the internal state of the individual making a decision and information about how the environment affects the future internal state of the individual.

Second, studies of imperfect information have a hidden assumption of awareness that may or may not be realistic. They ask the question of what an animal should do based on its knowledge that its knowledge is incomplete. For example, Stephens and Krebs (1986) ask, "How should a long-term rate maximizer treat recognizable types when it knows that they are divided into indistinguishable subtypes?"

Do we have any proof that animals are at all aware that the information they hold is incomplete? Is not the knowledge of the inadequacy of one's knowledge a type of information in itself, a type of information that we cannot assume animals have access to? I would hold that animals always act as if they had complete information, since they cannot know that their information is incomplete. The question then becomes one of constrained optimization within the information base available.

More interestingly, the behavior of animals acting optimally with incomplete information is then removed from its promise of being optimal in the overall sense, in the sense that the animal always performs the correct behavior for the conditions at hand. This should more closely approximate real behavior than theories that assume knowledge of ignorance. In other words, knowing that you know nothing is knowing something, and this is something that we cannot assume animals know.

If you look at incomplete information in this way, it is a lot simpler. Optimization just becomes optimization under a blanket of uncertainty, and is no longer especially correct or adaptive. Maximally optimal organisms might still make wrong decisions based on incomplete information, because optimality and infallibility might not always be perfectly linked. This means that we should watch not what should evolve, but what does evolve given the amount of information available (including information about what information is available).

Which leads into my next point: that the value of increasing information is not necessarily monotonically increasing. And that there are types of information we don't consider, such as internal information (where I am coming from) and relational information (how it all fits together).

It is a point of constraints. Evolution optimizes behavior inside of the constraints of what an animal can possibly know. But natural selection doesn't know that animals don't know everything. Obviously any animals that are aware of their inadequacy will win out over others that always think they are right; caution should win. But how does caution evolve? If there is a population of animals eating two prey items which they cannot distinguish (say noxious models and good mimics), and one organism evolves that knows that it cannot know which are models and which are mimics, then by definition it knows that models and mimics exist, which is distinguishing between models and mimics. Right?

Or if a population exists which samples from a distribution of prey energy amounts, and one individual evolves that knows that its sample is not completely representative of the universe of prey types, then does it not know something about the universe of prey types (if only that it is or is not adequately represented by the sample) that it by definition cannot know?

In statistics, we take a sample of a universe of data and hold it to be representative. We know that it should be representative because we have some idea of the larger universe from which we selected it. The point is that we have selected the sample. I don't think animals select a sample. I think they only have access to a sample.

Animals live local lives. They cannot know that the prey types they encounter are only one percent of the prey in a particular forest, or 0.00009% of all the animals of that species. They can only see what is given to them. Therefore they are not aware that any more exists. To them, the sample is the universe, and they base their decisions on it. They may have some uncertainty, but they cannot quantify it as we do when we know that our sample is 9% of the universe. What way of telling the size of the universe do animals have? None. Perhaps they have a rough idea that 90 bugs is not a good sample, but does not the number of bugs change all the time?



That second essay ends a little abruptly, doesn't it? I don't remember why. Anyway, that idea grew into my master's thesis, which would have grown into a Ph.D. dissertation if the department I was in at the time had been willing to consider simulation modeling a legitimate form of research. It was not, and I left science in a huff. (But I have written about the idea a few times over the years, and that makes me happy, so I'm good.)

In case my primary argument in that essay was not clear, I'll put it more simply: Never assume anyone knows what they don't know. That sounds obvious, but it's a hard habit to break.

Funny story: around the time I wrote that second essay, at a reception after a talk, I had the opportunity to ask John Krebs (of Stephens and Krebs foraging theory fame) a question about foraging theory. I have spent decades puzzling over the conversation, which went like this:
Me: What do you think about the idea that foraging theory anthropomorphizes animal knowledge and information use? I think there might be things we're not seeing because we don't think like other species do. I wonder what would happen if we approached information from a different point of view, from their point of view, as if we thought the way they think.
Krebs: How long have you been in graduate school?
Me: Two years.
Krebs: You'll get over it.
I still can't make out what he meant by that. Did he mean that ethologists don't anthropomorphize animal knowledge and information use? Or that they do, but they can't do anything about it? Or that nobody cares? Or that I should shut up and do as I was told? I still don't know.

But I wasn't the only one thinking about the issue. In the years after that, I attended several lectures on research that suspended assumptions about the way animals thought, and as a result, discovered some surprising things. In the study I remember best, researchers took birds of a species that was famous for having no courtship ritual whatsoever, filmed them interacting, and slowed down the film, to find an elaborate courtship ritual playing out so quickly that the human brain cannot see it happening. I remember being so excited during that talk that I could barely sit still, because it confirmed what I had been thinking about the way we went about studying animals and making claims about their behavior.

Another study I remember proved the now-well-known fact that putting colored bands on birds' legs and studying their social relations is a bad idea, because having a colored band on your leg changes your social standing. That seems obvious now, but it was quite a revelation at the time. Another study revealed that some male fish mate by pretending to be female fish. This pattern was hidden in plain sight for decades, because everyone who saw it assumed it must be a misunderstanding or a fluke. Then it was elephants communicating in wavelengths we can't hear, and plants sending messages in wavelengths we can't see, and the surprises just kept coming. I haven't exactly kept up with new developments in the field of ethology, but the little I have seen has given me hope that researchers are continuing to explore animal behavior in new and creative ways.

How these ideas influenced PNI

What does this essay have to do with participatory narrative inquiry? Lots. I can see influences on the development of PNI that came from each of the three points I made (about the limits of knowledge, the types of knowledge, and the value of information).

PNI and the limits of knowledge

You've probably heard about a thing in psychology called the Dunning-Kruger effect, where people become over-confident in an area because they are unaware of their ignorance. Back when I wrote that second essay, I was trying to express my feeling that ethologists had developed two simultaneous manifestations of a relational Dunning-Kruger effect, thus:
  1. The normal, self-reflective version, in which they overestimated their own knowledge about the knowledge of their study subjects, plus
  2. A vicarious version, in which they attributed knowledge of the limits of knowledge to their study subjects, when their study subjects had no such knowledge about the limits of their own knowledge.
What I didn't realize until recently is that (a) people don't just do this with respect to animals; and (b) I've never stopped thinking about the problem.

Let's think about animals for a second. Animals almost certainly don't sit around worrying all day about how much they know and how much they don't know. They know what they know, and they assume that's all there is to know. As far as we can tell, we are the only organisms that think about how much we don't know. So any random human being is likely to know more about the limits of their knowledge than any random dog or cat. But that doesn't mean we all know a lot about what we know and don't know; it just means we all know something about it.

I would guess that there is a normal distribution of awareness about knowledge limits. Some small number of people are probably aware of their ignorance to the point that they can take it into account in their decision making. The majority of us are dimly aware of the boundaries of our understanding, to the extent that we can apply rules of thumb and margins of error when we feel vaguely under-confident. And another small number of people are probably almost as clueless about the limits of their knowledge as any random dog or cat.

Figuring out how much any given person knows about how much they know is not an easy task, even when you can talk to them. How do you ask someone how much they don't know about something? You can test them to find out how much they know, but if you want them to estimate how much they don't know, don't you have to tell them the scope of the topic before they can make an estimate? And then don't they know more than they did? And then do you have to describe what's beyond that so they can make a new estimate? It's like trying to count the number of weeds in a pond when the only way you have of counting the number of weeds causes more weeds to grow.

So I'm not sure the question is that much easier to answer with people than it is with animals. But that doesn't mean we don't need to keep trying to answer the question; in fact, we need to answer it even more urgently with respect to each other. As social animals, we spend a lot of mental energy trying to figure out what other people need and how they will respond to the things we do and say. Everybody needs to do that in daily life, but when we are in a position to help people, we need to do it even more. If we think people know more about their needs and their limitations than they actually do, we are apt to predict their needs and responses wrongly, and we might end up hurting people instead of helping them.

Sometimes I think people give up trying and simply pretend they know what other people know about the limits of their knowledge. And then when someone asks them how they know that, they say things like "You'll get over it." Not getting over it — by actively pursuing answers to that question — is one of the goals of participatory narrative inquiry. In a sense, PNI came out of thirty years of my not getting over my original desire to make sense of perspectives that are different from my own.

Ignaz Semmelweis, Wikimedia Commons, Eugen Doby
A tragic example of what happens when you make erroneous assumptions about other people's knowledge of their limitations can be found in the story of Ignaz Semmelweis, the nineteenth-century doctor who famously tried (and failed) to convince other doctors to wash their hands after dissecting corpses and before treating pregnant women. (Actually, doctors were washing their hands, but with ordinary soap, which did not kill enough streptococcal bacteria to prevent subsequent infection.)

According to Wikipedia,
Semmelweis described desperate women begging on their knees not to be admitted to the First Clinic [where physicians also examined cadavers; in the Second Clinic, midwives did not, and the death rate was much lower]. Some women even preferred to give birth in the streets, pretending to have given sudden birth en route to the hospital (a practice known as street births), which meant they would still qualify for the child care benefits without having been admitted to the clinic. (Wikipedia)
Semmelweis wrote a series of articles advancing the theory that "cadaverous particles" were the sole cause of patient infections. His theory was attacked on many grounds, some reasonable, some questionable, and some simply prejudiced (such as the belief that his theory arose solely from his Catholic faith). He did not react well to these criticisms, becoming more and more combative, drinking heavily, and calling doctors who refused to change their practices "murderers." At the age of 42, Semmelweis was tricked into entering an insane asylum, held there against his will, and severely beaten, dying weeks later from his injuries. Only with the discovery of germ theory two decades later was he proven right — not as to his explanation of his findings, but as to his belief that lives could be saved by the measures he tried to promote.

The widespread rejection among Semmelweis' contemporaries of what today seems like common-sense advice has often been used as an example of blind perseverance in the face of contradictory evidence. But I'm not as interested in how other doctors reacted to Semmelweis' advice as I am in his failure to understand and adapt to their needs and limitations.

Ignaz Semmelweis was a man who cared deeply about his patients. He was "severely troubled" by the high incidence of puerperal fever in the wards he administered, writing that it "made me so miserable that life seemed worthless." These strong feelings set him apart from many doctors of the time; and later, his unique experiences set him even further apart. The death of a close friend and colleague, Jakob Kolletschka, forcibly and painfully challenged Semmelweis' views on infections and autopsies. He recounts the incident thus:
I was immediately overwhelmed by the sad news that Professor Kolletschka, whom I greatly admired, had died. . . . Kolletschka, Professor of Forensic Medicine, often conducted autopsies for legal purposes in the company of students. During one such exercise, his finger was pricked by a student with the same knife that was being used in the autopsy. . . . [H]e died of bilateral pleurisy, pericarditis, peritonitis, and meningitis. . . . Day and night I was haunted by the image of Kolletschka's disease and was forced to recognize, ever more decisively, that the disease from which Kolletschka died was identical to that from which so many maternity patients died. (Wikipedia)
Notice the words Semmelweis uses here. He was forced to recognize the connection, and ever more decisively, meaning that he must have revisited the tragedy over and over, as we do when someone close to us dies. Even his choice of the word haunted implies repetition: a place haunted by a ghost is said to be "frequented," that is, visited frequently. In this light, Semmelweis seems less a visionary than a man tormented by the consequences of his limited vision. If he had never experienced such a deep despair over his inability to make sense of the patterns he saw, he might have been as reluctant to examine the limits of his knowledge as the doctors he tried to convince.

It seems to me that Semmelweis' failure might have sprung in part from his inability to understand the impact of this experience on his awareness — and the impact of the lack of such an experience in the careers of his contemporaries. Consider the fact that one doctor Semmelweis did convince had a similar experience to his own:
Professor Gustav Adolf Michaelis from a maternity institution in Kiel replied positively to Semmelweis' suggestions — eventually he committed suicide, however, because he felt responsible for the death of his own cousin, whom he had examined after she gave birth. (Wikipedia)
Semmelweis seems to have assumed that other doctors were as haunted by their ignorance as he was; but it sounds like most of them were not. The theory of the four humours was in full force at that time, and most doctors probably felt no need to venture past its readily available explanations. They were satisfied with the state of their knowledge, saw no gulf beyond it, and were content to carry on as they had always done.

I wonder if Semmelweis would have gained more traction if, for example, he had refrained from posing any theory at all, and suggested changes to practice solely on the basis of the evidence he had collected. After all, he could have proposed his changes without attacking the predominant medical theories of the day. Neither he nor anyone else at the time could explain why the washing of hands with a chlorinated lime solution greatly reduced the incidence of infection in maternity wards; but the fact that it did reduce the incidence of infection was not in dispute.

As I said above, such an inability to imagine the experiences and mindsets of other people, based on erroneous assumptions about the nature and limitations of their knowledge, is something we directly seek to address and correct when we carry out projects in participatory narrative inquiry.

How do we do this? We ask people to tell us what happened to them, and we ask them questions about their knowledge and awareness during the events of the story. We ask questions like these:
  • How predictable was the outcome of this story? Did you know what was going to happen next?
  • What in this story surprised you? What do you think would surprise other people about it?
  • If this story had happened ten years ago, how do you think it would have come out then? What about fifty years ago? What about in another location?
  • What could have changed the outcome of this story? What makes you say that?
  • What did the people in this story need? Did they get it? Who or what helped them get it? Who or what hindered them?
  • Does this sort of thing happen to you all the time, or is it rare? What about to other people you know? What about to people you don't know? Can you guess?
  • If you could go back in time to the start of this story, what would you say or do to help it turn out differently? What would you avoid changing?
The answers to these questions help us understand not only what happened to people but also what they know and don't know about it. Sometimes the most illuminating answer is "I don't know." And we sometimes ask follow-up questions, like:
  • Why did you say "I don't know"? 
  • What does that mean to you, that you didn't know?
  • What would you like to know?
  • How do you think you could find out? 
People facing situations like the one Ignaz Semmelweis faced can ask questions like these to understand (as much as anyone can) the perspectives, needs, and limitations of those they are trying to help.

PNI and the not-always-increasing value of increasing information

Now let's get back to the second point in the second essay: the value of increasing information. When I wrote that essay, I was concerned about an assumption I found distributed throughout the scientific literature on foraging theory, which was that the value of increasing information increased monotonically. In all of the models and theoretical frameworks I read on information use, more information was assumed to be more optimal than less information. I didn't see why that should always be the case. In particular, I thought the assumption might be problematical in situations where individual choices are interlinked in a complex network of mutual influence.

So I wrote a computer simulation to find out whether "smarter" individuals with somewhat better information about density-dependent resources would always out-compete "dumber" individuals with less information. ("Density-dependent resources" are resources whose value to each individual depends on the number of other individuals drawing from it, like a bird feeder that holds the same amount of food whether five or fifty birds visit it.)

According to foraging theory, there was no point in writing such a simulation because the outcome could be predicted in advance; but I wrote it anyway, because I was curious. Surprisingly, the "smarter" simulated allele did not fixate (exclude all others) in the population. Rather, the two alleles kept returning to a roughly 75/25 ratio, representing (for that simulated situation) a "mixed evolutionarily stable strategy," that is, one in which a mixture of strategies is more optimal than any one pure strategy.

A lot of birds, Wikimedia Commons, NASA
It took me a while to figure out why this was happening. After I spent some time watching my simulated organisms make their decisions, I realized that what I was seeing made perfect sense. The smart individuals would find out exactly where the best food sources were and rush to them, only to find all the other smart individuals there dividing up the food. The stupid individuals would wander aimlessly from place to place. Most of the time they'd get nothing but the crumbs left over, but sometimes they'd find themselves feasting at a "bad" food site that was nevertheless better than the "good" sites the smart crowd was picking to pieces. After a while, I couldn't get the joke "nobody goes there anymore, it's too crowded" out of my mind.

The result I got was counter-intuitive to foraging theory because there was an inconvenient trough in the value of increasing information. The smart organisms knew that a food source was better, which was more than the stupid organisms knew; but they didn't know what all the other organisms were about to do. Thus their intermediate level of information was sometimes better and sometimes worse, such that the net value of the increase was not enough to eliminate the relative value of stupidity. Thus the greatest fitness, at the population level, was in a mixture of strategies, including some that had no obvious value on their own. (I should mention that the idea of an optimal mixture of strategies goes all the way back to Cournot's 1838 concept of a duopoly; but still, the idea was not commonly applied to foraging theory at the time I was thinking about it.)
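The dynamics I watched in that simulation are easy to recreate in miniature. Here is a minimal sketch, for illustration only (it is my reconstruction, not the original program, and the patch values, the even scattering of the "dumb" foragers, and the payoff-proportional update rule are all made-up assumptions): four patches of fixed quality, "smart" foragers who all rush to the best patch, "dumb" foragers who scatter evenly, and each patch's yield split among its occupants.

```python
# A toy reconstruction of the simulation (not the original program; the
# patch values and update rule are illustrative assumptions).
K = 4                     # number of food patches
quality = [10, 5, 3, 2]   # patch yields; patch 0 is the "good" site

def payoffs(p):
    """Expected per-capita intake for the smart fraction p and dumb fraction 1 - p."""
    occ0 = p + (1 - p) / K            # all smart foragers plus 1/K of the dumb ones
    w_smart = quality[0] / occ0       # smart foragers crowd the best patch together
    # A dumb forager lands on each patch with probability 1/K and shares its yield.
    w_dumb = (quality[0] / occ0) / K
    for q in quality[1:]:
        w_dumb += (q / ((1 - p) / K)) / K
    return w_smart, w_dumb

p = 0.5  # start with half the population smart
for _ in range(2000):
    w_smart, w_dumb = payoffs(p)
    mean = p * w_smart + (1 - p) * w_dumb
    p = p * w_smart / mean            # replicator update: payoff-proportional growth

print(p)  # settles at an interior mixture: neither strategy excludes the other
```

Run with these made-up numbers, the population settles at a stable interior mixture of smart and dumb foragers rather than fixating on either one. The exact ratio depends on the patch values, but the coexistence itself is the point: crowding at the "good" site erodes the value of the extra information, just as it did in my simulation.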

Now let's come back to participatory narrative inquiry. Situations in which complex interactions influence the options, choices, and behaviors of everyone involved are also situations in which PNI works best — and I now realize that this is probably not an accident. PNI is at its most useful at times when it seems like you know enough to come up with a viable solution, but you have been stymied by missing information you can't guess at. In fact, most PNI projects start from a situation in which, even though "everyone knows" what the problem is, prior attempts at solutions have shown the current level of knowledge to be insufficient. You could even say that the whole reason PNI exists is to compensate for troughs in the value of intermediate levels of information in complex situations.

That's why surprise is such an important part of PNI. I've noticed that on every PNI project, somebody is surprised by something. An assumption is overturned, a trough turns into a peak, and new options open up as a result. I've always found this to be profoundly satisfying, and now I know why.

PNI and types of information

The third thing that bothered me about foraging theory when I wrote that essay was how researchers used the word "information." Whenever people gave examples of information in the papers and books I read, it was nearly always about facts external to the organism: where food could be found, how much energy could be found in the food, weather conditions, and so on.

But that can't be the only information an organism needs, I thought. There must also be internal information, such as the organism's hunger or satiety, its health, its age, its reproductive state, and so on. An animal with excellent knowledge about its internal state should out-compete an animal with poor internal knowledge, right? But nobody seemed to be studying internal information, or even acknowledging that it existed.

Bird on branch, Wikimedia Commons, Mathew Schwartz

And there must also be another type of information, I thought: some idea of how all the other pieces of information fit together. I called this relational information. For example, if I am a tiny bird perched on a branch in mid-winter, I must know that I am in danger if I don't obtain enough food to replenish my fat stores to a certain extent. Such information may only be "known" at the level of an instinctual urge, but it should exist in some way, because it must stand behind the decisions animals make about how much energy to expend on foraging. Should I stay on the branch and conserve my heat, or should I swoop down in search of food? Without internal and relational information it's hard to make such a decision.

So I wondered why researchers never seemed to pay attention to either internal or relational information, even in theoretical considerations of animal behavior. My guess was that these types of information were so much harder to observe and control that people tended to ignore them. It's easy to vary the values and distributions of food sources and then watch what animals do in those situations, especially when you can see them evaluating the obvious differences between the food sources. Trying to figure out what animals know about their internal states and how the world works is a more daunting challenge. But that doesn't mean those types of information don't exist or don't matter.

Now let's think about how this applies to participatory narrative inquiry, because, of course, it does. Just like the researchers whose papers I was reading back then, we all theorize about the mental and emotional states of the people whose needs, limitations, and probable responses are important to us. We do this individually every day, and we do it collectively when we embark on a project to solve problems or improve conditions in our communities and organizations. And like those researchers, we have an easier time thinking about external information than internal or relational information.

That's something I have noticed when I talk to people who are just starting out doing PNI work. If you visualize all the questions you could possibly ask about a story as a set of concentric spheres arrayed around it, people new to the work always seem to start with the outermost sphere. They ask questions about the basic facts of the story, like:
  • Where and when did this take place?
  • Who was involved?
  • What was the topic or theme of the story?
  • What problem was solved? Who solved it? What was the solution?
After they've gotten more practice thinking about projects, people start moving inward, inside the bubble of their participants' experiences, to where internal information is important. They start asking questions only a story's teller can answer, like:
  • How did you feel when this happened?
  • What surprised you about it? What did you learn from it?
  • What do you wish had happened?
  • What helped you in the story? What held you back?
Finally, experienced PNI practitioners move into the center, where relational information (that is, beliefs and values) can be found. They start asking questions about what the storyteller thinks the story means about the way the world works, like:
  • Why do you think this happened?
  • Does this happen often? Should it?
  • What would have changed the outcome of the story? Would that be better or worse?
  • Who needs to hear this story? What would they say if they heard it? What would happen to you if they heard it?
Another thing I've noticed is that the closer PNI moves to the center of these concentric spheres, the more it deviates from other modes of inquiry. When a PNI project asks questions anyone could answer about a story, it's hard to distinguish from any other kind of survey-based research (and it's hard to make a case for its use). In such a situation, the story is just another data point, and it's not all that critical of a data point either. You could ask people questions at the outermost level with or without a story, and the answers would not be that different. For example, you could ask people to give you a list of all the problems they solved in the past year, and you wouldn't get much of a different picture than if you asked them to tell you a story about a problem they solved.

When a PNI project asks questions closer to the center of experience, however, the story becomes much more than a data point. It becomes a vehicle by which participants can make sense of their experiences, drawing forth internal and relational information they didn't realize they had (or cared about). As a result, when PNI works well, by the end of the project, everyone learns something about themselves and each other.

So in a way, you could say that my work on PNI has been a continuation of my earlier attempts to get people to "move inward," closer to the center of the experiences and perspectives of those they seek to understand.

On experiment and reality

I have one more old essay to show you. It's an appendix to a paper I wrote for a graduate class, apparently in the sociology department, about an experiment on social interactions among fish. At first I didn't remember the project described in the paper, but as I read I began to remember bits of it. What I remember most is that I did the project in the "fish room" of the biology building basement. The light switch in that room was wired badly, and two or three times I got an electrical shock when I flipped the switch with wet hands. That's a thing you remember.

Most of the experiments I did in my early days as a wannabe-ethologist had to do with social interactions: dominance hierarchies, how kin find each other, tit-for-tat balances, methods of communication, social signaling, intention movements, and so on. I was intensely curious about the evolution of social interactions, because . . . well, I still can't understand how anybody could not be intensely curious about that.

Pumpkinseed sunfish, Wikimedia Commons, Kafziel
The experiment went this way. I netted 150 pumpkinseed sunfish from a pond and put them in a tank. (Or somebody netted them. It says "for use in another experiment.") From those 150 fish I picked out ten groups of three fish of roughly equal size (because any big-fish-little-fish contest is a foregone conclusion).

For each of the ten groups of three fish, I followed this procedure:
  1. Isolation: I put all three fish in tanks by themselves for five days.
  2. Establishment of dominance: I put two of the three fish together and watched them until I could see which one was dominant. (They peck at each other, like chickens.)
  3. Re-isolation: I isolated the loser of the dominance contest for another day. (The winner got to go back into the big tank.)
  4. Test: I put the loser from the previous encounter together with the third (still isolated) fish and watched what happened.
What was supposed to happen, according to prior research, was that the losers in the first contests would remember their low status and lose in the second contest as well. What did happen was that eight of the ten losers won the second time around. As I explained in the paper, this could have meant a wide variety of things, but it could not really be said to mean anything, because the sample size was so tiny. I knew that going in, and so did my professor. It was just a practice project to write a practice paper.
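"So tiny" can be quantified with a quick exact binomial calculation. This isn't an analysis from the original paper, just a hedged sketch: if second-contest outcomes were a coin flip, how surprising would eight or more wins out of ten be?

```python
from math import comb

def binom_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    successes in n independent trials with success probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Eight of ten previous losers won their second contest.
# Under a 50/50 null, that outcome (or a more extreme one) has probability:
print(round(binom_tail(8, 10), 4))  # 0.0547
```

About a 5.5% chance under pure coin-flipping, and the prior research predicted the opposite direction entirely. Suggestive, but with n = 10 nowhere near enough to settle anything, which is exactly the point.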

None of that is interesting. What is interesting (to me, now) is that I wrote an appendix to the paper, and that appendix, even though it's mostly a jokey thing I wrote to myself, connects with participatory narrative inquiry. I can't guess if I actually submitted the appendix with the paper or just kept a copy for my own amusement. In any case, here's what I wrote.



Appendix: The Poorly Understood and Sorely Neglected Behavior of Pumpkinseed Sunfish in Laboratory Tests.

As I reviewed the literature for this experiment, and again as I watched the fish setting up dominance relationships, it occurred to me that although many descriptions had been published of the social behavior of the pumpkinseed sunfish and other species, never had anyone attempted to describe the peculiar suite of behaviors that is shown when fish are placed in a testing tank and observed. I will now endeavor to present an extended ethogram of the experimental behavior of that species, with due attention to the fish-human interaction.

In the course of my work, I soon realized that I could divide the entire behavior of the fish in the test situation into a series of discrete stages that occur repeatedly and in a predictable sequence.

1) Disbelief (D). When a fish is first placed in a strange tank, or the partition dividing a tank is removed, or some other equally amazing thing happens, the fish's first thought is — "I am dead." This has some basis in nature; when a fish is suddenly caught up and thrown into a new body of water, it is most probably in (a) another fish or (b) a net. Thus the fish upon entering the test arena spends some time in what others may call shock but which I prefer to call disbelief (mostly because it is a longer word, and it simply doesn't do to have scientists running around using small words). Now the state of this poor fish would be almost comical, if one were completely callous and cold-hearted (which I am not!); it lolls about on the bottom or in a corner, sometimes rocking gently, for a period of anywhere from ten seconds to half an hour.

2) Escape (E). At some point (as I have said, this is highly variable and begs further study) the fish suddenly realizes that it is alive. Its very next thought is — "If I'm alive, then . . . I'm trapped! I've got to get out of here!" It then proceeds to push its way out of what it assumes to be water but what most annoyingly turns out to be an invisible force field, or what we humans know better as glass. The fish, as any good Vulcan would do, assumes that there has to be a weak spot in the force field, "Somewhere where the ion magnifier exceeds its photon limit. It is only logical." With its mouth open and its gills flaring, it presses here and there and here and there and there and over in that place and down here and up there — you get the idea.

I can see another parallel for this behavior in nature. Surfaces in nature, be they pond bottoms or stream edges, are mostly made of stones; and stones often have fish-sized holes between them. So a fish trapped in, say, a small pool off a running stream, needs only to poke and prod until it finds a way out. The intensity of this behavior often gets quite high and varies substantially among individuals, due undoubtedly to some differences in susceptibility to claustrophobia.

3) Recognition (R). You may have noticed that so far I have not mentioned interactions between the two fish. Far from being unprofessional and unobservant, I reserved the recognition of another fish to its own stage. At some point one of the fish looks around and gasps — "Good God! There is another fish in here!" And it is from this realization that we get the data point "Attacked first," for that fish usually wants to get a good nip in before its fellow occupant itself reaches the R stage.

You may ask why the fish did not notice its companion before, especially when they both decided to poke at the same spot. Yes, this is another parallel in nature. Fish in the wild get bumped up quite a bit: things are always floating by, children will be throwing rocks, crayfish are scuttling around, outboard motors are making a ruckus. So even the most violent escape attempts by another fish are treated as the usual disturbance — get as far away from it as possible, but for heaven's sake don't stand there gawking at it! Thus it is only in a moment of lucid tranquility that the recognition stage arrives on the fish. To the nipped fish, the R stage is entered abruptly and assuredly, as nothing else feels quite like a pumpkinseed sunfish bite.

From this stage on begin the "normal" interactions we record on our data sheets and analyze, ignoring as good scientists the unseen (but standard! at any rate) behaviors described here.

Perturbations of the normal scheme of things are of two types: relapse and awareness. A relapse is caused by a large disturbance, such as the observer tripping over the blind or camera, dropping something loudly, or banging the testing tank with any number of things. (Not that any of these things has ever actually happened to me; I merely heard of them through other experimenters.) A relapse usually drives both occupants of the tank back to the disbelief stage, from which it is a long wait to realization of life, frantic escape, and back to aggression.

Awareness, the second type of perturbation, is often more devastating for the observer because of its psychological implications. This perturbation occurs when the observer is foolish enough to bump the blind or sit in such a way that a bit of his or her clothing shows (the observer who wears brightly colored clothing clearly knows nothing about fish), or cough (this has produced innumerable disasters to science). At this point the fish becomes aware of the fact that "Something . . . is out there . . . watching me." (Or us, if the R stage has been reached.) The fish assumes a position quite like that taken in the disbelief stage, with the exception that the fish faces the observer, glaring intensely this one thought: "I see you, you disgusting finless giant; I know what you're doing; and whatever it is you are waiting for me to do I will try my hardest to avoid." At this time the observer quite predictably mutters (inaudibly, of course, so as to prevent a relapse) several epithets that would not evoke full cooperation if heard and understood.

This concludes the extended ethogram of the true behavior of the pumpkinseed sunfish, adding precious insight to our scientific understanding of this interesting species.



That essay is a silly little thing, but I had something serious in mind when I wrote it, and I haven't stopped thinking about it in the years since. The more you read about the science of behavior in any species (including our own), the more obvious it becomes that a lot of the findings we rely on were derived in artificial contexts, just like my ridiculous project watching fish interact in empty tanks and pretending it meant anything at all about what their lives would be like in a natural setting. (It was a practice project, but the experiments it referenced and sought to replicate followed similar procedures and drew similar conclusions.)

The most obvious example of such blindness in human research is the much discussed fact that almost all psychological and sociological research — research that tells us how "humans" think and feel — is done on WEIRD (Western, Educated, Industrialized, Rich, Democratic) university students. The WEIRD acronym comes from the instantly famous 2010 paper "The weirdest people in the world?" (in Behavioral and Brain Sciences, by Joseph Henrich, Steven J. Heine, and Ara Norenzayan). Other researchers brought up the issue before that paper (for example, the "carpentered world hypothesis" was first put forth in 1973), but the WEIRD name has given the discussion new energy.

As a 2010 New York Times article put it:
[A] randomly selected American undergraduate [is] 4,000 times likelier to be a subject [of psychological research] than a random non-Westerner. . . . Western psychologists routinely generalize about “human” traits from data on this slender subpopulation, and psychologists elsewhere cite these papers as evidence. . . . [R]elying on WEIRD subjects can make others feel alienated, with their ways of thinking framed as deviant, not different.
I'm not going to cite any of the studies that demonstrate the flaws of WEIRD research here — they're easy to find — but I would like to mention a few things I noticed in recent discussions that connect to participatory narrative inquiry.

In a blog post called "Psychology Secrets: Most Psychology Studies Are College Student Biased" (on the PsychCentral blog), John Grohol lists the reasons psychologists are still not widening their research populations. Using university students is convenient; it's cheap; it's the way things have always been done; and it's good enough for the time being. You'll get over it, basically.

Grohol then says this:
There’s little to be done about this state of affairs, unfortunately. Journals will continue to accept such studies (indeed, there are entire journals devoted to these kinds of studies). Authors of such studies will continue to fail to note this limitation when writing about their findings (few authors mention it, except in passing). We’ve simply become accustomed to a lower quality of research than we’d otherwise demand from a profession. 
Perhaps it’s because the findings of such research rarely result in anything much useful — what I call “actionable” behavior. These studies seem to offer snippets of insights into disjointed pieces of American behavior. Then someone publishes a book about them, pulling them all together, and suggesting there’s an overarching theme that can be followed. (If you dig into the research such books are based upon, they are nearly always lacking.) 
Don’t get me wrong — it can be very entertaining and often interesting to read such books and studies. But the contribution to our real understanding of human behavior is increasingly being called into question.
I have learned over the years that if I try to defend participatory narrative inquiry as being "scientifically valid" I will fail. PNI just doesn't hold up as a scientific endeavor. Its participants are given too much control over the process for PNI to prove anything conclusively. There's no control group. The sample is self-selected and non-representative. Interpretation is biased and variable. There are no inter-interpreter validation checks. Conclusions are idiosyncratic and local. Results cannot be replicated, not even later on the same day. What it all means depends on whom you ask, and when, and how.

This is what I mean when I say that PNI is not a science; it's a conversation. When you invite people to tell whatever stories they want to, interpret their stories however they like, talk about their stories in groups, and draw their own conclusions, "proof" isn't a very useful word. "Useful" is a useful word. Above all else, PNI aims to be useful.

In a way, PNI is the ultimate anti-WEIRD research paradigm, because it aims for a real understanding of human behavior — that is, an understanding that is contextually situated, internally relevant, externally meaningless (and happy to be so), and purposefully, aspirationally, hope-fully actionable.

Here's one more quote about WEIRD research, from a Slate article, that relates to PNI:
So the next time you see a study telling you that semen is an effective antidepressant, or that men are funnier than women, or whether penis size really matters, take a closer look. Is that study WEIRDly made up of college psychology students? And would that population maybe have something about it that makes their reactions drastically different from yours? If so, give the study the squinty eye of context. As we often add “… in bed” to our reading of the fortunes in fortune cookies, it’s well worth adding “… in a population of Westernized, educated, industrialized, rich, and democratic college students” to many of these studies. It can help explain many of the strange conclusions.
The purpose of PNI is, precisely, to apply the "squinty eye of context" to statements about what is normal, or real, or human, so that they can grow into insights we can use in our lives and communities.

The types and categories of research

As I said above, I take this connection across three decades to mean that PNI was in a sense fated to happen when complexity theory worked its way into the study of social behavior. As a nice side effect, it also means that my professional career has been a lot less rambling and accidental than I thought it was. At least I've rambled over some of the same spots, and that's a comfort.

I can't help but wonder, though, why it took me so long to realize that I was still working on the same issues. Why did I not see that my work on PNI was a continuation of "not getting over" my early concerns about hasty assumptions and unexamined perspectives in social behavior? I don't know. Maybe it was because I left science in a huff. Maybe the idea of "leaving science" was the problem in the first place. Maybe science, or research, shouldn't be so easy to leave.

I don't know if it's because of reaching the twenty-year mark or what, but I've noticed that I've been describing my career differently over the past year or two than I used to. People always ask how I got started doing story work, probably because I don't sound like any sociologists or anthropologists (or storytellers) they know. I used to say "it was an accident" and describe how I applied for a job at IBM Research because my husband was already working there and we could commute together, and I ended up getting hooked on "this story stuff" as a result. That's all true, but lately I've noticed myself saying, "I started out as an animal behaviorist, but after a while I switched species." That always gets a laugh, but probably the deeper reason I say it is that I'd like to have a more coherent story to tell myself. But it's not a fictional story; it's a real connection. So why didn't I see it?

Maybe it's not just me. Maybe it's the way we all think about research. Maybe it's too organized. Maybe it has too many types and categories. Maybe sometimes — temporarily, thoughtfully, and purposefully — we need to place everything on the same level and let new connections appear. Yes, we need more diversity in our research populations (both researcher and researched), but maybe we need new connections among some other things too:
  • sociology, psychology, anthropology, and ethology;
  • proof, utility, and action; 
  • participation, observation, and experimentation; 
  • contextual and universal conclusions; 
  • academia, business, government, and even some out-there independent scholars like me, who bounce around from one field to another, thinking they've crossed vast distances when they've really just been pacing the same small circles for decades.
Why don't we all walk around together finding out useful things? That sounds good. Let's do that.