PNI Justified

This excerpt, about the history and provenance of PNI, is from More Work with Stories: Advanced Topics in Participatory Narrative Inquiry (which is at this time about 90% written). People keep telling me they find this part of the second book particularly useful and interesting (because they're curious about where this all came from). So I thought I'd bring this excerpt "up" here to the blog while the second book waits to be published.


I hate it when people claim their truths are more truthful than everybody else's truths. Because then I have to go and find out what other people say about whether that's true, and then what people say about that, and so on. I've read a lot of books that say "do this" and "do that" without much explanation of why you should, other than a thorough condemnation of everything else. No, what I like best is when people can tell me a story about how something has been useful to them in a real situation. If they used something and it helped them do something, I might be able to use it too. I can use that sort of thing far more than I can use absolute statements of truth and falsity. What would be the use of a cookbook that had nothing in it but praise for olive oil and condemnation for butter? What could you do with that? Not much, at least not by itself. Experience is the best teacher: mine, yours, and ours.

Still, in introducing PNI to you I wanted to give you some idea of why I recommend the things I recommend. So I started writing down some questions people have often asked about justifications for elements of PNI, like, “Why does it matter that the stories are raw?” I soon realized that every answer I could give was a story about how I (and others) learned things that changed the way we did things ever after. This reminded me of the most frequent piece of feedback about earlier editions of this book [actually, the other book; I'll be fixing that]: that there were too few stories in it. So instead of making claims to truth about PNI I decided to tell you some stories about how it developed the form it has today. I have written them in chronological order so you can follow PNI as it grew: asking questions about stories; listening to stories (rather than telling them); helping people make sense of collected stories; asking people about their own stories; keeping stories raw; catalyzing sensemaking.

Let me begin by saying that I did not make any of the discoveries that formed PNI all by myself. Over the years I have worked with a large number of colleagues, clients, participants, questioners and correspondents (all of whom you will find thanked in the acknowledgements appendix at the end of Working with Stories). In each of these stories I recount events that depended on the work of several people. The reason I say "I" a lot is that I am talking about the particular moments in which I first encountered the insights that became cornerstones of PNI methods. Other people also discovered those same insights, sometimes at the same moment I did and sometimes at different moments. In other words, these are my memories alone, but they are not my accomplishments alone.

My policy in recounting these stories has been to name people only when I can name them in a positive light; when mistakes were made, if they were not my own I describe them only in a roundabout way. My own mistakes I am happy to divulge, because I value them and hope you will too. I have a rule which people who know me will recognize: it's called the embarrassment rule. The rule states that if I look back on work I did a year or two ago and don't find it embarrassing, that's a bad thing because it shows I'm not making progress. These stories show simply magnificent progress.

Why ask questions about stories?

The first story I have to tell you about the development of PNI took place in 1999, the first year I worked on organizational narrative, at IBM Research. As you might guess, a research group working with stories was a bit of a misfit in IBM's computer culture. There was a danger of being considered insufficiently serious, so we found it expedient to consider ways we could prove both our utility to IBM and our legitimacy to its culture. One of the waves passing through the computer world at that time was XML. (XML stands for eXtensible Markup Language and is simply a standard way of specifying how the contents of a set of documents will be structured and described.) So the idea of using XML to do something with stories, something that might address IBM's real needs, came up. One of those real needs was to reduce information overload as an inhibitor of organizational learning. So I was asked to consider how we could use XML as a tool to address issues of information overload related to organizational story databases, perhaps of best practices or expert advice.

My first ideas about the project's goals reflected that first focus. I thought about helping people organize stories so they could find them again; about helping people select and sort stories to reduce how many they would need to read to "zero in" on the solution that would solve their problem; about helping people summarize and visualize stories so they could skim hundreds at a glance. Because XML is a system for specifying metadata that accompanies information, my task was to identify metadata that would be required to accomplish the goals of organization, selection, sorting, problem solving, summarization and visualization of stories. (Metadata is just data that describes other data, like a name that describes a picture.)

How to begin understanding what metadata people might want to collect about stories to meet those goals? I already knew that the idea of classifying and deconstructing stories was nothing new. Aristotle distinguished tragic from comedic drama and epic from lyric poetry. In 1916 Georges Polti proposed that all stories could be classified into thirty-six dramatic situations (including such categories as "The Slaying Of A Kinsman Unrecognized" and "An Enemy Loved"). Everyone generates and exchanges metadata about stories every day, in discourse, memory and anticipation. In fact, people telling stories often include explicit metadata about the story or the storytelling situation to prove that the story is worth listening to—"I’ll never do that again" or "That was an incredible experience."

So I asked myself this question: What are all the questions anyone could possibly ask about a story? From that I arrived at another question: What are all the questions anyone has ever asked, or recommended asking, about stories? The idea was to arrive at a global list from which one could draw sets of questions for particular contexts of use. My original list of fields to consider, in rough order of the degree of attention paid, was: narratology, folklore study (comparative and contextual), professional fiction writing, professional storytelling, case-based reasoning, narrative organizational study, narrative inquiry and analysis, narrative psychology, narrative philosophy, knowledge management, knowledge representation, artificial intelligence, information retrieval, literary theory, and journalism.

Having decided on this list of fields, I sought the seminal books and papers in each field, then began looking in them for instances of metadata -- questions, categories, segmentations, classifications, analyses. Everything that didn't start out as a question I reframed as a question. I'm a natural organizer and am never happier than when I have hundreds of similar-but-not-quite-identical things to put into little piles; and that's just what I did.

I found that looking for story metadata was like breathing: it was everywhere. In fact the problem was not to find metadata related to stories; the problem was to make sense of the huge mass of it and reduce it to something tractable. So I read and read, and after I reached a feeling of satiation in every field I had intended to cover (this took nearly three months, which was a rush job compared to the way I like to do things) I stopped writing things down and started cutting things apart. I snipped the questions I had written into little slips of paper, then played with them. I allowed a structure to emerge slowly, continually checking and adjusting to take account of new perspectives. At a few points I reiterated the design by taking apart the whole structure and putting it back together again.

The number of questions topped out at around four hundred, and they formed slowly into three large groupings at the top level of a hierarchy several levels deep. The hierarchy of questions represented a fairly inclusive mapping of the things people asked about when they asked about stories. The three largest groupings were what I now call the story dimensions of story form, function and phenomenon. As far as I know, the XML standard itself was never published and was never used in any computer database.

Just as important as the discovery of story dimensions, I think, was my increased understanding of what asking questions about stories could do. I went into the project thinking of only the most pedantic, though worthwhile, reasons to ask questions about stories. I came out of it with a far stronger vision of what questions could do, and that vision has influenced all of my later work.

Some of this discovery was particularly inspired by specific examples of question-asking I found in the sources I read. I'd like to tell you about some of these examples so you can see for yourself where these ideas came from.

Why ask questions about story form?

Metadata on story form can help people compose better, stronger and more compelling stories; understand the working parts of stories they have been told; and think about stories they tell themselves. Two elements of my explorations into story form stand out most.

The first was a quote in Doug Lipman's excellent book The Storytelling Coach. In one part of the book, Lipman explains three types of "suggestion" a storytelling coach can give to a person working on a story: a positive suggestion (what if you did this), a personal reaction (when you said this I felt that) and a question. Said Lipman:
It may seem odd to classify questions as a form of "suggestion." Yet they rank as my most powerful kind of tool for drawing on your creativity while directing you toward specific improvements. Questions point you toward answers within you, the storyteller--not within me, the coach.
I can still remember the moment when I read that statement. Something within it spoke to me of a purpose much more exciting than simply finding a story in a database. I began to see that using questions to explore stories might have impacts on making sense of the world, not just finding information.

The second piece of story form exploration I remember well was a moment when I was using the software Dramatica Pro. This software, which still exists and has a wide following, is a tool that helps screenwriters and novelists develop and improve their stories. Use of Dramatica Pro consists, in the main, of answering many dozens of interrelated questions about a story's characters, plot, theme and so on. The tool essentially embodies knowledge about story form and creates a facilitated process in which writers are guided through the application of that knowledge to their particular stories.

Our group bought a copy of Dramatica Pro, and I played with it as part of my exploration of professional screenwriting tools. Not having a story in mind, I thought I'd play with a folk tale. Casting about randomly, I chose the story of Little Red Riding Hood. I knew it well (so I thought), having heard and read many versions of it over the years. As I answered the questions put to me by Dramatica Pro an awareness grew, then suddenly a door opened through which I saw the familiar story in a completely new light. I had never had the faintest inkling of the sexual nature of Little Red Riding Hood until that day. I quickly looked up the folk tale on the internet and found the issue much discussed, with many viewing the story as a cautionary tale to girls about sexual violence. This was astounding to me. When I chose the tale to consider, nothing could have been further from my mind. I remember the moment when I realized my answer to one of the questions put to me by Dramatica Pro was "rape." I jumped up from my desk and began pacing the room, possibilities bounding all around me. If I could come to such an eye-opening discovery about a folk tale I had known (or thought I had known) since childhood, what fountains of insight might people be able to discover about stories from their own organizations and communities?

Why ask questions about story function?

You will probably have recognized that my original goals for the story XML project all had to do with story function. I'll type them here so you don't have to find them again: organization, selection, sorting, problem solving, summarization, visualization. All function, right? That shows the frame of mind I had going into the project, though it expanded manyfold by the time I came out the other side.

In the area of story function I remember one very important experience that helped me understand what answering questions about stories might do for people. Roger Schank was, and is, one of the strongest proponents of the use of stories for knowledge transfer, mainly in a field called case-based reasoning. I was reading about the work of Schank and others on this topic, and I came across a fascinating account by Jorn Barger, a programmer who had worked on some of Schank's story databases. This is the bit that made me jump up out of my seat:
All the links from story to story in the Ask-Tom casebase had to be 'hand-crafted' or hardwired, which ultimately meant looking at every possible pair of clips and asking whether either would make an interesting followup to the other, and which of the eight CAC-links it made most sense under.
What does this mean? The "Ask-Tom casebase" was a collection of videotaped storytellings built to help people learn about complicated, knowledge-rich topics. Barger was describing the process by which the database's system of typed links between stories was created. To me, in the context of compiling a list of all the questions anybody might ask about stories, this behind-the-scenes comment was nothing less than revelatory. Why? Because through it I realized that the creation of such typed links, an activity which the AI researchers found an onerous task and seemed embarrassed to admit was being done by clerical help, was a perfect sensemaking activity for people telling each other stories. Their problem was my solution! A system that helped people think about why and how stories connected with each other would inevitably help them make sense of the stories, and thus the topic they needed to explore. For the purposes of building an expert knowledge system this might be clerical work; but for the purpose of making sense of complex topics in support of decision making, it was empowerment.

This observation dovetailed with one my husband and I had earlier when we built our educational simulations of the natural world. To our surprise we found that we learned much more by building our simulations than anyone else could possibly learn by using them! In fact, if we had been able to secure funding to go on, we would not have built better simulations but would have built a simulation-builder so that other people could learn what we learned by building their own models of the natural world. In exactly the same way, building a web of typed, annotated links among stories can help people understand much more about a topic than simply making use of links created by others. This "don't build it, build a builder for it" idea has resurfaced in many of the projects I have done in the years since, both in workshop exercises and in online story exchange.

Why ask questions about story phenomenon?

Of the three dimensions, the utility of questions about story phenomenon surprised me the most. Did you know, by the way, that people have been studying everyday stories told in organizations since the 1980s? Many people I meet today do not seem to be aware of this and because of it miss some real nuggets of insight buried in the research literature. The writings of David Boje, Mary Boyce, Yiannis Gabriel, Alan Wilkins and Joanne Martin stand out as particularly insightful. I strongly advise anyone interested in the field to look up some of their papers, and I list some I found most useful in the References appendix.

When I ponder what got me excited about asking questions about story phenomenon, three stories come to mind, mostly from that literature, as having shown me the way. I'll tell them to you now.

In Joanne Martin's work I found the uniqueness paradox of organizational stories. Martin's fascinating 1983 paper presents an apparent contradiction in the stories told in organizations, thus:
Researchers have noticed that organizational cultures, and in particular organizational stories, carry a claim to uniqueness -- that one institution is unlike any other. ... In spite of these claims to uniqueness, cultural manifestations share common elements and express common concerns.

In other words, everyone at Company A says Company A is unique because it uniquely values its employees. But everyone at Company B says Company B is unique because it uniquely values its employees. The paradox is that they are both right. I interpret this to mean that the meaning of the term "valuing employees" differs among organizations in such a way that they probably are unique, but only in the details. And it is in the details that organizational stories operate. People tell stories about their organization or community to communicate, and negotiate, the details of its unique character. What excited me about this paradox was not so much that it existed, but that listening to the stories people told could help you understand the unique character of the organization. And how better to find out whether stories represent uniqueness than to ask questions about them?

The second story that stuck with me in my reading about story phenomenon was the nine-day fortnight. Veterans of the field will recognize this story from its title, but for everyone else I'll reproduce the quote that introduced me to it. It is from a paper by Alan Wilkins (whose writing, I might add for those wary of scientific jargon, I found exceptionally clear and insightful). It goes like this:
. . . most employees at one company I researched have been told the story about how the company avoided a mass layoff in the early 1970s when almost every other company in the industry was forced to lay off employees in large numbers. Top management chose to avoid a layoff of 10 percent of their employees by asking everyone in the company, including themselves, to take a ten percent cut in salary and come to work only nine out of ten days. This experience became known as the “nine-day fortnight” by some and is apparently used as a script for the future of the company. In 1974 the company was again confronted with a drop in orders, and it went to the “nine-day fortnight” scheme for a short period. When other companies in the industry began layoffs, the old-timers used this story to quiet the anxiety of concerned newcomers…. Employees occasionally tell [this] story to communicate that this is the "company with a heart". Everyone I talked to in the company knew the story, which is used both as a symbol and a script.

What excited me about this story was the way in which it took on a life of its own within the organization. When you consider story form, a story is like a brilliant diamond you turn from side to side. When you consider story function, a story is like a tool you apply with skill to a task. But when you consider story phenomenon, a story is like a river flowing through the life of the organization. How much better to ask questions about the river as it moved through time and space and change, and how much more that could offer to those who want to make decisions about the life of the organization.

The third story I want to tell you is from the work of the folklorist Richard Bauman. His slim volume Story, Performance and Event introduced me to the possibilities of considering the contexts of storytelling. If you will permit me, I will ease my burden by quoting myself quoting Bauman (this is from a chapter I wrote, with Dave Snowden, in the book Strategic Networks: Learning to Compete).
[Some] stories .... may appear to be "about" nothing. Anyone looking for concrete evidence of "knowledge transfer" or "peer learning" – or even truth – may discard these stories, which are in some ways the most important to retain. Bauman (1986) describes how stories may be patently untrue at a purely factual level, but may reveal much deeper truths about the community in which they are told. Bauman quotes one man, during “an exploration of storytelling and dog-trading in Canton, Texas”, who says, “when you get out there in the field with a bunch of coon hunters, and get you a chew of tobacco in your mouth, and the dogs start running, you better start telling some lies, or you won’t be out there long.” In other words, among coon-hunters lying is a mark of truthfulness, that your word, deep down, can be trusted: that you belong.

Such storytellings are critical determinants of identity negotiation. Says Bauman: "Since at least the time when a distinctive body of American folk humor first emerged during the early years of the American republic, the hunter and the trader have occupied a privileged place in American folklore. Dog trading at Canton is a thriving contemporary incarnation of this American folk tradition. The tall tales and personal narratives of its participants place them in unbroken continuity with the generations of hunters, traders, and storytellers that have given American folklore some of its most distinctive characteristics." In other words, these hunters tell the stories they tell to "place" themselves within the "unbroken continuity" of a larger cultural identity. When one coon hunter told Bauman, "any man who keeps more'n one hound'll lie to you", he was representing his identity as a member of a noble group, not complaining or bragging. You can imagine that someone observing storytellings like these ... and looking for evidence of "best practices" transferred would conclude that the group performed no function and should not be supported, when in fact they could be on the verge of reinventing the organisation.
This insight, that context can upend content -- even to the remarkable extent that lying can be seen as a sign of truthfulness -- played a part in convincing me that if any questions could be asked and answered about a story, they should be questions of context. You might suppose I will now say that only questions about story phenomenon have proven useful in real story projects. But that is not the case. In fact all three dimensions of story work together in a synergistic way to help people understand the full meaning of stories told.

Practical proof

All of these ideas were enticing, and reinterpreting Little Red Riding Hood was fascinating. But when did I first find out that asking questions about stories would be useful to organizations? Here is the story of my first proof of the concept. In 2001 I was working with Dave Snowden and Sharon Darwent on story projects for IBM clients. We knew that bringing my ideas about questions together with their archetypes (which I call story elements) should be useful, but we weren't sure exactly how to go about merging them. We found a client particularly interested in new ideas (thank heaven for such people). This client had collected some forty stories about a problem they wanted to address.

I received the audiotaped stories and transcribed them, then set out to answer some questions about them. I looked at my list of four hundred questions. It would take forever to answer even a hundred questions about each of forty stories, so I needed to prioritize. I looked through my list and chose my favorite ten or so questions to ask, those I felt would be the most revealing. It was important that I had the stories on tape, because I could hear the emotions in the voices of the tellers and audiences. As I recall, most of the questions had to do with emotional context, like "Why was the story told?" and "How did the audience respond?" Yes, of course my interpretation of the stories was biased, but at the time that seemed a reasonable approach. At any rate, I annotated all of the stories with answers to the questions. I also linked the stories to some sets of story elements employees of the client had developed in a workshop setting.

I developed some very simple prototype software to display the stories and patterns in answers to the questions. The interface was laughably simple. I believe it had nothing in it but some bar graphs arranged on a grid. However, the results amazed all of us: myself, Dave and Sharon, and our client. With only forty stories, a handful of questions and story elements, and the simplest possible viewing device, patterns simply jumped out of the stories. The one I remember best was that stories that involved customers doing stupid things with the product were more likely to be hearsay than direct experience, and stories from direct experience tended to involve customers who used the product correctly. Here was a river of meaning revealed! People were passing around rumors that were largely disconnected from reality. The client was able to improve their training to help staff members understand customers better, and they had a very specific set of perceptions to counter in the process. We asked questions about stories, and in concert with story elements that made sense to the organization, we discovered actionable insights for positive change.

What better proof of concept could there be? I can remember the moment when I was playing with my prototype and noticed that the stupid-customer stories were mostly rumors. It was like watching a door open onto a scene that had been previously hidden. The process of asking questions about stories has proven to be so useful that I have come to expect such a revelation to appear to someone at some point in every project. I have rarely been disappointed in that expectation.

Why listen to stories?

You may be surprised to hear it, but I started out working with stories in the same way many people do: on the telling side. I got excited about all the advice on "how to tell a great story" and assumed that only the best, most compelling, most carefully crafted stories could "get things done," whatever it was you wanted to do. How did I change my focus from telling to listening? I'll tell you.

My second year at IBM Research, in 2000, was spent on a project researching how storytelling could improve computer-aided learning (see the story, called "Incorporating narrative into e-learning," in the project stories chapter). As I started the project, I tried, with increasing frustration, several different ways to help instructors write purposeful stories that would help people learn how to use software or do any number of things more quickly and easily. I kept failing. The stories I crafted were always less compelling, less memorable, and less educational than the experiences they were based on (from my own experience and from things I'd heard). This was true even though I was "improving" the stories using all the wonderful advice I could find. I had read around a dozen books on story form, from narratology textbooks to books for professional storytellers to McKee's screenwriting bible Story. I followed their expert instructions to the letter, but somehow every time I improved the stories an essential spark was lost. Like the dogged worker I am, I kept trying, nose to the grindstone.

At the same time, my colleagues and I were listening to stories. We were collecting lots of stories from real people in real workshops. However, we weren't collecting stories to show to anyone. The idea of "getting stories to where they need to be" had not yet entered my mind. We were collecting stories to find out what people did (task analysis), what they needed (needs analysis) and what they lacked (gap analysis). I did expect that we might be able to draw some "raw material" for our perfectly crafted stories from the workshops, in the same way a novelist visits cities they plan to write about, as background preparation for the real work of building perfect stories. But it went no further than that.

I remember the moment when I realized that the perfectly crafted stories I was trying to create were sitting all around me quietly waiting to be noticed. I was sitting at my desk looking at the stories I had written and the stories we had collected. Suddenly I saw that the raw stories of personal experience we had collected from real people, without any expectation of retelling them, were already the stories I was trying to build. I was starving in paradise. I remember having a sudden mental image, like a waking dream. In the dream I was walking on a road, struggling to get to a distant city which seemed to recede further with each step I took. But it was not a road at all. It was a bridge, and under it a gently undulating river of stories flowed directly into the city. All I had to do was get off the bridge, step onto the waiting boat, and glide forward on the river of stories.

Why did I discount the stories we had collected? Why did I not see that they were waiting to carry me where I wanted to go? Why did I continue banging my head against the wall when the answer was right in front of me? Because I had a dangerously narrow idea of what a story could be. (I have pondered this dangerously narrow idea, in its many manifestations, ever since.)

After this revelation I told my colleagues about the idea of using stories we had collected instead of trying to write our own. They loved the idea as much as I did, so we changed the project. We abandoned our original ideas about how writing "good" stories would improve e-learning and instead concentrated on figuring out the best ways of helping people get the good stories regular people already tell to where they needed to be. As a result the learning resources we were creating, and our ability to help other people create such resources, improved tremendously. The project that threatened to fail became a resounding success.

This experience, and many similar ones after it, convinced me that true, raw, real stories of personal experience are more useful for almost every task you can imagine than are stories of pure fiction. In the few situations where fictional stories are preferable as the end result -- and there are such situations, of course -- considering true stories will create a far more effective fiction than creating one from whole cloth (if that is even possible). The use of raw stories of personal experience has become one of the cornerstones of participatory narrative inquiry.

I've since come to realize that people who work with stories in organizations and communities (and here I am not talking about professional storytellers) seem to go through three phases, which roughly match my three dimensions of story (form, function and phenomenon). People seem to start out, as I did, infatuated with story form: they memorize McKee's Story and try to turn every story into a "great" story. Once they get past that they start thinking about how they can "use" story function to change situations, inject learning, propel messages, and so on (all of which is fairly mechanical thinking). And finally they arrive at the phenomenon stage where they begin to see stories as elements in a complex ecology. They start thinking about ways to tend stories, herd them, take care of them, and get them where they need to go. That final stage, in my opinion, is the best place to end up when you want to work with stories in communities and organizations.

Why ask people to make sense of collected stories?

Another question I often get comes from clients who like the idea of finding patterns in stories, but do not like the idea of asking people to make sense of their own stories in sensemaking sessions. This question often comes from people who pay for projects but are not among the group of people being asked to tell stories. It might be a steering committee, say, or a human-resources department, or an agency whose responsibility is to oversee benefits to a group of people in need. Such a client might say, "Why should we let people make sense of their own stories? Shouldn't we be the ones to do that? Or some expert consultants?" My position on that -- now -- is that having expert consultants work with an organization's or community's stories is an exercise in futility. But that was not the position I once held.

The story I have to tell about making sense of collected stories is yet another story of being dragged into the light of truth. This story took place soon after I started working with Dave Snowden and Sharon Darwent doing story projects with IBM clients. This is mostly their story, in fact, and it is a story of accomplishment. I participated only on the fringes of it, in discussion over the phone, though I occupied a more central role in other similar stories afterward. Sharon and Dave had started out just as I had, writing crafted stories to help clients achieve goals, and they had made their own independent transition to collecting and valuing raw stories. At the time we joined forces, we all believed that expert interpretation of stories was best, both in answering questions about stories and in building larger stories out of them. I did it; they did it; we all considered it a strength of our expertise. We thought asking the people who told the stories to build things out of them -- well, we didn't think about it, that's what happened. It didn't register on our radar.

The turning point came on one of the first projects I helped support as a newcomer to the group in 2001. In this project the group had helped the client collect videotapes of something like a hundred retiring employees describing their long careers. In our enthusiasm we had allowed too many people to generate too many hours of videotape, and we realized that we could never get through them all in time to meet the project deadline. After a flurry of frantic discussion we decided to ask the employees themselves to watch the videotapes. We planned to distribute the videos so that every participant saw a few interviews and every interview was seen by a few participants. Then we would invite people to a workshop in which they would interpret the stories together and come up with their own conclusions, saving us the trouble. (I make this sound like a hugging-all-around solution, but actually there was a lot of recrimination about who got the bright idea of collecting so many stories without thinking through how we would process them all. I must say that the person who collected too many stories was the same person who came up with the excellent idea that saved the project and changed the approach; so we forgave that person in the end.)

We were worried going into this workshop that we would have a lot of work to do after these uninitiated, amateur interpreters had finished their exercise in understanding. But we decided to go ahead anyway, thinking that at least our task would be reduced. You can imagine our astonishment when we found that the quality of the results exceeded our previous finely tuned expert interpretations. Not only that, but when we reported the results, unadorned by our expertise, they resonated better with our client as well. The amateurs didn't falter or fail. They outran the experts by a mile. This was the first manifestation of the PNI principle that people know their stories.

After that project we abandoned all attempts to build things with stories ourselves and instead concentrated our efforts on exercises that helped people build understandings from their own stories. The insight inspired much of our subsequent work on narrative sensemaking in groups. Before this project I had been using grounded theory, which is a system for enabling the emergence of theory grounded in collected materials. I abandoned that practice after we started supporting group sensemaking, but later came to understand two things: that narrative sensemaking can be a form of grounded theory for collaborative groups; and that expert attention to stories and patterns does have a place in story work, when it is used as catalytic material. (That story is in the section "Why catalyze sensemaking?" later in this chapter.)

The claim that participatory sensemaking is superior to expert analysis, more than any other in this list, is difficult for many project funders to accept. It is one thing to allow the people being researched to tell stories; it is another to allow them to answer questions about their stories; but it is something powerful, sometimes powerfully dangerous, to allow the people being researched to build things out of the stories they have told. Why is that? I think it is because built things take shape and begin to have collective lives. They become useful to those who built them. They become tools, and there is always the worry that tools can become weapons. But that worry is mostly an illusion: only the guilty are suspicious.

Let us say you own a coffee shop. You want to find out what your customers think. You ask them to tell you stories. You ask them what their stories mean. You look for patterns in those stories and answers. This is all well and good; but are you willing to let them use those stories to build a vision of what your coffee shop should be like in five or ten years? What will they be able to do with that? Will they be able to tell you what to do with your coffee shop? Is that what you want?

Most people would consider that a nightmare scenario, at least when it is described in that way. But that is not the only way to describe participatory sensemaking. That scenario misrepresents a few elements of what might happen. First, it makes it sound like the coffee shop's owners would be excluded from the sensemaking or that their voices would be drowned out. That is a common fear among project funders. Second, and more dangerously, that scenario assumes that sensemaking can result in only one story told at one level.

Let me paint you a different picture. Let us say you own a coffee shop. You want to find out what your customers think. You ask them to tell you stories. You ask them what their stories mean. You look for patterns in those stories and answers. You then hold a workshop where you ask your customers to come to the coffee shop, look at the stories and patterns, and engage with you in a series of story-building exercises in which each group important to the coffee shop will get a chance to tell their unique story. When you are finished, one wall shows a story built by the shop's regular customers, for some of whom the shop has been a second home for decades. Another wall shows a story built by tourists who just happened to be in the city that day and saw the sign. Another shows a story built by employees of the shop, current and former. On the fourth wall is a vision of the shop, in the past, present, and future, by three generations of the shop's owners, including yourself. The four walls of your shop represent a nested story, a story that contains many perspectives. Is there conflict among the stories? Of course there is. But it all comes down to this. Which is more useful to you, the owner of the shop? One wall or four?

Allowing only outside experts to make sense of collected stories jeopardizes success in working with stories for two reasons. It cannot help getting essential things wrong, through not understanding subtle nuances of context which only insiders can know. And it is incapable of making useful insights fully resonate and changes actually happen inside a community, because it is not "of us." It is a paradox of control. When you let people work with their own stories and make sense of their own situations, your ability to make better decisions will grow alongside theirs. When you do not let this take place, your decision-making ability will still grow, but slowly. You can share your sense-making with the people who told the stories for reasons of egalitarianism, but you don't even need that justification. You can be ruthlessly selfish and still see that people should work with their own stories. It helps you as much as it helps them.

Why ask people questions about their own stories?

In this little history of PNI you have already read about how I discovered the merits of collecting and using real stories of personal experience. You have read how I discovered the benefits of asking questions about stories. You have read how I discovered that asking people in groups to make sense of collected stories produces deeper insights than outside experts can hope to achieve. You have read how I discovered each of these things not by virtue of the tremendous foresight housed in my prodigious brain, but in retrospect, after circumstances had dragged the solution in front of me and patiently waited while I ignored the obvious. With this preparation you will not be surprised to hear that I ignored the now-perfectly-obvious benefits of asking people what their own stories meant. Let me tell you how I found this out.

In the first two years after I started working with Sharon Darwent and Dave Snowden on story projects for IBM clients, we carried out something like twenty story projects. In each project we asked our clients -- that is, the project funders, not the participants -- to answer questions about the stories we and they collected. Just like the people building expert systems, we saw question answering as "clerical work" some poor unskilled worker had to wade through. Often clients agreed to do this, but when they actually saw the dozens or hundreds of stories collected, most balked. What could we do but answer the questions ourselves? I remember many a late night when Sharon and I sat with hundreds of stories marking answers to questions about such things as emotional tone and reported origin. This turned out to be an opportunity disguised as a problem, by the way. I found I loved answering questions about stories, even when it did take up half the night. I learned a lot about stories and storytelling by doing it. In fact I recommend answering some questions about stories yourself, because it helps you learn more about stories than any textbook can. When you sit with stories they speak to you, but you must sit with many stories, not a few; and you must give them the time, attention and respect they deserve. It sounds strange to say it, but it is true.

However, as much as I liked the work, we needed to reduce the work hours we were putting in per project. It just wasn't cost-effective to carry on this way. We talked about this problem constantly, and after several low-ratio projects like this we agreed to try something new. We started trying, tentatively at first, to ask people to answer questions about stories they had just told. Imagine our surprise when we found out that people not only didn't mind doing it, but seemed to get something useful out of it. In retrospect nothing could be more obvious, but it was not obvious going in. We looked for compliance and found empowerment.

I cannot speak for any of the other people involved in this work, but I myself did not see this revelation coming. I was dragged kicking and screaming into doing it right. I have tried to remember on which project we first asked people to interpret their own stories, but I can't remember which it was. I think this is because it happened gradually. At first we combined questions we answered with questions participants answered because we were unsure people would comply with our request to "help us with the clerical work" of answering questions about stories. What I do remember, and very well, is that as we began to gather more and deeper reflections from the tellers of stories, the patterns we found increased in utility by orders of magnitude. It was only when we started paying real attention to the interpretations people made of their own stories that some of the real "wow" patterns started jumping out of what we had collected. We could never have imagined some of the things participants told us about their own stories, not when they all spoke at once. People spoke of hope and fear and responsibility and courage. They knew things about their own stories that no outsider could see, even if they read the same stories for weeks or months on end. It was as though we had been searching in the dark, and someone switched on a bright light.

I now consider this to be one of the principles of PNI: people know their stories. There is no better foundation on which to work with stories than stories combined with what their tellers say about them.

If you go back and reread the paragraphs in the section called "Why ask questions about stories?" you will see that even though I didn't see this revelation coming, I certainly should have seen it. The signs were all there. When Doug Lipman said asking people questions about the stories they were building was helpful, I should have realized the same would be true for people talking from their own experience. When I rediscovered Little Red Riding Hood with Dramatica Pro, I should have realized that people could rediscover the stories they told themselves about their own lives. When I saw that building links between stories was not clerical work, as Roger Schank thought, but an opportunity for sensemaking, I should have seen that another type of clerical work -- the type I was spending late nights doing -- was just as useful. When I read about the nine-day fortnight, I should have noticed that Alan Wilkins said, "Everyone I talked to in the company knew the story." I should have thought that asking people about the story might have informed his understanding of what the story meant to the organization. Maybe I recall these inklings the most strongly not because they impressed me at the time, before I discovered asking people about their own stories, but because they supported that realization so well later, in retrospect. It's a theory.

I am fully aware that some people will read this story of discovery and remain unconvinced that asking people about their stories is more useful than having experts analyze them from afar. Certainly many in the field of narrative analysis will question it. Is that not the role of the expert, to know more than the "informant" who tells the story? As I hope the above story shows, I do not know this by abstract principles but by hard-won experience; but let me explain it in more abstract terms, for those who prefer their explanations that way.

One of the problems with direct surveys is that it is so very easy to create situations in which the acceptable answer is embedded in the question. In fact it is nearly impossible not to do this. It is so very easy to find out what people believe you want them to say, and so very difficult to find out what they actually believe. Let's say you want to ask about politics in a workplace. Can you ask, "Are things getting more political or less political at work?" I suppose you can ask, but why bother? Is anyone going to answer a question like that honestly? Many employee satisfaction surveys are like this. I like to joke that they could be replaced with one giant question: "Do you want to keep your job?" The exercise is one of guessing the right answers, not of exploring issues. The whole thing becomes a charade, a grotesque play that amuses no one and changes nothing.

Now consider what happens when you ask people to tell you stories, then ask them what their stories mean. In the first stage you say something like, "Tell me what happened the last time you woke up in the morning and wished you were sick so you wouldn't have to go to work." This means: I invite you to enwrap your feelings, beliefs and perceptions in the protective social ritual of a storytelling event. I promise to treat your story package with respect and care, as any socialized adult would know to do after giving such a signal.

After the story has been told, in the second stage, you say, "Please answer these questions about your story." This means: Let us look together at the securely wrapped package you have placed on the table between us. My eyes are focused on the package. I am not looking at you; I am not asking you to tell me how you feel. I invite you to focus with me and tell me only what is in this story. Then you say something like, "How strong would you say the theme of 'politics' is in this story? Does it dominate the story? Is it a side issue? A non-issue? Oh, and by the way, when did that story take place?"

If you set dozens or hundreds of such stories and answer sets next to each other, you have just found out far more about what people believe about politics in the workplace than you ever could have by asking them directly. You have mapped the prominence of workplace politics, its relationship to many other issues found in the story packages you gathered, and its change over time. You can use that map to explore the issue of workplace politics in ways that an "instrument" of direct questioning can never provide. (And the people themselves can use it too, if you will let them.)
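If you like to think in code, the kind of map I am describing can be sketched in a few lines. This is only an illustration with invented data -- the answers, the prominence scale, and the time periods are my assumptions here, not taken from any real project -- but it shows how answer sets laid side by side become a pattern:

```python
from collections import Counter

# Invented example data: each tuple pairs the rough time period of a
# story with its teller's answer to "How strong is the theme of
# politics in this story?"
answers = [
    ("years ago", "non-issue"), ("years ago", "side issue"),
    ("years ago", "side issue"),
    ("last year", "side issue"), ("last year", "dominant"),
    ("last year", "dominant"),
]

# Cross-tabulate: count each (period, prominence) pair.
crosstab = Counter(answers)

# Within each period, what fraction of stories rate politics dominant?
for period in ("years ago", "last year"):
    total = sum(n for (p, _), n in crosstab.items() if p == period)
    dominant = crosstab[(period, "dominant")]
    print(f"{period}: {dominant} of {total} stories rate politics as dominant")
```

With six made-up stories the tabulation is trivial; with hundreds of real stories and a dozen questions, the same cross-tabulation, repeated across question pairs, is the map of prominence and change over time described above.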

Questions about stories are nothing like questions about people. Questions about stories communicate respectful attention and negotiated truth, not interrogation and proof. They give participants the freedom to speak at a protective distance from their feelings. You've heard stories about children who were helped by talking about a stuffed puppet when the real subject of the discussion was themselves? Asking questions about stories uses the same approach. It would be difficult to get any adult to speak through a puppet, but most people will speak through their stories. Most people recognize the ancient ritual of storytelling, understand what it is for, and know how to respond. Asking people to tell stories is part of this, but I have come to believe that asking people about their stories is just as important. When I think of this situation I always remember the quote from Oscar Wilde: "Give a man a mask and he will tell you the truth."

Let us say that you don't choose to progress to the second stage. Say you gather stories and interpret them yourself without asking any questions about them. Say you are an expert in workplace politics, and you decide what the stories mean about workplace politics. My experience has been that no outsider can be fully aware of the meanings of stories to those who told them. You simply cannot penetrate to the meaning of a story in context, no matter how expert you are or how well you have studied the population of interest. Every storyteller is the best expert on their own stories.

But it is not necessary to create a battle between storyteller and expert: they can bring their complementary assets to work together. Asking people what their stories mean does not bar experts from considering the stories. In fact I often juxtapose storyteller interpretations with those made by others, both in the community and outside it. Why this is not a battle is another essential strength of stories (and principle of PNI): stories nest. You can compose a story that includes the original story, the storyteller's view of it, the view of others in the community, and your view as an outside expert. Is that not a richer, more intricate story than anything you could possibly come up with as an expert working alone? Surely so. What if your views conflict? What if you interpret the story differently than the storyteller did? All the better. Conflict only makes the story richer and more productive.

Why keep stories raw?

I have spoken to many people who collect stories over the years. Many agree that listening to stories is useful and empowering to people. However, quite a few people have not agreed with my stance that it is better to leave stories alone, to keep them in their "raw" form. They tend to think it is better to improve stories by "cleaning them up" to make them read more nicely. They might remove pauses, restarts, and apparently off-topic additions. They might annotate elements that seem strange, incoherent or wrong. I say: Don't mess with the stories. Why? Let me tell you a few stories about that.

One of the first story projects I ever did was for a company that wanted to think about how its customers perceived one of its products. They had collected some stories from customers about the product, and some of the stories contained some pretty strange rumors about what you could do with the product and what it could do to you, most of them wrong. (You know, X brand of soap can kill your cat, that sort of thing.) The stories were to be given out to company staff so that they could better understand the customer's point of view and help dispel some of the rumors. One of the managers on the project wanted to edit the collected stories to remove all "errors" and replace what the customers had actually said with "facts" about the product. After much pleading I managed to talk him out of doing this, but only by agreeing that he could place a "fact" addendum after each story denouncing what the customer said and setting things straight.

In the end the project succeeded in educating the company's staff about customer perceptions. But if the manager had "corrected" the stories it would probably have failed. Why? Because the collected stories revealed two nested levels of story. Outside the original customer stories grew a second layer of conflict between competing views of truth and fact. The manager wanted to collapse the story nesting down to produce only one factually correct story. If the goal of the project had been to educate the staff about the facts, this might have been a reasonable decision, though you would hardly need customer stories to meet that goal; a lecture would suffice. But for the stated goal of helping staff members understand customer perceptions, retaining the two-level story by keeping the "fact addendum" separate was essential.

Did I see this one coming? Well, I guess I am happy to report that for at least one of these stories my intuition was on target. I remember getting an email from the story-changing manager with his first batch of corrected stories attached, and having a strong visceral reaction to what I saw. It was partly a reaction of respect, that the stories represented the voices of the customers, and the voices should not be silenced. But I also pictured the staff members who were to read these stories. I realized that they would learn nothing if the stories were corrected because the outermost story, the story of conflict, was the story they most needed to hear. Finding a way to communicate this insight to the manager was the hard part of that project.

Here is a second story about raw stories, this one not something that happened to me but to two of my colleagues. For anonymity I'll call them Colleague A and Colleague B. This happened not long after the factual-addendum incident. Colleague A and Colleague B had done a story project in an organization in which stories were collected from, and story elements were created by, two groups: some employees and the managers above them in the corporate hierarchy. After both groups had completed their work, separately, Colleague A attempted to hold a meeting in which the managers were shown both sets of story elements. As I recall it, they had placed large drawings of the story elements produced by both groups (with the help of a cartoonist) around the room on the walls. One of the managers strode up to the drawings and looked them over. Then he returned to the conference table, banged his fist on the table, said, "This meeting is over," and walked out the door before Colleague A had a chance to say another word.

After many attempts, my colleagues managed to arrange a second meeting with the same managers. This time Colleague B tried to explain the story elements to the managers. The same manager who had walked out before launched himself at Colleague B, grabbed him by the lapels of his jacket, and slammed him up against the wall. "You can't come in here and tell me," said the man, "that those people said those things about us." Colleague B, to his infinite merit (I could not have done it), calmly explained that the drawings the man was so angry about were produced by the exact same method as those produced by the group of managers that included the angry man himself. It was only when he heard this that the man relaxed his grip and walked back to the drawings. The room was silent while the man looked at the drawings made by both groups. Then he returned to the table ready to talk about the issues raised.

Why did the manager calm down? Because he understood at last that nothing had been created by the outside consultants. Everything he saw was raw, authentic, real. My colleagues had not composed any stories, nor had they "improved" any. They had only helped the stories get to where they needed to go. Because the results were authentic, the man could respect their sources and listen to their messages. Such authenticity is critical to success in story work. In my experience, if there are any alterations to the actual words spoken, for any reason, the story project is damaged, sometimes beyond retrieval. I have seen several less emotional versions of this story played out in the years since.

There are times when you need to keep some stories away from the larger group because they are particularly inflammatory or malicious, and you do sometimes need to remove identifying details, but you should never disguise or alter the meaningful content of the stories told.

Yes, when a story is created to make a persuasive argument or sell a product, sometimes (but not always) a raw story is not "good enough." But for sharing experiences and arriving at new insights, the best story is a raw story. The irreplaceable authenticity of raw stories creates opportunities for understanding the experiences and perspectives of other people that are impossible to come by any other way. The utility of authentic stories cannot be replicated by design, and both authenticity and utility can be destroyed when stories are modified.

Why catalyze sensemaking?

At the point in the life of PNI in which we now find ourselves, I was working on story projects where we listened to stories, kept them raw and unchanged, asked their tellers questions about them, and asked people to work with their own stories in sensemaking sessions. Somehow, and I'm not really sure how this happened, the projects we worked on got bigger. Instead of fifty or a hundred stories we started helping clients gather several hundred or even a thousand. Maybe the project goals got bigger, or the clients got bigger, I'm not sure. At any rate we started hitting processing problems. People were finding the sheer mass of stories too large to handle. They could not possibly read them all, even though each story was only a paragraph or two long. We showed them how to use the software that helped them look through the stories and answers to find patterns. Some clients rose to the task and found their own patterns, but some clients, maybe the busiest ones or those most used to outside help, wanted us to do that for them. They said, "Can't you just tell us what the stories say? Can't you boil them down for us?"

I was wary, you might even say paranoid, about boiling down stories. As an outsider I could never hope to understand the context of the stories like their tellers, and others in their organization or community, could. I refused to do this for some time, but finally one client was very persistent, or we were very motivated to work with them, or something, and I agreed to look at the stories and answers for the client.

It didn't go well. I found what I thought were strong trends in the data, and I wrote a report describing them. The client did not see what I saw. They were insulted, defensive, angry. They found my report biased and misleading. They responded with attacks on my professional ability and personal ethics. I remember being so upset that I stopped checking email for days, afraid of what I might find next. In the end the project was saved, not by me but by some excellent restorative work by a colleague. (You can read about it in the project story called "Shooting the messenger" in the project stories section.)

That first attempt at adding analysis to the mix represented a major turning point in the work that became PNI. You could even say that project created the method of catalysis, because if it had not happened I might never have questioned the way I had been doing things. But as it did happen, I spent a lot of time thinking about it in the months and years afterward. I never wanted to create those negative emotions again in those I was trying to help. This was not so much because their responses hurt me, but because it destroyed the beneficial effect such a project could have. From the safety of time I can see the immense value that painful mistake brought.

Telling people straight out what I saw and what I thought it meant, as a result of analysis, was not just counter-productive; it was disrespectful to their own clearly superior knowledge of the subject matter. I knew I needed to find a better way. I needed to catalyze thought and discussion, to open things up to possibility, not close them down to defense and attack. Over the course of the next several projects, during which I made other but smaller and less painful mistakes, I gradually came up with the rules I now use and recommend to anyone catalyzing sensemaking. You can read about how these rules play out in the chapters on narrative catalysis, but I will give you a quick synopsis here. I came up with alliterative names for the rules so I could remember them well. They are as follows.
  1. Separate Statements into observations anyone can see, interpretations of what the observations mean, and ideas on actions that could be taken as a result. Make no statement that is not thus identified.
  2. Provide Provoking Perspectives in the form of multiple competing interpretations for each observation. Never tell truths; always provide possibilities.
  3. Maintain Mischief by making the provocative nature of the communication clear and present at all times. When the burden of proof comes near, push it back to where it belongs, in the minds of those using the report to support their sensemaking and decision making.
  4. Explore Exhaustively through all avenues available. Remove the possibility of cherry-picking by examining the whole tree: every fruit, flower, leaf, twig and root. This ensures that the observations offered are everything anyone can see and not a biased subset.
  5. Prepare for Participation by creating reports that facilitate group discussion and sensemaking. This includes keeping things brief and easily taken up so as to spur discussion over digestion.
I have now seen these rules work wonders in dozens of projects. They turn analysis into catalysis, shouting matches into constructive dialogues, and threats to the status quo into aids for planning a better future.

Having seen both sides I can now clearly state my belief. When a project involves the feelings, beliefs and perspectives of human beings, any method of analysis that results in one set of unequivocal conclusions and is constructed outside the community in which the conclusions will be applied will fail. It may not fail right away, but eventually and surely it will fail. Conclusions cannot flourish in foreign soil. Transplanted conclusions may grow for a time, and they may even seem vigorous, above the soil. But that growth is dependent on the artificial fertilizer of strong inputs of energy. When the energy stops the conclusions will die, because their roots are weak.

I know this is a strong statement, and I know many will disagree with it. I have read volumes about the many elaborate contraptions researchers build to manage their controlling influence on conclusions about the feelings, beliefs and experiences of other people. This may be fine for general research whose goal is not related to decision making. But when the ultimate goal is to support decision making for positive change, none of these contraptions work, not in a lasting way. The only options are to keep all analysis within the community or to give up the hammer of analysis for the many lenses of catalysis.

The next obvious question, of course, is whether catalysis itself is an elaborate contraption that doesn't work in the long term. Of course I have thought of that, being the nervous person I am. Am I doing what I accuse them of doing? (As Joseph Heller famously said in Catch-22, "Just because you're paranoid doesn't mean they aren't after you.") The truth is, it is not always possible to follow every rule of catalysis to perfection. All real projects have to deal with real issues of power, control and limitation. Sometimes the person doing catalysis is insufficiently experienced, or those funding the project want more control over the end result, or there isn't time for exhaustive exploration, or you can't get enough people to participate in sensemaking. Still, what I have seen is that the closer a project hews to the rules of catalysis, the better catalysis works, and the better and more long-lasting is the result.

The second story I want to tell you about catalysis is about statistics. As I grew the basic rules of catalysis over the course of several projects (whose results kept improving) I began to be increasingly frustrated by the limitations of simple analysis. By comparing counts of how many people said this or that about their stories I was able to look at trends, but I could not say much about whether a trend was strong or weak. I was doing a lot of what I called "eye-balling" at that time -- staring at graphs trying to make sense of them. I kept picturing one of my favorite Far Side cartoons, "Early microbiologists," where cavemen in laboratory coats sit at tables and peer intently into petri dishes without the aid of microscopes. That was exactly how I felt!

I had taken statistics courses in graduate school (with Robert Sokal, who was not only a leader in the field of biological statistics at the time but also my boss for several years and a good friend). But as much as I felt statistical methods were superlative tools for biological study, I was wary of using them when it came to looking at the feelings, beliefs and experiences of people. They sang a siren song of certainty, and I was concerned that they would lead me back into the lotus-covered land of drawing conclusions for other people. Still, I felt the need to find out what was possible. So after discussion with my colleagues, I added some simple statistical tests to the software we used and tried them out on the next project.

Reader, I misjudged statistics. I misjudged it badly. Statistics can be a good friend to catalysis. To my surprise I saw a step change in the utility of catalysis to project results when I was able to switch from saying, "Gee, maybe these things could be related?" to "The R value of this correlation is 0.52 and the correlation is significant at the 0.05 level." What I failed to understand at first is that the purpose of statistics is to create limited agreement among people who reasonably disagree. In this it is much like storytelling. Each has a set of ground rules, and each operates within those rules to negotiate truths in relative safety. Our collective ability to work within the rules of statistics, that if I follow this procedure correctly we agree to accept this result, parallels and complements our collective ability to work within the rules of storytelling, that if I establish this setting and characters we agree to consider this experience. Yes, statistics presents a particular and narrow perspective on data. Yes, this perspective can be manipulated to control beliefs and perceptions about what has been observed. But stories have the same weaknesses and the same strengths.
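To make the shift concrete: the move from eye-balling a graph to a statement like "R = 0.52, significant at the 0.05 level" can be done with a single standard test. This is a minimal sketch, not anything from the book; the two answer scales and all the numbers in it are invented for illustration.

```python
# A hedged sketch of the kind of simple statistical test described above:
# a Pearson correlation between two sets of answers people gave about
# their stories. The data is hypothetical, invented for this example.
from scipy.stats import pearsonr

# Hypothetical per-story answers on 1-5 scales, e.g. "How strongly did
# you feel while this was happening?" and "How memorable is this story?"
feeling   = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5, 1, 2, 3, 4, 5]
memorable = [2, 1, 3, 2, 4, 3, 5, 4, 5, 3, 1, 3, 3, 5, 4]

r, p = pearsonr(feeling, memorable)
print(f"R = {r:.2f}, p = {p:.4f}")
if p < 0.05:
    print("The correlation is significant at the 0.05 level.")
else:
    print("The trend may be illusory; treat it as a question, not a finding.")
```

In the spirit of catalysis, the result would be offered as an observation with competing interpretations, not as a conclusion.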

The approach I now recommend for catalytic work relies on mixed-methods analysis, an approach that combines qualitative work (essentially, reading the stories) and quantitative work, which includes the statistical analysis of trends in answers people gave about their stories. Why a mixture of qualitative and quantitative? Because this is the natural way to consider quantities of stories people tell and their interpretations of those stories. Reading the literature on mixed-methods research is like reading my own writings on narrative work. Here is Jennifer Greene in her book Mixed Methods in Social Inquiry:
A mixed methods way of thinking involves an openness to multiple ways of seeing and hearing, multiple ways of making sense of the social world, and multiple standpoints on what is important and to be valued and cherished. A mixed methods way of thinking rests on assumptions that there are multiple legitimate approaches to social inquiry, that any given approach to social inquiry is inevitably partial, and that thereby multiple approaches can generate more complete and meaningful understanding of complex human phenomena. A mixed methods way of thinking means genuine acceptance of other ways of seeing and knowing as legitimate. A mixed methods way of thinking involves an active engagement with difference and diversity.
Sounds like I wrote it, doesn't it? That tells you something.

There is one problem with the use of statistics in catalysis: not everyone can do it, or do it well. Yes, this is a barrier. But catalysis is not an essential component of PNI. I place it in the optional triangle because it requires additional skill and preparation. It enhances PNI but it is not required. Also, statistics in catalysis is like spice: a little goes a long way. Isn't that just like a good story?
