
Saturday, February 1, 2025

More progress


Hello big crazy world. I have just finished updating the Sensemaking chapter of Working with Stories for its fourth edition. It took months, but it's done, and it's a lot better than it was.

Only three chapters left! After those are done, probably in March or April, I will release the first draft of WWS4 and the third draft of Working with Stories Simplified on workingwithstories.org.

After the first two books are done, I'll finish the last two books (WWS Sourcebook and WWS In Depth). I expect that to take until sometime in the summer. At that point all four books will be available on the web site.

Then it's on to publication: finalizing pagination, adding indexes, checking bibliographies, and so on. I always seem to need to read through proof copies of my books five times to find all the typos and imperfect sentences, so I guess I'll be doing that again. I hope to have all four books available on Amazon (in print and Kindle versions) by sometime in the fall. I don't make that much on Amazon purchases, but every little bit helps.

A nascent possibility

I am considering the prospect of rebooting my PNI Practicum courses, starting in the summer or fall, with a new open enrollment system. Instead of publishing course start dates, I would ask people to tell me which course they want to take and when they want to take it (in what months, on what days, at what times). I would keep track of these requests in a sort of waiting list. If and when any of the lists got up to eight people, we would all meet to decide when our course meetings would happen.

I would probably offer the courses at the same prices I used before, with a new price for the new short course (Prelude $600, Level I $1600, Level II $2000). If I could fill up one course of each type per year, I could probably keep giving the courses for some time. 

Of course, now that I've written and released the instructions for running the courses, nobody needs me to run them; they can just download the materials and take the courses on their own. I'm glad I did that. I hope people are using them. 

At the same time, I think it's at least possible that some people will want to take courses I run myself. That's the business model I've been using for a long time: information wants to be free, and bespoke advice wants to be compensated so it can continue to give information away for free.

Let me know if you have any suggestions about this idea. 

Some other changes

I have made two decisions that will impact how the WWS books will look and work, and I wonder if anyone would like to tell me how they feel about them.

WWS chapter-ending summaries, questions, and activities. I have been putting off updating these parts of Working with Stories until I finish rewriting all of the chapters. But I am starting to think that it would be better to leave them out of the next edition.
  • Working with Stories Simplified is a chapter-by-chapter summary of Working with Stories. So I don't really need end-of-chapter summaries anymore.
  • In the ~12 years since I wrote the chapter-ending questions and activities in WWS, nobody has ever mentioned them to me. People mention lots of parts of the book, but nobody has ever mentioned those. So I think they must not be very important.
  • I can finish the book faster if I leave out those parts.
  • People should not be doing "activities" when they are reading WWS. They should be doing projects. Also, there are many project-related activities (things to try) in the texts of the chapters themselves (and there are more now than there were then).
  • Originally, I wanted to pose questions to get people to think about what they were reading (as opposed to following it by rote). But I do have lots of questions scattered throughout the chapters.
  • Leaving out the chapter-ending parts would take about 20 pages off WWS. The page count is hovering around 600 pages right now. I was hoping it would go down to 500, but given the extra white space and the new writing, I don't think that's going to happen. Still, I would like to keep it down as much as I can.
WWS-S photographs. Ever since I started thinking about writing Working with Stories Simplified, about three years ago, I have called it "the picture book." Now that I am working on (the third draft of) WWS-S (as I work on the fourth edition of WWS), I am starting to hate its photographs.
  • The basic idea of WWS-S was to write a book for people who hate long books so much that they can't bear to even begin to read WWS. (I have met quite a few of these people, and I want to respect and help them.) But photos add a lot of pages. I would like to get WWS-S down to 150 pages, or even 100. Right now, with photos, it's at 220.
  • What looks right in a slide show doesn't look right in a book. For one thing, to fit the pictures into the book, I have to shrink them down a lot, and it is hard to see the details. For another thing, the visual style of the photos is all over the place. It looks messy and maybe even amateurish.
  • Some of the photos are not very illustrative. I tried to find the best photos I could find to convey each concept, but I have to admit that I didn't always succeed. The overall effect is scattershot: sometimes helpful but sometimes confusing and distracting.
  • To use the photos I will have to print the WWS-S interior in color, and that will reduce its benefit as a shorter, cheaper version of WWS. I was resigned to the extra cost when I thought the photos added a lot of value, but now I'm not so sure.
  • I have been experimentally taking photos out. For roughly 80% of them, the removal feels like a relief, like an obstacle to understanding has been removed. For the other 20% it feels like a loss, like an aid to understanding has been removed. But if I print the book interior in black and white, I think that 20% will go down to 10%. So I am thinking that I will keep only the very best photos, the ones that still feel necessary in black and white.
  • Of course the best solution, quality-wise, would be to replace the photos with drawings. I could do that, but it would take a long time. This is volunteer work, and I have a limited budget.
If you have any opinions or suggestions about these decisions, I would absolutely love to hear them. I am eager to get these books done and out into the world, all grown up, living their best lives, being read and used by people who want to make things better for everyone.
 
Finally, I'd like to say a special thanks to everyone who has been helping me out with feedback and encouragement so far. I appreciate it very much.


Saturday, July 6, 2024

One down, three to go

I have just uploaded my finished book manuscript for Working with Stories Simplified. You can download it on the "More" page at workingwithstories.org. 

Now an update and an explanation.

When I wrote here last, I was looking for a job, having given up on making (enough) money from consulting and online courses. 

I looked for work full-time for about four months. It didn't work (difficult job market, weird resume), so about two months ago I decided to stop wasting my time. It looks like I will have to either reboot my consulting career or find a job unrelated to what I've been doing for the past 25 years. I've already tried the reboot option (that was the online courses), so I think it's time to do the second thing.

I'm fine with that, but I can't do it with all of this knowledge still stuck in my head! So I talked to my husband, and we decided to use some of our retirement savings to finish my four-book revision of Working with Stories. So I'm doing that, and unless I get a job (or a lot of consulting) doing PNI soon, this will be my final contribution to the field. It's time to hand PNI over to a new generation of thinkers and doers, and I'm enthused about getting all four books done at last. 

Next I will work on the fourth edition of Working with Stories, updating it to reflect everything I've learned over the past ten years and trimming out some of the less-used parts. When that's done, I'll finish the Sourcebook and the In Depth book. My plan is to finish all of this by the end of this year. Then I'll be ready to turn the page and see what comes next.

I do not plan to publish any of these books on Amazon (print or Kindle) until I have finished all of their content. But I'll be putting them up on my website in PDF format as soon as they are ready to read. (If necessary I can hire someone to do the technical parts of the publishing process.)

Wish me luck! If you have any comments on the new Simplified book, or if there is anything you think I should change (or add or remove) in any of the WWS books, now is the time to tell me.




Friday, February 23, 2024

New paper, new network name

Hey everybody. I wanted to tell you about a new paper on Participatory Narrative Inquiry that has just come out. Written mostly by my colleague Rachel Colla (though I helped a little), the paper makes a strong case for the use of PNI in Wellbeing Research. I was so impressed by how Rachel pulled together the growing body of research literature connected to PNI. It's worth a read!

Over at the Participatory Narrative Practitioner Network, we have decided to drop the last part of our name. Now we are just the Participatory Narrative Practitioners. We have also switched from Zulip to Discord for our ongoing chats. For the time being we are using Discord for our monthly meetings as well. The best thing about Discord is that the voice-chat line is always open, so we can meet up anytime anyone wants to. To join us, visit pnpnet.org and click the Discord link.

My job search is going pretty well so far. I sure hope people are reading my cover letters, because I am putting a lot of thought into them! If I have not already asked you for advice or to be a reference, and you'd be interested in either thing, please do drop me a line.

Monday, May 1, 2023

New course and book developments

I have updates for you. 

Working with Stories is changing

I spent most of 2022 writing materials for my new PNI Practicum courses (about which more below). 

  • I wrote a "picture book" version of Working with Stories, one that says the same thing but with far fewer words and a lot more diagrams and photographs. 
  • I wrote a "story form library" of 36 question sets to suit a variety of PNI project goals.

Both of these resources have been appreciated by my students. However, I don't want to keep them behind a paywall. It's not how I do things. So this year, while running the courses, I have been working on getting the new materials out into the world. 

I've been thinking about what people need when they are learning how to do PNI. And I've been thinking about what people have said to me about Working with Stories over the past eight years. Based on what I have learned, I think WWS wants to be four books: 

  • the original book (trimmed and updated)
  • a shorter, simpler version (the picture book)
  • a resource library (story forms and case studies)
  • a book with abundant details (for the nerds who want everything)

So that's what I'm working on - when I'm not working on the courses that are going on right now, that is.

I think it's going to take me at least several months to get all of these books ready to publish. So I decided to put them up in draft form now while I work on finishing them. You can find the new books on the More page of the WWS web site. 

These are the new book cover designs (so far). Coincidentally, the original book has four pictures on the cover. It's almost like WWS knew it wanted to be four books before I did.

Four new WWS books

So: have at the new stuff, and please send feedback.

The PNI Practicum courses are changing

The PNI Practicum courses are going very well. People are doing lots of wonderful projects, and we are all learning and making useful mistakes together. For myself, I have learned a lot about how to give online courses (well, how not to give online courses; but that works too). So my next set of courses (starting in July) will incorporate many changes.

Some changes have to do with what will happen in the course.

  • Calls will be recorded. I wasn't sure if people would want to be recorded in our course meetings. Turns out they do. So, all Zoom calls are now being recorded and are available to all students for review as long as the course goes on. So if you miss a call, you can see and hear what happened in your absence.
  • We will use Miro. I had wanted to show people that you can use a variety of online tools to facilitate interactive sessions. But Miro is so much better than every other option that I'm switching to it entirely.
  • Students will make presentations. To give people better opportunities to practice selling PNI to their participants and funders, in the next set of courses, each student will be asked to make two brief presentations to the class. 
    • Early on, each student will pitch their chosen project as if they were soliciting approval for it and participation in it.
    • At the end of the course, each student will make a brief presentation on what happened in their project: its goals, plans, challenges, surprises, and outcomes.

Some changes will be structural.

  • There will be a mid-course break. From now on, there will be a one-week break between parts 4 and 5 (weeks 8 and 9) of each course. This will help people catch up if they have fallen behind, and it will give us all a spring or fall break.
  • One meeting time will be different. In the next courses, our Zoom calls will happen at 1700 and 2100 (was 2300) UTC. This should help when people in farther-apart time zones want to be on the same calls.

Some changes will be to the course requirements.

  • NarraFirma will be required in the II-level course. I had been making a special effort not to require the use of NarraFirma, in case people wanted to use other things. However, all of my students thought I should require everyone to use NF, so everyone can learn how to use it together. So now, if you want to take the PNI Practicum II course, you will need to use NarraFirma. (The I-level course still requires no particular software.)
  • Course fees will need to be paid two weeks ahead. To avoid last-minute scheduling difficulties, anyone who wants to take either course must pay the full course fee two weeks before the course starts.

And finally, I have made some changes to how I will promote and manage the courses.

  • The course syllabi will be available to review before you sign up. I have posted the syllabi for the two courses on the PNI Practicum web site, so you can see what is going to happen in each part of each course.
  • Refunds will be pro-rated. If you need to drop out of a course for any reason, I will refund your course fee on a pro-rated basis, counting how many weeks you have attended (and not counting the first week, which is covered by the nonrefundable deposit). However, I ask people not to take signing up (or dropping out) lightly, since dropping out will affect the peer learning experiences of everyone else in the course.
  • There will be a new 6-student minimum. If either course does not have at least six people signed up (and paid in full) by the time the course fee is due, the course will not run, and I will send out refunds (including of deposits) to those who have signed up.
  • There might be an extra course. If either course fills up completely and people still want to take it (if that happens to you, tell me), I will open up one more course of that type. If at least 6 people want to take that course, I will give it.
If you have any questions about the PNI Practicum courses, or if you have any suggestions about Working with Stories, reach out via email (cfkurtz@cfkurtz.com).



Monday, April 10, 2023

Here I am talking to Madelyn Blair about stories

Hey everybody. Recently I had a wonderful conversation with Madelyn Blair, one of my role models in getting out there and doing things in the world. It was about stories and working with them, and it was part of an episode of her Unlocked TV show.

(Watch out, the music starts suddenly. Made me jump.)

For those who are new to story work, you might find our conversation informative. For those who know me and my spiel well, the conversation will probably be pretty familiar - it's the same stories I always tell. Though of course I keep polishing them, don't I, and that can be interesting in and of itself. 

That's one of the things I find most interesting about stories: they have stories. I try to remember this when somebody starts telling me a story I've already heard ten times. When I catch myself thinking, "Ugh, there they go again," I (try to) challenge myself to think, "Ah, it's that story. I remember it well. I wonder what it's been up to since I last heard it." I don't always succeed in meeting this challenge, but when I do, I always find out something new.

It's like being in the woods. Even though I have spent time in my particular bit of forest a thousand times, I find that if I can be quiet and pay attention to it for at least fifteen minutes, something new always happens. Sometimes it is something as dramatic as an owl teaching its baby how to fly, a squirrel rushing up and reading me the riot act, or a woodpecker poking its way up and down a tree. Sometimes it's something as simple as a conversation between birds, an operatically creaking tree, or a busy insect going about its workday next to my boot. And sometimes the thing that happens is in my own mind. I hear or see something, and it brings up something new and different. That's an event too. 

Things are always happening, fascinating things, even in the things I know very well. The trick, I find, is to stop not noticing them. I don't know if that's helpful to you - maybe it's just more sighing of my branches - but here I am writing it anyway, because I'm here, because you're here.

Thank you, Madelyn, for inviting me onto your program. I enjoyed the experience very much.

 


Friday, February 25, 2022

Mail bag: How many stories?

Last week somebody asked me a question via email that I've already answered lots of times: How many stories should a PNI project collect? 

I was about to say "it's on page whatever in my book," like I usually do, but then I thought -- why don't I write something new this time, just to see what happens? I'm glad I did, because I think my answer is getting better as I keep doing more projects. Anyway, here's what I wrote. Maybe it will be helpful to you as well.

The "how many stories" question comes up often when people are planning story projects. The answer is a bit complicated, but it depends on six things: issues, ambitions, abstractions, experiences, engagement, and people.

Issues: One or many?

If you want to talk about one big, simple issue, you need one set of stories. However, if you want to talk about multiple issues, or one very complex issue with a lot of other issues embedded within it, you need more stories. 

One way I figure out whether an issue is complex is to keep asking "And what issues lie within that?" and stop when the answer is "there aren't any issues within it."

Whatever number of stories you plan to collect, multiply it by the number of discrete issues you want to talk about. For example, if I wanted to help people talk about jobs and homelessness, I would gather two sets of stories (with some common questions to tie them together), so people can explore each issue with the depth it requires.

Ambitions: Exploratory or in-depth?

If you want to:

  • prove without a doubt that something is happening (in a way that cannot be dismissed),
  • represent the voices of people who have not been heard (in a way that cannot be ignored), or
  • help people think through an issue deeply enough to arrive at useful conclusions and plans (in a way that will not fall apart later on),

then you need more stories than if you just want to explore a topic and see what happens.

Ambitious projects need 2-4 times as many stories as exploratory projects. In an ambitious project, the patterns in the stories must be obvious, plentiful, and complex enough to be explored in depth. In an exploratory project, it's okay if the patterns are just interesting hints at things people might want to explore more fully in the future. 

Abstractions: Concrete or vague?

If you want to explore abstract issues that are difficult to explain in ordinary words, you will need more stories than if you want to explore simple, concrete issues. 

For example, say you want to know how people feel about the new traffic lights in your neighborhood. You can just ask people how they feel about the new traffic lights in your neighborhood. But if you want to explore how your community is building resilience for a 21st century future, or some other string of jargon that means a lot to some people and nothing to others, you might have trouble gathering relevant stories. Most likely, you'll get a lot of "scattershot" stories based on people's guesses as to what you might be asking them to talk about.

A good test is to write down a question you would like to ask people, then translate it into simple, everyday language. Search for the "1000 most common words" in whatever language the question will be in, then remove all the words in the question that are not in that list. Then ask yourself: if you frame your question in common words, will the stories told in response adequately address the issue you want to address? If yes, just ask the question that way, and you're fine. You won't need extra stories.

But if rephrasing your question with common words will push it far away from the issue you want to address, then you will need to collect more stories, so that some of the scattershot stories you collect will fall onto your target.
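If you want to automate that common-words test, here is a minimal sketch. The COMMON_WORDS set below is a tiny hypothetical stand-in for a real "1000 most common words" list (which you would look up for your language); the function name and details are illustrative, not part of any tool I distribute.

```python
# A rough sketch of the "common words" test described above.
# COMMON_WORDS stands in for a real 1000-most-common-words list;
# this tiny subset is only for illustration.
COMMON_WORDS = {
    "how", "do", "you", "feel", "about", "the", "new", "in", "your",
    "is", "our", "for", "a", "what", "people",
}

def keep_common_words(question: str) -> str:
    """Drop every word in the question that is not in the common-word list."""
    words = question.lower().replace("?", "").split()
    return " ".join(w for w in words if w in COMMON_WORDS)

concrete = "How do you feel about the new traffic lights in your neighborhood?"
abstract = "How is our community building resilience for a 21st century future?"

print(keep_common_words(concrete))  # "how do you feel about the new in your"
print(keep_common_words(abstract))  # "how is our for a"
```

The concrete question survives the filter mostly intact, so it needs no extra stories; the abstract one loses nearly all of its meaning, which is the signal that you should either rephrase it or plan to collect more stories.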

Experiences: With stories, or with stories and patterns?

If you want people to meet in rooms, share stories, and do some sensemaking exercises together, you can gather as few as twenty stories per session. You might do that a few times within a project, but as long as it's people talking, you can see and work with patterns in a few dozen to several dozen stories.

On the other hand, if you want to do what I call catalysis (which is just analysis without the definitive conclusions), you need at least 100 stories to start finding statistical patterns in your data (answers to questions about stories). At 100 stories most such patterns tend to be weak. At 150 or 200 stories patterns are stronger (and less likely to be considered fake or irrelevant). I get pretty nervous when I have to do catalysis with only 100 stories to work with. At around 200 stories I start to feel more comfortable, because the patterns I find are easy to see and talk about (without worrying that people will say "there's nothing there"). 

This more-is-better trend continues until about 600 stories, when you start running into diminishing returns. At that point you are better off using your time to collect stories on a different issue (unless, of course, some other aspect of this list means you need to push the number up for other reasons).

Catalysis is not important to, or even advisable for, every PNI project. Sometimes you do need to generate a lot of graphs and statistics. But sometimes you can get the same result with fewer stories by having people work with the stories directly, in sensemaking exercises. It all depends on what sorts of experiences you want people to have.

I always advise people to imagine the people they want to help or reach (whoever they are) responding to patterns in the stories and other data they plan to collect. If you can picture those people looking at graphs and statistical patterns and saying, "Oh, wow, now I get it," then you will want graphs and statistics to show them, so you need catalysis.

But if you can picture the same people saying the same things because they are working with the stories directly (i.e., without any graphs and statistics), you don't need catalysis. In fact, it might be a bad idea. It might waste time you can use for other, more important things, like talking to more people, holding more sessions, covering more issues, getting more stories to more people, helping more people learn how to gather and work with stories, or iterating over the project more times.

On my web site I have an excerpt from a catalysis report which a client allowed me to share. If you look at it, you can see what the patterns that come out of catalysis tend to look like. If that seems like it would not be useful to your project, you don't need catalysis, and you don't need hundreds of stories. On the other hand, if that sort of report seems like just the kind of thing you need, then you can look at the numbers listed on the second page of the report. Those are typical numbers for projects that support catalysis well.

Engagement: Deep conversations, or messages in bottles?


A lot of "what works" in story work has to do with facilitation and engagement. I once saw a project with 80 stories work far better (in the sense of generating more useful insights) than a project with 1600 stories. 

  • The stories in the first project came from a group session with 20 people that was carried out by an expert facilitator who helped the people in the room feel welcome, safe, and heard. As a result, the people really spoke to the issues, and their stories and answers to questions contained many striking insights. 
  • The second project used a web form that had embedded in it some constraining expectations about what respondents ought to say. Those 1600 people said more surface-level things, so even with 20 times more stories, less useful insight came out of the project. It was still a good project, but it did not explore its issues as deeply as the project with 80 stories.

So there is a quality-quantity balance. The more quality you can get in your stories (in terms of how deeply and authentically people can explore the issues at hand), the fewer stories will provide the same result. Conversely, if for some reason you cannot gather quality stories (maybe people are reluctant, or you can't talk to them in person), a greater quantity of stories can make up for it, to some extent. 

On some projects, quality is the primary constraint (so you need more stories), and on other projects, quantity is the primary constraint (so you need deeper engagement in the stories you can collect).

People: Small or large community? Small or large need?

The more people you want to listen to, and the more the people in that group need to feel heard, the more stories you need to collect. Participatory story work never results in statistical sampling (because it's self-selecting), but you do need more stories to talk about issues in a community of 10,000 than in a community of 100. And you need more stories in a community with a strong need to be heard than in a community where people have already had plenty of chances to speak up.

My general rule is that if at least 20% of the people in any community have shared stories in a project, people tend to feel that the collected stories are representative of the community. In cases where people in a community feel especially unheard, that percent has to go up, maybe to 30% or 40%. The story collection also has to be balanced to represent all relevant viewpoints, but that is the shape of the collection, not its size. 

Having said that, a rule of thumb based on percentage doesn't work as well if the population is huge. If, say, there are 50,000 people in a community, hearing from 20,000 of them might pose logistical problems. I have seen story projects collect 10,000 stories, but it's not the norm. Most projects have fewer than 1000 stories, just because the people doing the projects have limited time to gather and work with the stories.

In the case of a larger community, it's reasonable to say that 20-40% of the community should be invited to share stories. After all, it's more about who is allowed to speak than who actually speaks.

Web-based surveys tend to get a 5-10% response rate, so if 20,000 people are invited to speak, you would get something like 1000-2000 stories, which is doable logistically.

If stories are collected in person, in interviews or groups, it's hard to get 1000 stories, even if you invite 20,000 people. It takes more time and energy to come to a session or interview than to fill out a web form, so instead of a 5-10% response rate you will tend to get more like 1%. On the other hand, stories gathered in interviews and sessions are so much deeper and richer than web-collected stories that smaller numbers of stories may not be a problem (see above).

Another thing is that, if a project contains multiple sub-projects that explore different issues (also see above), they can together add up to hearing from 20-40% of the population, even when the population is large. You can link sub-projects together by using some common questions. If you do that, you can get to huge numbers of stories, spread across sub-projects within a larger, overarching project.
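The six factors in this answer can be pulled together into a back-of-the-envelope sketch. Everything in this function is my own illustrative translation of the numbers above (a few dozen stories for exploration, 2-4x for ambition, ~200 for comfortable catalysis, ~20% of the community heard from, diminishing returns past ~600 per issue); the function, its name, and its defaults are hypothetical choices for the sketch, not a formula you should follow blindly.

```python
def estimate_story_count(
    issues: int = 1,              # discrete issues; multiply stories per issue
    ambitious: bool = False,      # ambitious projects need 2-4x (3x used here)
    catalysis: bool = False,      # catalysis wants ~200 stories to feel solid
    community_size: int = 100,    # people in the community you are listening to
    heard_fraction: float = 0.2,  # aim to hear from ~20% of the community
) -> int:
    """Rough story-count estimate combining the heuristics in this post."""
    base = 30  # a few dozen stories per exploratory sensemaking effort
    per_issue = base * (3 if ambitious else 1)
    if catalysis:
        per_issue = max(per_issue, 200)  # where statistical patterns get strong
    total = per_issue * issues
    # Representativeness: also try to hear from ~20% of the community.
    total = max(total, int(community_size * heard_fraction))
    # Diminishing returns set in past roughly 600 stories per issue.
    return min(total, 600 * issues)

print(estimate_story_count())                          # small exploratory project
print(estimate_story_count(issues=2, catalysis=True))  # two issues, with catalysis
```

As with everything else here, the point of the sketch is the shape of the reasoning, not the exact numbers: issues multiply, ambition and catalysis raise the floor, community size sets a representativeness minimum, and diminishing returns cap the top.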

 



Thursday, December 27, 2018

A little bit of history repeating

(Image: Wikimedia Commons, Hubert Berberich)
So a few weeks ago, I was looking for something on my "old stuff" hard drive, and I ran into some essays I wrote around 1988. That was back when I had my IBM portable PC. It weighed thirty pounds and had a little orange-text screen, and it was a pretty good heater if you sat in bed with it on your lap (and legs).

Anyway, as I said, the other day, instead of looking for whatever I was supposed to be looking for, I started reading these old essays. And I noticed something strange about them. They were written around the time I discovered complexity theory and roughly a decade before I learned anything about stories. At the time I thought I'd be an ethologist (animal behaviorist) forever, and I gave little thought to my own species. But here's the strange thing. Those old essays sound a lot like the things I've been writing lately about participatory narrative inquiry. I think you might be interested in hearing about that.

I've always thought PNI started during my two years at IBM Research (first during my explorations of questions about stories, and then when Neal Keller and I created what we called the "Narrative Insight Method"), then developed further through my research and project work with Dave Snowden and Sharon Darwent. But I wrote those essays ten years before IBM. I wasn't thinking about stories, or even people, in 1988. I was thinking, however, about how organisms of every species look back on their experiences and make decisions.

I'll show you some of the writing so you can judge for yourself, but if this connection is real, it means that at least some of the roots of PNI go back not twenty but thirty years, to the days when complexity theory changed the way I thought about social behavior. And if that's true, it raises the possibility that PNI developed because it is the inevitable result of taking complexity into account when considering the behavior of social species such as our own.

On hierarchy as help and hindrance

This is the first of the three essays I think you might like to read. It came directly out of a feverish run through some books and articles about complexity and chaos.


Multiplicity. Even the word is too long. Have you ever sat very still and thought about how many there are of everything? Try it for a while — but only for a little while, because it's dangerous. You can go in either direction; the confusion is marvelous in both the infinite and the infinitesimal. Think big: towns, nations, worlds, galaxies. Think small: bodies, molecules, electrons, empty whizzing space. Space in either direction.

It's a paradoxical result: the contemplation of complexity leads to the homogeneity of the void. Everything there is turns out to be only a small part of everything there isn't. If the universe were made of numbers, most of them would be zero.

So here we are, a bundle of neurons and some other cells, in the middle of this complex void. We, among all the animals, have the ability to see outside our native scale to other measures of time and space. How do we cope? How do we read the mail or shop for food without suddenly, paralyzingly, confronting the enormity of it all?

The answer lies in a special feature of the human mind that seems to have evolved specifically to deal with the burden of awareness: hierarchy. We organize things. We divide time into centuries, years, seconds. We divide space into light years, kilometers, microns. Think about anything we experience, and we will have arranged it hierarchically. What is a child's first reaction to a number of blocks? To pile them up. To make, not a group of equal components, but a smaller number of nested units composed of those components. In hierarchy lies safety.

It is precisely for this reason that it is necessary, at times, to put away the crutch of hierarchy and try to stand unaided on the shifting sands of complexity. Maintaining an awareness of other-than-categorical connections among elements of disparate origin requires that we — sometimes, temporarily — place them all on the same level. To discover similarity in the shape of a leaf, a differential equation, and the swoop of a flute, we must suspend our hierarchical definitions and allow new connections to leap up from a flat sea of perception.

As a visual image, I like to shape each piece of information into a tiny sand grain in a flat wide desert. All are equal; all contain only the crucial property of being observed. Then experience, intuition, and thought, like a warm wind, catch up these grains and form them into new and ephemeral patterns of truth.

Letting the mind loose in this way, by consciously breaking down some of the barriers that subdivide our experience, allows our integrative genius to work on the raw material of reality and produce exciting results.



Grand Canyon, Wikimedia Commons, Pescaiolo
I remember the image that was foremost in my mind when I wrote that essay: the Grand Canyon (in the Western US). I spent a lot of time in those years thinking about making sense of complexity, and I kept going back in my mind to the times I had visited the Grand Canyon and had been stopped in my tracks by its complexity.

How is it possible, I wondered, to live in full awareness of the complexity in the universe? In its enormity, its detail, its mesmerizing intricacy, its worlds within worlds? Must we become numb and stupid to carry on in the face of such wonder? Can we?

And I remember how I solved the conundrum — or rather, how the solution came about, because it was more of a reception than a creation. One day, in the midst of this dilemma, I was eating a sandwich while contemplating the blades of grass in a field (another Grand Canyon) when the answer suddenly came to me: The elements. The alphabet. The types and categories of things. In the Bible, Adam gives names to the types and categories of animals. Why does he do that? Because he has to figure out some way to live in a sea of complexity. So do we.

We cannot cope with an inconceivable number of things, but we can cope with an inconceivable number of combinations of a conceivable number of things. Focusing on the classes instead of the instantiations makes it possible to live life without being overcome with awe. The hierarchies we create are the fictions we need to stop our over-developed awareness from damaging our sanity. From this perspective, what Plato was after was not truth itself, but fiction whose purpose is to help us cope with truth.

Just look at how our hierarchies help us. The alphabet shapes the wild sounds we make and hear into neat, predictable groupings. The periodic table (and the types and categories of stones) makes the Grand Canyon not only bearable, but enjoyable. Biological nomenclature corrals the countless hordes of beasts and vermin into compact species, nested inside genera, families, orders, phyla, and kingdoms. The laws of physics transform the shocking realities of physical life — rushing, falling, colliding — into manageable formulas. Wherever we find unpredictable complexity, we build predictable, complicated maps to help us make our way through it. Without those maps we would be lost.

But the solution of complication comes with a price, and the price is amnesia. At the start, our maps are conscious creations, and we discuss and experiment as we refine them to suit our needs. But eventually, inevitably, we forget that our structures are fictions and our conditions are choices, and our maps become our prisons. Every map we build becomes the territory it once represented, and only in the places where it has worn bare can you see the reality that still lies beneath it.

How this idea influenced PNI

The fingerprints of this idea are all over participatory narrative inquiry. To begin with, all PNI projects start by suspending assumptions about "the way things are" and preparing to listen to the way things really are, in the minds of the people who have lived through whatever it is we want to think about. This is nothing less than the deliberate destruction of hierarchy — temporarily, thoughtfully, and for a reason. We roll up the map and put it aside, and we walk unaided on the ground.

I have said before that when you listen to someone telling you a story, you have to listen past what you thought they were going to say, past what you want them to say, and past what they ought to say, until you get to what they are actually saying. In practice, this means that in PNI we don't address research questions or gather information or question informants or apply instruments. We start conversations, and we listen. We let people choose the experiences they want to tell us about, and we invite them to reflect on those experiences with us. The way we set up the context for the conversation, the questions we ask, and the way we ask them — all of these things work together to push past the structures of our lives to the reality that lies beneath them.

We are not, of course, so deluded as to believe that we succeed in this entirely. Every PNI project both succeeds and fails at getting to ground truth. But we try and we learn. I learn something new about engaging participants and helping them delve into the insights of their experiences on every project I work on; and so do all of us who are doing PNI work.

The idea of temporarily and purposefully dissolving structure comes up again in PNI's technique of narrative catalysis, where we look at patterns in the questions people answered about their stories. One of the rules of catalysis is to generate and consider deliberately competing interpretations of each pattern we find. As a result, catalysis never generates answers or findings, but it always generates questions, food for thought, material for making sense of the map in relation to what lies beneath it.

Sensemaking is the place in PNI where the map and the land come into the strongest interaction. It is in sensemaking that the map is rolled out again, but (to extend the metaphor) with a group of engaged people standing under it, actively mediating between the map and the land it represents, negotiating, adjusting, rewriting. When PNI works well, the map emerges from sensemaking new-made, better, more capable of supporting life — until the next time it needs updating.

So you could say that PNI is a solution to the solution of life in a complex world. It's that little spot of yin in the yang that makes the yang survivable.

Is PNI unique in this? Of course not. Lots of methods and approaches do similar things for similar reasons. All the same, I find it fascinating to realize that the roots of PNI stretch further back than I thought they did, and further out than social science or psychology or, really, anything human. I knew nothing about sensemaking (in the way Weick and Dervin wrote about it) back then; but coming from the study of social behavior in a variety of species, I arrived at a similar place. That's just . . . cool.

On optimality and incomplete information

Here's the second essay. This one was from a little later, when I was over having my mind blown by complexity theory and was starting to use it to hammer away at foraging theory (the particular part of ethology I found most interesting).



When biologists speak of the use of information by animals, they usually consider the question of what an animal should optimally do given that its information is less than perfect. In my opinion, the study of "imperfect information," as it is called, has been marred by two problems.

First, information has always been assumed to be about the environment. But if one considers the totality of information that could possibly be used to make decisions, it also includes information about the internal state of the individual making a decision and information about how the environment affects the future internal state of the individual.

Second, studies of imperfect information have a hidden assumption of awareness that may or may not be realistic. They ask the question of what an animal should do based on its knowledge that its knowledge is incomplete. For example, Stephens and Krebs (1986) ask, "How should a long-term rate maximizer treat recognizable types when it knows that they are divided into indistinguishable subtypes?"

Do we have any proof that animals are at all aware that the information they hold is incomplete? Is not the knowledge of the inadequacy of one's knowledge a type of information in itself, a type of information that we cannot assume animals have access to? I would hold that animals always act as if they had complete information, since they cannot know that their information is incomplete. The question then becomes one of constrained optimization within the information base available.

More interestingly, the behavior of animals acting optimally with incomplete information is then removed from its promise of being optimal in the overall sense, in the sense that the animal always performs the correct behavior for the conditions at hand. This should more closely approximate real behavior than theories that assume knowledge of ignorance. In other words, knowing that you know nothing is knowing something, and this is something that we cannot assume animals know.

If you look at incomplete information in this way, it is a lot simpler. Optimization just becomes optimization under a blanket of uncertainty, and is no longer especially correct or adaptive. Maximally optimal organisms might still make wrong decisions based on incomplete information, because optimality and infallibility might not always be perfectly linked. This means that we should watch not what should evolve, but what does evolve given the amount of information available (including information about what information is available).

Which leads into my next point: that the value of increasing information is not necessarily monotonically increasing. And that there are types of information we don't consider, such as internal information (where I am coming from) and relational information (how it all fits together).

It is a point of constraints. Evolution optimizes behavior inside of the constraints of what an animal can possibly know. But natural selection doesn't know that animals don't know everything. Obviously any animals that are aware of their inadequacy will win out over others that always think they are right; caution should win. But how does caution evolve? If there is a population of animals eating two prey items which they cannot distinguish (say noxious models and good mimics), and one organism evolves that knows that it cannot know which are models and which are mimics, then by definition it knows that models and mimics exist, which is distinguishing between models and mimics. Right?

Or if a population exists which samples from a distribution of prey energy amounts, and one individual evolves that knows that its sample is not completely representative of the universe of prey types, then does it not know something about the universe of prey types (if only that it is or is not adequately represented by the sample) that it by definition cannot know?

In statistics, we take a sample of a universe of data and hold it to be representative. We know that it should be representative because we have some idea of the larger universe from which we selected it. The point is that we have selected the sample. I don't think animals select a sample. I think they only have access to a sample.

Animals live local lives. They cannot know that the prey types they encounter are only one percent of the prey in a particular forest, or 0.00009% of all the animals of that species. They can only see what is given to them. Therefore they are not aware that any more exists. To them, the sample is the universe, and they base their decisions on it. They may have some uncertainty, but they cannot quantify it as we do when we know that our sample is 9% of the universe. What way of telling the size of the universe do animals have? None. Perhaps they have a rough idea that 90 bugs is not a good sample, but does not the number of bugs change all the time?



That second essay ends a little abruptly, doesn't it? I don't remember why. Anyway, that idea grew into my master's thesis, which would have grown into a Ph.D. dissertation if the department I was in at the time had been willing to consider simulation modeling a legitimate form of research. It was not, and I left science in a huff. (But I have written about the idea a few times over the years, and that makes me happy, so I'm good.)

In case my primary argument in that essay was not clear, I'll put it more simply: Never assume anyone knows what they don't know. That sounds obvious, but it's a hard habit to break.

Funny story: around the time I wrote that second essay, at a reception after a talk, I had the opportunity to ask John Krebs (of Stephens and Krebs foraging theory fame) a question about foraging theory. I have spent decades puzzling over the conversation, which went like this:
Me: What do you think about the idea that foraging theory anthropomorphizes animal knowledge and information use? I think there might be things we're not seeing because we don't think like other species do. I wonder what would happen if we approached information from a different point of view, from their point of view, as if we thought the way they think.
Krebs: How long have you been in graduate school?
Me: Two years.
Krebs: You'll get over it.
I still can't make out what he meant by that. Did he mean that ethologists don't anthropomorphize animal knowledge and information use? Or that they do, but they can't do anything about it? Or that nobody cares? Or that I should shut up and do as I was told? I still don't know.

But I wasn't the only one thinking about the issue. In the years after that, I attended several lectures on research that suspended assumptions about the way animals thought, and as a result, discovered some surprising things. In the study I remember best, researchers took birds of a species that was famous for having no courtship ritual whatsoever, filmed them interacting, and slowed down the film, to find an elaborate courtship ritual playing out so quickly that the human brain cannot see it happening. I remember being so excited during that talk that I could barely sit still, because it confirmed what I had been thinking about the way we went about studying animals and making claims about their behavior.

Another study I remember proved the now-well-known fact that putting colored bands on birds' legs and studying their social relations is a bad idea, because having a colored band on your leg changes your social standing. That seems obvious now, but it was quite a revelation at the time. Another study revealed that some male fish mate by pretending to be female fish. This pattern was hidden in plain sight for decades, because everyone who saw it assumed it must be a misunderstanding or a fluke. Then it was elephants communicating in wavelengths we can't hear, and plants sending messages in wavelengths we can't see, and the surprises just kept coming. I haven't exactly kept up with new developments in the field of ethology, but the little I have seen has given me hope that researchers are continuing to explore animal behavior in new and creative ways.

How these ideas influenced PNI

What does this essay have to do with participatory narrative inquiry? Lots. I can see influences on the development of PNI that came from each of the three points I made (about the limits of knowledge, the types of knowledge, and the value of information).

PNI and the limits of knowledge

You've probably heard about a thing in psychology called the Dunning-Kruger effect, where people become over-confident in an area because they are unaware of their ignorance. Back when I wrote that second essay, I was trying to express my feeling that ethologists had developed two simultaneous manifestations of a relational Dunning-Kruger effect, thus:
  1. The normal, self-reflective version, in which they overestimated their own knowledge about the knowledge of their study subjects, plus
  2. A vicarious version, in which they attributed knowledge of the limits of knowledge to their study subjects, when their study subjects had no such knowledge about the limits of their own knowledge.
What I didn't realize until recently is that (a) people don't just do this with respect to animals; and (b) I've never stopped thinking about the problem.

Let's think about animals for a second. Animals almost certainly don't sit around worrying all day about how much they know and how much they don't know. They know what they know, and they assume that's all there is to know. As far as we can tell, we are the only organisms that think about how much we don't know. So any random human being is likely to know more about the limits of their knowledge than any random dog or cat. But that doesn't mean we all know a lot about what we know and don't know; it just means we all know something about it.

I would guess that there is a normal distribution of awareness about knowledge limits. Some small number of people are probably aware of their ignorance to the point that they can take it into account in their decision making. The majority of us are dimly aware of the boundaries of our understanding, to the extent that we can apply rules of thumb and margins of error when we feel vaguely under-confident. And another small number of people are probably almost as clueless about the limits of their knowledge as any random dog or cat.

Figuring out how much any given person knows about how much they know is not an easy task, even when you can talk to them. How do you ask someone how much they don't know about something? You can test them to find out how much they know, but if you want them to estimate how much they don't know, don't you have to tell them the scope of the topic before they can make an estimate? And then don't they know more than they did? And then do you have to describe what's beyond that so they can make a new estimate? It's like trying to count the number of weeds in a pond when the only way you have of counting the number of weeds causes more weeds to grow.

So I'm not sure the question is that much easier to answer with people than it is with animals. But that doesn't mean we don't need to keep trying to answer the question; in fact, we need to answer it even more urgently with respect to each other. As social animals, we spend a lot of mental energy trying to figure out what other people need and how they will respond to the things we do and say. Everybody needs to do that in daily life, but when we are in a position to help people, we need to do it even more. If we think people know more about their needs and their limitations than they actually do, we are apt to predict their needs and responses wrongly, and we might end up hurting people instead of helping them.

Sometimes I think people give up trying and simply pretend they know what other people know about the limits of their knowledge. And then when someone asks them how they know that, they say things like "You'll get over it." Not getting over it — by actively pursuing answers to that question — is one of the goals of participatory narrative inquiry. In a sense, PNI came out of thirty years of my not getting over my original desire to make sense of perspectives that are different from my own.

Ignaz Semmelweis, Wikimedia Commons, Eugen Doby
A tragic example of what happens when you make erroneous assumptions about other people's knowledge of their limitations can be found in the story of Ignaz Semmelweis, the nineteenth-century doctor who famously tried (and failed) to convince other doctors to wash their hands after dissecting corpses and before treating pregnant women. (Actually, doctors were washing their hands, but with ordinary soap, which did not kill enough streptococcal bacteria to prevent subsequent infection.)

According to Wikipedia,
Semmelweis described desperate women begging on their knees not to be admitted to the First Clinic [where physicians also examined cadavers; in the Second Clinic, midwives did not, and the death rate was much lower]. Some women even preferred to give birth in the streets, pretending to have given sudden birth en route to the hospital (a practice known as street births), which meant they would still qualify for the child care benefits without having been admitted to the clinic. (Wikipedia)
Semmelweis wrote a series of articles advancing the theory that "cadaverous particles" were the sole cause of patient infections. His theory was attacked on many grounds, some reasonable, some questionable, and some simply prejudiced (such as the belief that his theory arose solely from his Catholic faith). He did not react well to these criticisms, becoming more and more combative, drinking heavily, and calling doctors who refused to change their practices "murderers." At the age of 42, Semmelweis was tricked into entering an insane asylum, held there against his will, and severely beaten, dying weeks later from his injuries. Only with the discovery of germ theory two decades later was he proven right — not as to his explanation of his findings, but as to his belief that lives could be saved by the measures he tried to promote.

The widespread rejection among Semmelweis' contemporaries of what today seems like common-sense advice has often been used as an example of blind perseverance in the face of contradictory evidence. But I'm not as interested in how other doctors reacted to Semmelweis' advice as I am in his failure to understand and adapt to their needs and limitations.

Ignaz Semmelweis was a man who cared deeply about his patients. He was "severely troubled" by the high incidence of puerperal fever in the wards he administered, writing that it "made me so miserable that life seemed worthless." These strong feelings set him apart from many doctors of the time; and later, his unique experiences set him even further apart. The death of a close friend and colleague, Jakob Kolletschka, forcibly and painfully challenged Semmelweis' views on infections and autopsies. He recounts the incident thus:
I was immediately overwhelmed by the sad news that Professor Kolletschka, whom I greatly admired, had died. . . . Kolletschka, Professor of Forensic Medicine, often conducted autopsies for legal purposes in the company of students. During one such exercise, his finger was pricked by a student with the same knife that was being used in the autopsy. . . . [H]e died of bilateral pleurisy, pericarditis, peritonitis, and meningitis. . . . Day and night I was haunted by the image of Kolletschka's disease and was forced to recognize, ever more decisively, that the disease from which Kolletschka died was identical to that from which so many maternity patients died. (Wikipedia)
Notice the words Semmelweis uses here. He was forced to recognize the connection, and ever more decisively, meaning that he must have revisited the tragedy over and over, as we do when someone close to us dies. Even his choice of the word haunted implies repetition, such that a place haunted by a ghost is described as being "frequented," that is, visited frequently. In this light, Semmelweis seems less a visionary than a man tormented by the consequences of his limited vision. If he had never experienced such a deep despair over his inability to make sense of the patterns he saw, he might have been as reluctant to examine the limits of his knowledge as the doctors he tried to convince.

It seems to me that Semmelweis' failure might have sprung in part from his inability to understand the impact of this experience on his awareness — and the impact of the lack of such an experience in the careers of his contemporaries. Consider the fact that one doctor Semmelweis did convince had a similar experience to his own:
Professor Gustav Adolf Michaelis from a maternity institution in Kiel replied positively to Semmelweis' suggestions — eventually he committed suicide, however, because he felt responsible for the death of his own cousin, whom he had examined after she gave birth. (Wikipedia)
Semmelweis seems to have assumed that other doctors were as haunted by their ignorance as he was; but it sounds like most of them were not. The theory of the four humours was in full force at that time, and most doctors probably felt no need to venture past its readily available explanations. They were satisfied with the state of their knowledge, saw no gulf beyond it, and were content to carry on as they had always done.

I wonder if Semmelweis would have gained more traction if, for example, he had refrained from posing any theory at all, and suggested changes to practice solely on the basis of the evidence he had collected. After all, he could have proposed his changes without attacking the predominant medical theories of the day. Neither he nor anyone else at the time could explain why the washing of hands with a chlorinated lime solution greatly reduced the incidence of infection in maternity wards; but the fact that it did reduce the incidence of infection was not in dispute.

As I said above, such an inability to imagine the experiences and mindsets of other people, based on erroneous assumptions about the nature and limitations of their knowledge, is something we directly seek to address and correct when we carry out projects in participatory narrative inquiry.

How do we do this? We ask people to tell us what happened to them, and we ask them questions about their knowledge and awareness during the events of the story. We ask questions like these:
  • How predictable was the outcome of this story? Did you know what was going to happen next?
  • What in this story surprised you? What do you think would surprise other people about it?
  • If this story had happened ten years ago, how do you think it would have come out then? What about fifty years ago? What about in another location?
  • What could have changed the outcome of this story? What makes you say that?
  • What did the people in this story need? Did they get it? Who or what helped them get it? Who or what hindered them?
  • Does this sort of thing happen to you all the time, or is it rare? What about to other people you know? What about to people you don't know? Can you guess?
  • If you could go back in time to the start of this story, what would you say or do to help it turn out differently? What would you avoid changing?
The answers to these questions help us understand not only what happened to people but also what they know and don't know about it. Sometimes the most illuminating answer is "I don't know." And we sometimes ask follow-up questions, like:
  • Why did you say "I don't know"? 
  • What does that mean to you, that you didn't know?
  • What would you like to know?
  • How do you think you could find out? 
People facing situations like Ignaz Semmelweis faced can ask questions like these to understand (as much as anyone can) the perspectives, needs, and limitations of those they are trying to help.

PNI and the not-always-increasing value of increasing information

Now let's get back to the second point in the second essay: the value of increasing information. When I wrote that essay, I was concerned about an assumption I found distributed throughout the scientific literature on foraging theory, which was that the value of increasing information increased monotonically. In all of the models and theoretical frameworks I read on information use, more information was assumed to be more optimal than less information. I didn't see why that should always be the case. In particular, I thought the assumption might be problematical in situations where individual choices are interlinked in a complex network of mutual influence.

So I wrote a computer simulation to find out whether "smarter" individuals with somewhat better information about density-dependent resources would always out-compete "dumber" individuals with less information. ("Density-dependent resources" are resources whose value to each individual depends on the number of other individuals drawing from it, like a bird feeder that holds the same amount of food whether five or fifty birds visit it.)

According to foraging theory, there was no point in writing such a simulation because the outcome could be predicted in advance; but I wrote it anyway, because I was curious. Surprisingly, the "smarter" simulated allele did not fixate (exclude all others) in the population. Rather, the two alleles kept returning to a roughly 75/25 ratio, representing (for that simulated situation) a "mixed evolutionarily stable strategy," that is, one in which a mixture of strategies is more optimal than any one pure strategy.

A lot of birds, Wikimedia Commons, NASA
It took me a while to figure out why this was happening. After I spent some time watching my simulated organisms make their decisions, I realized that what I was seeing made perfect sense. The smart individuals would find out exactly where the best food sources were and rush to them, only to find all the other smart individuals there dividing up the food. The stupid individuals would wander aimlessly from place to place. Most of the time they'd get nothing but the crumbs left over, but sometimes they'd find themselves feasting at a "bad" food site that was nevertheless better than the "good" sites the smart crowd was picking to pieces. After a while, I couldn't get the joke "nobody goes there anymore, it's too crowded" out of my mind.

The result I got was counter-intuitive to foraging theory because there was an inconvenient trough in the value of increasing information. The smart organisms knew that a food source was better, which was more than the stupid organisms knew; but they didn't know what all the other organisms were about to do. Thus their intermediate level of information was sometimes better and sometimes worse, such that the net value of the increase was not enough to eliminate the relative value of stupidity. Thus the greatest fitness, at the population level, was in a mixture of strategies, including some that had no obvious value on their own. (I should mention that the idea of an optimal mixture of strategies goes all the way back to Cournot's 1838 concept of a duopoly; but still, the idea was not commonly applied to foraging theory at the time I was thinking about it.)
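The original simulation is long gone, but the dynamic it demonstrated is easy to reproduce. Here is a minimal sketch in Python; the patch values, population size, 50/50 starting mix, and fitness-proportional reproduction scheme are all illustrative stand-ins, not the original parameters or code:

```python
import random

def run(generations=300, pop=200, patches=(10, 8, 6, 4, 2), seed=1):
    """Evolve a population of 'smart' vs 'dumb' foragers on
    density-dependent patches; return the final frequency of
    the smart strategy."""
    rng = random.Random(seed)
    # True = "smart": knows which patch is richest and always goes there.
    # False = "dumb": wanders, i.e. picks a patch at random.
    agents = [rng.random() < 0.5 for _ in range(pop)]
    best = max(range(len(patches)), key=patches.__getitem__)
    for _ in range(generations):
        # Each agent chooses a patch for this round.
        choice = [best if smart else rng.randrange(len(patches))
                  for smart in agents]
        crowd = [choice.count(i) for i in range(len(patches))]
        # Density dependence: each patch's food is split evenly
        # among everyone who showed up there.
        payoff = [patches[c] / crowd[c] for c in choice]
        # Fitness-proportional reproduction, constant population size.
        agents = rng.choices(agents, weights=payoff, k=pop)
    return sum(agents) / pop

freq = run()
```

The smart strategy gets the richest patch but shares it with every other smart forager; the dumb strategy gets the leftovers but sometimes has them nearly to itself. The crowding creates negative frequency dependence, so neither strategy fixates; the equilibrium ratio depends on the patch values and the number of patches, so don't expect this toy version to reproduce the 75/25 split of the original.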

Now let's come back to participatory narrative inquiry. Situations in which complex interactions influence the options, choices, and behaviors of everyone involved are also situations in which PNI works best — and I now realize that this is probably not an accident. PNI is at its most useful at times when it seems like you know enough to come up with a viable solution, but you have been stymied by missing information you can't guess at. In fact, most PNI projects start from a situation in which, even though "everyone knows" what the problem is, prior attempts at solutions have shown the current level of knowledge to be insufficient. You could even say that the whole reason PNI exists is to compensate for troughs in the value of intermediate levels of information in complex situations.

That's why surprise is such an important part of PNI. I've noticed that on every PNI project, somebody is surprised by something. An assumption is overturned, a trough turns into a peak, and new options open up as a result. I've always found this to be profoundly satisfying, and now I know why.

PNI and types of information

The third thing that bothered me about foraging theory when I wrote that essay was how researchers used the word "information." Whenever people gave examples of information in the papers and books I read, it was nearly always about facts external to the organism: where food could be found, how much energy could be found in the food, weather conditions, and so on.

But that can't be the only information an organism needs, I thought. There must also be internal information, such as the organism's hunger or satiety, its health, its age, its reproductive state, and so on. An animal with excellent knowledge about its internal state should out-compete an animal with poor internal knowledge, right? But nobody seemed to be studying internal information, or even acknowledging that it existed.

Bird on branch, Wikimedia Commons, Mathew Schwartz
And there must also be another type of information, I thought: some idea of how all the other pieces of information fit together. I called this relational information. For example, if I am a tiny bird perched on a branch in mid-winter, I must know that I am in danger if I don't obtain enough food to replenish my fat stores to a certain extent. Such information may only be "known" at the level of an instinctual urge, but it should exist in some way, because it must stand behind the decisions animals make about how much energy to expend on foraging. Should I stay on the branch and conserve my heat, or should I swoop down in search of food? Without internal and relational information it's hard to make such a decision.

So I wondered why researchers never seemed to pay attention to either internal or relational information, even in theoretical considerations of animal behavior. My guess was that these types of information were so much harder to observe and control that people tended to ignore them. It's easy to vary the values and distributions of food sources and then watch what animals do in those situations, especially when you can see them evaluating the obvious differences between the food sources. Trying to figure out what animals know about their internal states and how the world works is a more daunting challenge. But that doesn't mean those types of information don't exist or don't matter.

Now let's think about how this applies to participatory narrative inquiry, because, of course, it does. Just like the researchers whose papers I was reading back then, we all theorize about the mental and emotional states of the people whose needs, limitations, and probable responses are important to us. We do this individually every day, and we do it collectively when we embark on a project to solve problems or improve conditions in our communities and organizations. And like those researchers, we have an easier time thinking about external information than internal or relational information.

That's something I have noticed when I talk to people who are just starting out doing PNI work. If you visualize all the questions you could possibly ask about a story as a set of concentric spheres around it, people always seem to start out thinking about the outermost sphere. They ask questions about the basic facts of the story, like:
  • Where and when did this take place?
  • Who was involved?
  • What was the topic or theme of the story?
  • What problem was solved? Who solved it? What was the solution?
After they've gotten more practice thinking about projects, people start moving inward, inside the bubble of their participants' experiences, to where internal information is important. They start asking questions only a story's teller can answer, like:
  • How did you feel when this happened?
  • What surprised you about it? What did you learn from it?
  • What do you wish had happened?
  • What helped you in the story? What held you back?
Finally, experienced PNI practitioners move into the center, where relational information (that is, beliefs and values) can be found. They start asking questions about what the storyteller thinks the story means about the way the world works, like:
  • Why do you think this happened?
  • Does this happen often? Should it?
  • What would have changed the outcome of the story? Would that be better or worse?
  • Who needs to hear this story? What would they say if they heard it? What would happen to you if they heard it?
Another thing I've noticed is that the closer PNI moves to the center of these concentric spheres, the more it deviates from other modes of inquiry. When a PNI project asks questions anyone could answer about a story, it's hard to distinguish from any other kind of survey-based research (and it's hard to make a case for its use). In such a situation, the story is just another data point, and it's not all that critical of a data point either. You could ask people questions at the outermost level with or without a story, and the answers would not be that different. For example, you could ask people to give you a list of all the problems they solved in the past year, and you wouldn't get much of a different picture than if you asked them to tell you a story about a problem they solved.

When a PNI project asks questions closer to the center of experience, however, the story becomes much more than a data point. It becomes a vehicle by which participants can make sense of their experiences, drawing forth internal and relational information they didn't realize they had (or cared about). As a result, when PNI works well, by the end of the project, everyone learns something about themselves and each other.

So in a way, you could say that my work on PNI has been a continuation of my earlier attempts to get people to "move inward," closer to the center of the experiences and perspectives of those they seek to understand.

On experiment and reality

I have one more old essay to show you. It's an appendix to a paper I wrote for a graduate class, apparently in the sociology department, about an experiment on social interactions among fish. At first I didn't remember the project described in the paper, but as I read I began to remember bits of it. What I remember most is that I did the project in the "fish room" of the biology building basement. The light switch in that room was wired badly, and two or three times I got an electrical shock when I flipped the switch with wet hands. That's a thing you remember.

Most of the experiments I did in my early days as a wannabe-ethologist had to do with social interactions: dominance hierarchies, how kin find each other, tit-for-tat balances, methods of communication, social signaling, intention movements, and so on. I was intensely curious about the evolution of social interactions, because . . . well, I still can't understand how anybody could not be intensely curious about that.

Pumpkinseed sunfish, Wikimedia Commons, Kafziel
The experiment went this way. I netted 150 pumpkinseed sunfish from a pond and put them in a tank. (Or somebody netted them. It says "for use in another experiment.") From those 150 fish I picked out ten groups of three fish of roughly equal size (because any big-fish-little-fish contest is a foregone conclusion).

For each of the ten groups of three fish, I followed this procedure:
  1. Isolation: I put all three fish in tanks by themselves for five days.
  2. Establishment of dominance: I put two of the three fish together and watched them until I could see which one was dominant. (They peck at each other, like chickens.)
  3. Re-isolation: I isolated the loser of the dominance contest for another day. (The winner got to go back into the big tank.)
  4. Test: I put the loser from the previous encounter together with the third (still isolated) fish and watched what happened.
What was supposed to happen, according to prior research, was that the losers in the first contests would remember their low status and lose in the second contest as well. What did happen was that eight of the ten losers won the second time around. As I explained in the paper, this could have meant a wide variety of things, but it could not really be said to mean anything, because the sample size was so tiny. I knew that going in, and so did my professor. It was just a practice project to write a practice paper.
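The "sample size was so tiny" point can be made concrete with a quick sanity check (my own addition, not part of the original paper): if the second contest were a pure coin flip, eight wins out of ten would not even clear the conventional significance bar.

```python
from math import comb

# Quick check: with 10 trials, how surprising is 8 or more wins
# if each second contest were really a coin flip (p = 0.5)?
def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_one_sided = binom_tail(10, 8)          # 56/1024, about 0.055
p_two_sided = min(1.0, 2 * p_one_sided)  # about 0.109
# Neither clears the usual 0.05 threshold, so "eight of ten"
# really can't be said to mean much with a sample this small.
```

In other words, even the surprising 8-of-10 reversal is consistent with chance at this sample size, which is exactly why the paper could only call it a practice project.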

None of that is interesting. What is interesting (to me, now) is that I wrote an appendix to the paper, and that appendix, even though it's mostly a jokey thing I wrote to myself, connects with participatory narrative inquiry. I can't guess if I actually submitted the appendix with the paper or just kept a copy for my own amusement. In any case, here's what I wrote.



Appendix: The Poorly Understood and Sorely Neglected Behavior of Pumpkinseed Sunfish in Laboratory Tests.

As I reviewed the literature for this experiment, and again as I watched the fish setting up dominance relationships, it occurred to me that although many descriptions had been published of the social behavior of the pumpkinseed sunfish and other species, never had anyone attempted to describe the peculiar suite of behaviors that is shown when fish are placed in a testing tank and observed. I will now endeavor to present an extended ethogram of the experimental behavior of that species, with due attention to the fish-human interaction.

In the course of my work, I soon realized that I could divide the entire behavior of the fish in the test situation into a series of discrete stages that occur repeatedly and in a predictable sequence.

1) Disbelief (D). When a fish is first placed in a strange tank, or the partition dividing a tank is removed, or some other equally amazing thing happens, the fish's first thought is — "I am dead." This has some basis in nature; when a fish is suddenly caught up and thrown into a new body of water, it is most probably in (a) another fish or (b) a net. Thus the fish upon entering the test arena spends some time in what others may call shock but which I prefer to call disbelief (mostly because it is a longer word, and it simply doesn't do to have scientists running around using small words). Now the state of this poor fish would be almost comical, if one were completely callous and cold-hearted (which I am not!); it lolls about on the bottom or in a corner, sometimes rocking gently, for a period of anywhere from ten seconds to half an hour.

2) Escape (E). At some point (as I have said, this is highly variable and begs further study) the fish suddenly realizes that it is alive. Its very next thought is — "If I'm alive, then . . . I'm trapped! I've got to get out of here!" It then proceeds to push its way out of what it assumes to be water but what most annoyingly turns out to be an invisible force field, or what we humans know better as glass. The fish, as any good Vulcan would do, assumes that there has to be a weak spot in the force field, "Somewhere where the ion magnifier exceeds its photon limit. It is only logical." With its mouth open and its gills flaring, it presses here and there and here and there and there and over in that place and down here and up there — you get the idea.

I can see another parallel for this behavior in nature. Surfaces in nature, be they pond bottoms or stream edges, are mostly made of stones; and stones often have fish-sized holes between them. So a fish trapped in, say, a small pool off a running stream, needs only to poke and prod until it finds a way out. The intensity of this behavior often gets quite high and varies substantially among individuals, due undoubtedly to some differences in susceptibility to claustrophobia.

3) Recognition (R). You may have noticed that so far I have not mentioned interactions between the two fish. Far from being unprofessional and unobservant, I reserved the recognition of another fish to its own stage. At some point one of the fish looks around and gasps — "Good God! There is another fish in here!" And it is from this realization that we get the data point "Attacked first," for that fish usually wants to get a good nip in before its fellow occupant itself reaches the R stage.

You may ask why the fish did not notice its companion before, especially when they both decided to poke at the same spot. Yes, this is another parallel in nature. Fish in the wild get bumped up quite a bit: things are always floating by, children will be throwing rocks, crayfish are scuttling around, outboard motors are making a ruckus. So even the most violent escape attempts by another fish are treated as the usual disturbance — get as far away from it as possible, but for heaven's sake don't stand there gawking at it! Thus it is only in a moment of lucid tranquility that the recognition stage arrives on the fish. To the nipped fish, the R stage is entered abruptly and assuredly, as nothing else feels quite like a pumpkinseed sunfish bite.

From this stage on begin the "normal" interactions we record on our data sheets and analyze, ignoring as good scientists the unseen (but standard! at any rate) behaviors described here.

Perturbations of the normal scheme of things are of two types: relapse and awareness. A relapse is caused by a large disturbance, such as the observer tripping over the blind or camera, dropping something loudly, or banging the testing tank with any number of things. (Not that any of these things has ever actually happened to me; I merely heard of them through other experimenters.) A relapse usually drives both occupants of the tank back to the disbelief stage, from which it is a long wait to realization of life, frantic escape, and back to aggression.

Awareness, the second type of perturbation, is often more devastating for the observer because of its psychological implications. This perturbation occurs when the observer is foolish enough to bump the blind or sit in such a way that a bit of his or her clothing shows (the observer who wears brightly colored clothing clearly knows nothing about fish), or cough (this has produced innumerable disasters to science). At this point the fish becomes aware of the fact that "Something . . . is out there . . . watching me." (Or us, if the R stage has been reached.) The fish assumes a position quite like that taken in the disbelief stage, with the exception that the fish faces the observer, glaring intensely this one thought: "I see you, you disgusting finless giant; I know what you're doing; and whatever it is you are waiting for me to do I will try my hardest to avoid." At this time the observer quite predictably mutters (inaudibly, of course, so as to prevent a relapse) several epithets that would not evoke full cooperation if heard and understood.

This concludes the extended ethogram of the true behavior of the pumpkinseed sunfish, adding precious insight to our scientific understanding of this interesting species.



That essay is a silly little thing, but I had something serious in mind when I wrote it, and I haven't stopped thinking about it in the years since. The more you read about the science of behavior in any species (including our own), the more obvious it becomes that a lot of the findings we rely on were derived in artificial contexts, just like my ridiculous project watching fish interact in empty tanks and pretending it meant anything at all about what their lives would be like in a natural setting. (It was a practice project, but the experiments it referenced and sought to replicate followed similar procedures and drew similar conclusions.)

The most obvious example of such blindness in human research is the much-discussed fact that almost all psychological and sociological research — research that tells us how "humans" think and feel — is done on WEIRD (Western, Educated, Industrialized, Rich, Democratic) university students. The WEIRD acronym comes from the instantly famous 2010 paper "The weirdest people in the world?" (in Behavioral and Brain Sciences, by Joseph Henrich, Steven J. Heine, and Ara Norenzayan). Other researchers brought up the issue before that paper (for example, the "carpentered world hypothesis" was first put forth in 1973), but the WEIRD name has given the discussion new energy.

As a 2010 New York Times article put it:
[A] randomly selected American undergraduate [is] 4,000 times likelier to be a subject [of psychological research] than a random non-Westerner. . . . Western psychologists routinely generalize about “human” traits from data on this slender subpopulation, and psychologists elsewhere cite these papers as evidence. . . . [R]elying on WEIRD subjects can make others feel alienated, with their ways of thinking framed as deviant, not different.
I'm not going to cite any of the studies that demonstrate the flaws of WEIRD research here — they're easy to find — but I would like to mention a few things I noticed in recent discussions that connect to participatory narrative inquiry.

In a blog post called "Psychology Secrets: Most Psychology Studies Are College Student Biased" (on the PsychCentral blog), John Grohol lists the reasons psychologists are still not widening their research populations. Using university students is convenient; it's cheap; it's the way things have always been done; and it's good enough for the time being. You'll get over it, basically.

Grohol then says this:
There’s little to be done about this state of affairs, unfortunately. Journals will continue to accept such studies (indeed, there are entire journals devoted to these kinds of studies). Authors of such studies will continue to fail to note this limitation when writing about their findings (few authors mention it, except in passing). We’ve simply become accustomed to a lower quality of research than we’d otherwise demand from a profession. 
Perhaps it’s because the findings of such research rarely result in anything much useful — what I call “actionable” behavior. These studies seem to offer snippets of insights into disjointed pieces of American behavior. Then someone publishes a book about them, pulling them all together, and suggesting there’s an overarching theme that can be followed. (If you dig into the research such books are based upon, they are nearly always lacking.) 
Don’t get me wrong — it can be very entertaining and often interesting to read such books and studies. But the contribution to our real understanding of human behavior is increasingly being called into question.
I have learned over the years that if I try to defend participatory narrative inquiry as being "scientifically valid" I will fail. PNI just doesn't hold up as a scientific endeavor. Its participants are given too much control over the process for PNI to prove anything conclusively. There's no control group. The sample is self-selected and non-representative. Interpretation is biased and variable. There are no inter-interpreter validation checks. Conclusions are idiosyncratic and local. Results cannot be replicated, not even later on the same day. What it all means depends on whom you ask, and when, and how.

This is what I mean when I say that PNI is not a science; it's a conversation. When you invite people to tell whatever stories they want to, interpret their stories however they like, talk about their stories in groups, and draw their own conclusions, "proof" isn't a very useful word. "Useful" is a useful word. Above all else, PNI aims to be useful.

In a way, PNI is the ultimate anti-WEIRD research paradigm, because it aims for a real understanding of human behavior — that is, an understanding that is contextually situated, internally relevant, externally meaningless (and happy to be so), and purposefully, aspirationally, hope-fully actionable.

Here's one more quote about WEIRD research, from a Slate article, that relates to PNI:
So the next time you see a study telling you that semen is an effective antidepressant, or that men are funnier than women, or whether penis size really matters, take a closer look. Is that study WEIRDly made up of college psychology students? And would that population maybe have something about it that makes their reactions drastically different from yours? If so, give the study the squinty eye of context. As we often add “… in bed” to our reading of the fortunes in fortune cookies, it’s well worth adding “… in a population of Westernized, educated, industrialized, rich, and democratic college students” to many of these studies. It can help explain many of the strange conclusions.
The purpose of PNI is, precisely, to apply the "squinty eye of context" to statements about what is normal, or real, or human, so that they can grow into insights we can use in our lives and communities.

The types and categories of research

As I said above, I take this connection across three decades to mean that PNI was in a sense fated to happen when complexity theory worked its way into the study of social behavior. As a nice side effect, it also means that my professional career has been a lot less rambling and accidental than I thought it was. At least I've rambled over some of the same spots, and that's a comfort.

I can't help but wonder, though, why it took me so long to realize that I was still working on the same issues. Why did I not see that my work on PNI was a continuation of "not getting over" my early concerns about hasty assumptions and unexamined perspectives in social behavior? I don't know. Maybe it was because I left science in a huff. Maybe the idea of "leaving science" was the problem in the first place. Maybe science, or research, shouldn't be so easy to leave.

I don't know if it's because of reaching the twenty-year mark or what, but I've noticed that I've been describing my career differently over the past year or two than I used to. People always ask how I got started doing story work, probably because I don't sound like any sociologists or anthropologists (or storytellers) they know. I used to say "it was an accident" and describe how I applied for a job at IBM Research because my husband was already working there and we could commute together, and I ended up getting hooked on "this story stuff" as a result. That's all true, but lately I've noticed myself saying, "I started out as an animal behaviorist, but after a while I switched species." That always gets a laugh, but probably the deeper reason I say it is that I'd like to have a more coherent story to tell myself. But it's not a fictional story; it's a real connection. So why didn't I see it?

Maybe it's not just me. Maybe it's the way we all think about research. Maybe it's too organized. Maybe it has too many types and categories. Maybe sometimes — temporarily, thoughtfully, and purposefully — we need to place everything on the same level and let new connections appear. Yes, we need more diversity in our research populations (both researcher and researched), but maybe we need new connections among some other things too:
  • sociology, psychology, anthropology, and ethology;
  • proof, utility, and action; 
  • participation, observation, and experimentation; 
  • contextual and universal conclusions; 
  • academia, business, government, and even some out-there independent scholars like me, who bounce around from one field to another, thinking they've crossed vast distances when they've really just been pacing the same small circles for decades.
Why don't we all walk around together finding out useful things? That sounds good. Let's do that.