Article published in Integration and Implementation Insights.
I don’t see the world in pictures. I mean, I see the world in all its beautiful shapes and colors and shadings, but I don’t interpret the world that way. I interpret the world through the stories I create. My interpretations of these stories are my own mental models of how I view the world. What I can do then, to share this mental model, is create a more formalized model, whether it is a simple picture (in my case a very badly drawn one), or a system dynamics model, or an agent-based model. People think of models as images, as representations, as visualizations, as simulations. As tools to represent, to simplify, to teach, and to share. And they are all these things, and we need them to function as these things, but they are also stories, and can be interpreted and shared as such.
I think this recognition of models as narratives is important for people like me, whose brain freezes when faced with charts and graphs and moving pieces, whose eyes dart away when faced with circles and arrows and loops. But it’s not just for people who aren’t visual learners. It’s also for people who use stories as a way of explaining the world. It’s for people who shape stories, and therefore truths and beliefs and values, through relationships and connections. My point, which should be obvious since I haven’t been subtle about it, is that it’s useful to think of models not only as visual or quantitative translations and simulations, but as narratives, with characters and themes and arcs.
Now I’m no Richard Adams, but I can create a system dynamics model of a rabbit warren, complete with the white death (poison from farmers protecting their crops), elil (predators like skunks and hawks and foxes), population growth, warren expansion, and everything else that is so wonderfully described in Watership Down (Adams 1972). If you haven’t read it, do yourself a favor and go read it; yes, it’s a children’s book, but it’s really a story about courage and hope, life and death, good and evil and everything in between. OK, so my model might not have actual characters, but the story is there, of life and death and growth and danger and adaptation. You have to supply Hazel and Bigwig and Dandelion yourself.
I look at a causal loop diagram, or a cognitive map, and my brain can’t quite understand what’s going on. But if I populate those images with my imagination, with a story, I can suddenly see the journey of a thought process, or the feedback loops in a system. Rabbits, for example, make lots and lots of babies, and I understand the feedback at work there, but I understand it better if I actually think of the rabbits and the babies (plus, bunnies are cute!), instead of just thinking of a circle with a plus sign.
This is true of any model (not that we should all create elaborate stories in our minds, but that it might be helpful to think a little more narratively, at least sometimes, or for some people), even ones more complicated than Adams’ fictional warren. In fact, it might be even more useful to think narratively about complicated models, because we then become invested in the components and the relationships and the outcomes. In one sense it’s a humanizing act. OK, maybe not everyone wants to humanize their models, and maybe it can even be damaging to do so, but for me, it helps me understand the processes being modeled, and therefore understand and interpret the results. I might make decisions based on a semi-fictionalized version of a model output, but since models themselves are semi-fictionalized representations of reality, I’m okay with that.
Reference: Adams, R. (1972). Watership Down. Collins: London, UK.
Biography: Alison Singer is a PhD student in the department of Community Sustainability at Michigan State University. She is studying perception and decision-making, and how models can act as interventions to shift cognitive processes. She is a member of the Participatory Modeling Pursuit funded by the National Socio-Environmental Synthesis Center (SESYNC).