Wouldn’t it be great to uncover some of the trouble spots in a program’s design before that program was implemented?
At last year’s evaluation conference, I was able to attend a session put on by Jeff Wasbes and Jonathan Morell. The result: I’ve had “attempt computer simulation” on my to-do list for the past year.
Given that I have no education or practical experience in this area, I asked Jeff if he would like to help me give the subject a cartooning, and he accepted. With the exception of the cartoons, all of the expertise you find in this post is courtesy of Jeff.
Before we jump into the post, a couple of administrative notes:
- I’ve packaged together 101 of my cartoons for a quick and easy download. I’ll probably only have one of these types of sets up at any given time, so if you’re interested, I suggest downloading it sooner rather than later.
- I have one more post planned for the blogging series but it might take a bit of time to put together. I do think you’ll like it.
About Jeff Wasbes
Jeff is the Director of Performance and Systems Analysis for the SUNY Charter Schools Institute. He is a skilled practitioner in Monitoring and Evaluation, focused on using data for decision making and management support. His specialization is in decision support systems that embrace complexity and focus on results.
Evaluators are limited in their decision making ability by bounded rationality.
Herbert Simon suggests that people’s decision making ability is limited by their ability to access information and by their cognitive limitations to process information within a complex environment. Simon also suggests that people use heuristics to compensate for their lack of ability to reach an optimal decision under these circumstances.
So we use models to help us map out the complex interactions among multiple agents operating in a system. The value add of computer simulation is that we not only get to map out the interactions among the agents (combinatorial complexity), but we also get to see how those interactions play out over time (dynamic complexity) according to some set of rules.
Do we arrive at optimal decision making? Probably not. Does it help us get closer? Certainly.
Evaluators already create models.
Logic models, concept maps, stochastic statistical models – I could go on. A common criticism of computer simulations is that they are too deterministic because the agents interact according to a set of rules that we decide. I suspect that the viewpoint of someone making that argument is that a computer model exists in its current form in perpetuity.
In fact, just like a logic model, a computer simulation model can and should be adjusted as new information comes to light with the goal of improving its accuracy. I would add that, well, physics is deterministic, too.
I liken computer modelling in the context of evaluation to trying to map out the physics of a social system. As evaluators, we are constantly looking for and talking about “patterns in the data”. By modelling a system, we are trying to explicate what causes those data patterns to emerge. We can then experiment with the model to figure out the points of greatest leverage to manipulate those data patterns toward our desired outcome.
Please note that I am talking about data patterns and not about discrete data points. I don’t believe that models can (or should be used to) predict a set of values that will exist at some specific point in the future. But I do believe that they can support our decision making by forcing us to think deeply about the causes of patterns in the data and about how we can manipulate the structure of the system to move it toward the patterns that we want to see.
George E.P. Box said, “Essentially, all models are wrong, but some are useful.” I’m going to come back to this quote a lot.
Half the fun is getting there.
Maybe even more useful than the final model or the results it produces is the process of building the model. Some modelers like to engage the client(s), retreat to a room somewhere, model, then bring out the product for feedback from the client. Others like to engage the clients in real time in developing the model – some even gather in a workshop setting and participate in a group model building exercise.
Whatever the method, much of the utility of building a simulation model comes from forcing the relevant stakeholders to think deeply about the causal connections that are producing the problem behavior. (Without a problem, we wouldn’t need a program, right? So I think it’s a good idea to understand the problem pretty deeply if we are going to evaluate it effectively.)
Building common understanding about the causes of a problem and about how the evaluand is designed to mitigate the problem gives the program and the evaluation thereof greater chances for success.
Modelling draws out concrete applications from abstract concepts.
What exactly is emergence? Why is a system “policy resistant”? What causes a bifurcation? Or a phase shift? How does a system adapt? These are all concepts that systems thinkers talk about. My sense is that many folks who are interested in systems approaches to evaluation have an intuitive grasp of what these terms and phrases mean, but don’t understand their mechanics on an atomic level.
Learning some modelling basics provides users with an understanding of what those mechanics are. Understanding how feedback works in a system gives one an elemental understanding of how a system could find a state of equilibrium and maintain it over time despite repeated perturbations. This elemental understanding is necessary for manipulating the complex behavior of systems.
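To make that mechanic concrete, here is a minimal sketch of a balancing (negative) feedback loop. The function name and all of the numbers are invented for illustration; the point is simply that when the correction each step is proportional to the gap between a stock and its goal, the system finds equilibrium and returns to it after perturbations.

```python
# Hypothetical sketch (names and numbers invented for illustration):
# a stock governed by a balancing (negative) feedback loop. Each step,
# the correction is proportional to the gap between the stock and its
# goal, so the system settles back to equilibrium after perturbations.

def simulate(goal=100.0, stock=100.0, gain=0.3, steps=40, shocks=None):
    shocks = shocks or {}   # step -> external perturbation to the stock
    history = []
    for t in range(steps):
        stock += shocks.get(t, 0.0)     # perturbation, if any, hits first
        stock += gain * (goal - stock)  # balancing feedback closes the gap
        history.append(stock)
    return history

run = simulate(shocks={10: -30.0, 25: 20.0})
# The stock is knocked down at step 10 and up at step 25, and the
# gap-closing loop pulls it back toward the goal of 100 both times.
```

A dozen lines like these are roughly what “how could a system maintain equilibrium through perturbations?” looks like at the atomic level.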
Evaluators are equipped to make assumptions.
Everyone is equipped to make assumptions. We all do it every day.
Usually we do it without having to justify our assumptions to anybody because (a) the system that forms the context within which we are making the assumptions doesn’t break down and (b) we are generally pretty accurate when making them. If the system happens to fail, then we have some explaining to do.
Evaluators can draw on a wealth of information and experience to form reasonable assumptions and set expectations. The piece that people struggle with is understanding the unintended consequences that arise from feedback effects playing out over time – and how our assumptions can cause us to overlook them entirely.
Why? Because we use mental models to navigate complex systems. These mental models contain all of our assumptions about how the system’s pieces work together and how the system will react to the actions that we take.
Unfortunately, most of us are not cognitively equipped to reason through complex, nonlinear interactions – we tend to think about cause and effect using direct linear models. In other words, our mental models (encompassing mostly linear assumptions) don’t match the behavior of real-world complex systems (encompassing lots of non-linear interactions).
So we can use models to help us build out our mental models and better understand the non-linear effects of actions that happen within the system. The value add that computer simulation brings – that traditional logic modelling techniques don’t offer – is that we can run a simulation to see the feedback effects play out over some simulated time horizon. See the next point.
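As an illustration of what running a simulation over a time horizon adds, here is a hypothetical sketch (again, names and numbers are invented): a decision maker applies a simple gap-closing policy, but acts on a reading of the system that is a few steps out of date. A linear mental model predicts smooth convergence to the goal; simulating the delayed feedback over time shows overshoot and oscillation instead.

```python
# Hypothetical sketch (names and numbers invented for illustration):
# a decision maker applies a gap-closing policy but reacts to a reading
# of the stock that is several steps old. A linear mental model predicts
# smooth convergence; simulating over time shows the delayed feedback
# producing overshoot and oscillation instead.

def simulate_with_delay(goal=100.0, stock=60.0, gain=0.4, delay=3, steps=30):
    readings = [stock] * delay   # available information is `delay` steps old
    history = []
    for _ in range(steps):
        observed = readings.pop(0)          # act on the stale reading
        stock += gain * (goal - observed)   # gap-closing, on delayed input
        readings.append(stock)              # today's level is seen later
        history.append(stock)
    return history

trace = simulate_with_delay()
# The stock climbs past 100, swings back below it, and only gradually
# damps out toward the goal -- behavior a static logic model can't show.
```

The oscillation exists only in time; no amount of staring at the boxes and arrows of a static model would reveal it.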
Without any simulation, you’re going to wait years to figure anything out, and it won’t be cheap.
The biggest value add that computer simulation offers is a cheap and quick way to experiment with complex systems over long simulated time horizons. Not only that, but we can feel free to experiment with high risk decisions in a low cost environment.
The only people who are hurt in a computer simulation are fake people made of 1’s and 0’s – they aren’t kids, mothers, brothers, the disadvantaged, the poor, etc. To my knowledge, no one has been hurt or died during a computer simulation. There was the movie War Games, where we came close, but that was also fiction.
Software exists that makes simulation possible.
Not only does it exist, but it’s FREE for personal and academic use (if you want to use it commercially, buy a license): www.vensim.com. There are lots of other packages, too, but I am not aware of others that offer a free version.
Simulation is do-able.
As with any new endeavor, I would advise that one should consult someone with experience for a little guidance, but people can freely download the software and get started modelling. There are plenty of sites and internet repositories with established models from which one could draw inspiration and ideas.
If Agent Based Modelling is your thing, then look here: http://ccl.northwestern.
At Evaluation 2013
If you found this post interesting, Jeff will be giving a presentation at Evaluation 2013 along with a solid group of presenters including John Gargani, Tarek Azzam, Jonathan Morell, Benjamin Sims, Stephan Eidenbenz, and Michael Strong.
The session is titled: Complexity Made Simple: Systems Models for Evaluation Practice. Here is the abstract:
This multipaper session addresses three questions: What are systems, complexity, and models? Why are they relevant to evaluation? How can evaluators use them in practice? We present concrete examples of computer simulations of complex systems applied to formative and summative evaluations. Our two aims are to communicate the simplicity, elegance, and power of these simulations, and to investigate the role that research on evaluation should play as simulations are introduced into modern evaluation practice.
Are any evaluators reading this post simulating your models? Anyone considering giving it a try? I’d love to hear about it in the comments.