SROI: In Search of a Verb

The concept of “social return on investment” is absolutely core to balancing the proverbial double (or triple) bottom line of social enterprise. It’s therefore no surprise that the need to accurately and consistently evaluate and express that value has been a topic of much discussion and hot debate. It’s a critical dialogue, but I think the current conversation has a verb problem.

Much of the time, these conversations refer to SROI measurement. First off, only things that exist on a ratio scale can even BE measured. And I think we can all agree that there is no “absolute zero” on the scale of social good, and that the “units” are hardly regular or continuous. (Seems to me we’d be lucky to even agree on an ordinal scale for something as context-dependent as social good.) So, in the strictest sense, measuring SROI is not even an option.

Organizations that acknowledge the stickiness of the measurement issue often claim to calculate SROI instead. It sounds less concrete, perhaps, but often ends up just as arbitrary. One well-known (and arguably quite effective) US foundation literally uses a multiplier termed the “(Foundation Name) Factor” to calculate how much of the “measured” social change is attributable to their programs. Most SROI calculation schemas I’ve encountered produce this same unidimensional, artificial, even misleading oversimplification, though the amount of time and effort required to arrive there varies widely.
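To make the critique concrete, here’s a minimal sketch of what these calculation schemas typically look like. The outcome names, dollar figures, and attribution multiplier below are entirely made up for illustration (the multiplier simply stands in for something like the “(Foundation Name) Factor”); the point is structural, not numerical.

    # A hypothetical SROI calculation: several distinct outcomes get monetized,
    # scaled by a single attribution multiplier, and collapsed into one ratio.
    # All values below are invented for illustration only.

    monetized_outcomes = {
        "increased_graduation_rates": 250_000,      # proxy dollar value (hypothetical)
        "reduced_emergency_room_visits": 120_000,   # proxy dollar value (hypothetical)
        "higher_household_income": 430_000,         # proxy dollar value (hypothetical)
    }

    attribution_factor = 0.6    # share of the change credited to the program (hypothetical)
    total_investment = 500_000  # dollars invested in the program (hypothetical)

    # Everything above collapses into a single, unidimensional number.
    sroi_ratio = attribution_factor * sum(monetized_outcomes.values()) / total_investment

    print(f"SROI: {sroi_ratio:.2f} : 1")  # prints "SROI: 0.96 : 1" with these inputs

Whatever goes in, a single ratio comes out; the distinct dimensions of the change, and everything that resists monetization, disappear into it.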

I’m in no way suggesting we stop looking for ways to wrap our heads around the effects of our efforts, but I think the obsession with quantification does not serve us well. So…

Should SROI be measured? Good luck with that.
Should it be calculated? Perhaps, when it fits.
Should it be demonstrated? Whenever possible.
Should it be explored? Always.

A New Prescription for Innovator Growing Pains?

Aaron Sklar’s exposition on the potentially analgesic effects of integrated evaluation really got me thinking. He points out that innovation is by nature uncomfortable, and suggests that carefully defined and continually redefined meaningful metrics can play a role in easing that discomfort by clarifying the “end” to keep in mind.

Perhaps there’s even more to it than that:

So often in life, discomfort is the result of poorly managed expectations: It’s the classic “this won’t hurt a bit” you hear from the well-meaning nurse as she jabs a 4″ needle into your hip, the regularly-spaced reassurances of how important your call is while you wait interminably on hold, the gut-wrenching panic when you try on “your size” at a new boutique only to discover you can’t even button the trousers.

In addition to, or perhaps as a result of, providing structure in a new (ad)venture, integrated, authentic, continual evaluation creates a different set of expectations in an organization. We expect to discover things that don’t work, we expect middle-of-the-ride course corrections (and the accompanying jolts), we expect transparency and honest critique, and we expect iteration.

It’s amazing what levels of “discomfort” we can adapt to when we expect them, and the performance we have the capacity to achieve through them is even more exciting.

The Trouble with Stakes

Last evening, during the President’s health-care speech, I found myself frustrated. Why can’t someone just talk to me straight!? Why can’t anyone simply compare the perspectives, analyze the arguments, and explore the implications free from rhetoric, impassioned mantras, scare tactics, and tear-jerking stories? Why can’t we have some kind of genuinely objective perspective?

The answer’s pretty simple: the genuinely objective observers don’t CARE enough to do the careful analysis.

The people who care, the ones who invest time and energy and resources, are the ones who have something on the line. They have a stake.

The connection from there was at once natural and surprising. So often in the non-profit and social entrepreneurship worlds, we extol the virtues of (and even decry the absence of) objective, third-party impact assessments and evaluations. We proclaim (often quite correctly) that it is impossible for those at the heart of a venture, doing the day-to-day work, pouring their blood, sweat, and tears into their programs, to accurately assess their own impact and effectiveness.

The problem, of course, with these stakeholders (and any stakeholder) is that they CARE.

Essentially, we’re saying that in order to provide a reliable assessment, you must not be a stakeholder in the venture. You must not care.

Admittedly, this is a bit of hyperbole. But it seems worth looking at. If what we want from non-profit and social entrepreneurship evaluation is thorough exploration, careful analysis, and strategic recommendations, can we truly rely on evaluators without a stake?

“Quality” in Open Education

Training quest 3 has us exploring “ideals of quality” across two of the largest and highest-profile open education initiatives. I hear “quality” and immediately think in terms of comparative worth: excellence along any number of dimensions, from durability to fit to taste and texture. While I could easily write a post about OLI’s Modern Biology animations or student argumentation skills in MIT’s Seminar in Ethnography and Fieldwork, discussions of quality as a global characteristic don’t seem particularly fruitful here.

But what if we think instead in terms of the first definition of quality: “an essential or distinctive characteristic, property, or attribute.” Instead of value, then, quality is more about values.

So, what do MIT and OLI value? What do they consider the essential or distinctive characteristics of what they’re trying to do, of who they are as organizations?

Today, I Wish…

…that I could just come up with the titles of articles and they would research, analyze and write themselves. I’m pretty good at the title thing. Less so the rest. Currently in various stages of [in]completion in my word processor:

  • “Pipe dreams: What evaluation educators can learn from students’ visions of the ideal evaluation tool”
  • “Evaluators by assignment: Truth and consequences of the mass amateurization of evaluation”
  • “Openness and the information economy: Market share, value propositions and competitive advantage”
  • “Better than free: A capacity-building approach to pro-bono”
  • “Leavening the internet: A Latter-day Saint guide to new media”

I’ll let you know when they start writing themselves.

Too Mad to Think of a Clever Title…

Today was parent-teacher conferences. At least as close as we get to parent-teacher conferences here. We started out with a big meeting in the church where we did things like show the parents the new silverware we finally got around to buying last week [the kids were sharing broken spoons before—not a big deal, really, we share everything around here, but still] and talk about the progress of a couple of alumni and the programs we’ve been involved in. A couple of parents raised some concerns: “My son was sick and missed an exam and is therefore failing a class,” to which the response was “Nilsa and Roberto and Juana were all sicker than your son and they’re still at the top of their classes.” And “I live 9 hours away and can’t just visit every month to check on my son, but when I call to see how he’s doing, no one returns my calls,” to which the response was basically “keep calling.” [In fairness, Celsa gave out her cell number to the whole group, but even I can guess the rate at which such phone calls are generally returned.]

The biggest frustration, though, came with the “research” the director of the school was so excited about conducting. [He wasn’t here, by the way; apparently he’s doing some consulting for another school in Nicaragua.] I was all excited when they started handing out surveys to the parents, asking what changes [positive and negative] they had observed in their student, whether the school had fulfilled their expectations, how they would rank areas for improvement, what concerns they had about their student in particular, etc.

Anyway, I was all excited until I started looking around and realized that fully two-thirds of them COULDN’T READ IT! Aigh! Even the ones who could read it couldn’t understand the university-level research language it was written in. I think it’s fair to say I was bitterly disappointed. What are we saying when we set up a system that only accepts feedback from people outside the population we’re supposedly set up to serve!? There’s just something very wrong here.