Hi everyone. I hope you had a relaxing Thanksgiving break (if you’re in the US). I know it’s hard to get back to work after a long weekend, which is why I am here in bed eating leftover mashed potatoes and listening to early-90s Hip Hop. Just remember, though, that your work makes a difference. (Read “Welcome back to work, you sexy Jedi unicorn,” if you need a quick pick-me-up.)
Unfortunately, however, the difference you are making is complex, which means it is challenging to measure. And this explains the crappy metrics of effectiveness our sector has been subjected to. Chief among them, of course, is overhead rate, one of the most insipid and destructive zombie concepts ever unleashed on nonprofits, as I and others have written about repeatedly (See: “How to deal with uninformed nonprofit watchdogs around the holidays.”)
But overhead is not the only challenge we’ve been facing. We need to talk about Impact Per Dollar (IPD), the practice of measuring nonprofit effectiveness by calculating the units of service that donors can buy with their donations. For instance, let’s say Shelter A provides people with 5,000 nights of shelter each year and its total program cost is $50,000. Divide 50K by 5,000, and you get $10 per night of shelter per person. Compare this with Shelter B, which provides 2,000 nights of shelter at a program cost of $40,000 per year, which works out to $20 per night of shelter per person. The argument, then, according to IPD, is that Shelter A is twice as effective as Shelter B, and donors would be fools to donate to Shelter B.
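(For the arithmetic-minded, here is a minimal sketch of the IPD math using the made-up shelter numbers above. The figures and names are illustrative only, not real data.)

```python
# Illustrative only: the back-of-the-envelope "Impact Per Dollar" calculation,
# using the made-up shelter numbers from the example above.

def cost_per_unit(total_program_cost, units_of_service):
    """Naive IPD metric: dollars spent per unit of service delivered."""
    return total_program_cost / units_of_service

shelter_a = cost_per_unit(50_000, 5_000)  # $10 per night of shelter
shelter_b = cost_per_unit(40_000, 2_000)  # $20 per night of shelter

print(f"Shelter A: ${shelter_a:.2f} per night")
print(f"Shelter B: ${shelter_b:.2f} per night")

# The IPD argument: Shelter A looks "twice as effective" as Shelter B, even
# though this single number says nothing about location, long-term outcomes,
# equity, or the full cost of actually running either program.
```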
I’m making all these numbers up. But I am not basing it on nothing. This is how many people think nonprofit effectiveness should be measured, and their advocacy of this method to an unwitting public causes all sorts of harm to our work. Here are several reasons why, with special thanks to colleagues Julia Coffman, Jara Dean-Coffey, Beth G. Raps, and others for their insights:
- It only measures the easily measurable: Shelters provided, trees planted, and food distributed are important, and they can be readily measured through IPD. But what about organizations focusing on advocacy, systems change, civic engagement, capacity building, leadership, pass-through funding, or other critical programs that are much more difficult to measure?
- It does not factor in geographic differences: As colleague Beth G. Raps puts it, “program Y’s location may have a much higher cost of living, may pay their staff a more appropriately living wage, and/or provide different or more services. In many ways, the comparison may not be accurate. Comparing two homelessness programs may still be comparing apples and oranges.”
- It does not measure long-term outcomes: Since logic models have been beaten into all of us, we know the difference between outputs and outcomes. IPD measures outputs. $10 may provide shelter for one person for one night, but what does that actually do for this person in the long run? What if the folks served by Shelter B, which is twice as “expensive” as Shelter A, are less likely to experience homelessness in the future? “Cheaper” does not necessarily mean better.
- It ignores the complexity of nonprofit work: Nonprofits are complex, often with multiple intersecting strategies. What if a nonprofit is doing advocacy and systems-level work in addition to providing direct services? Measuring only its direct service work, instead of measuring all its strategies holistically, is misleading.
- It fails to take equity into account: IPD measurements often do not account for the fact that people from marginalized backgrounds are often more “expensive” to serve, requiring specialized services, translation/interpretation, transportation, childcare, etc. So then the organizations that serve marginalized folks will be seen as less efficient and thus less effective, which means donors will give less to them.
- It drives resources away from smaller grassroots organizations: As evaluator Julia Coffman says, “I worry about the ‘success to the successful’ archetype, and whether [this] system will reinforce the inequitable direction of resources to organizations that have the most resources already.” Again, the smaller organizations that struggle most for resources are often led by and serving communities of color and other marginalized communities.
- It exacerbates the public’s ignorance about nonprofit work: Sorry, but there’s no possible way that $10 or $20 or whatever provides a night of shelter for anyone. It is not as simple as that. Nonprofit programs require all elements (staff, board, volunteers, donors, funders, infrastructure, etc.) working together in concert to get anything done. IPD fails to measure the real costs of programs.
- It furthers misconceptions and complacency about societal problems: Entrenched problems like homelessness will take significant resources to solve. Continuing to let people, especially wealthy people, think that their $20 makes a difference encourages them to be complacent. We need to get wealthy folks to understand the severity of systemic problems so they can do their part, particularly by paying more taxes.
- It perpetuates the nonprofit hunger games: To do this work effectively, we all have to understand that our missions are interrelated and collaborate accordingly. Inaccurate rating systems like IPD serve to further drive wedges between missions, encouraging us to compete with one another for ratings and donations instead of figuring out where in the nonprofit ecosystem we are most needed and how to most effectively deploy our resources.
- It devalues the intrinsic worth of individuals: It saddens me every time I see something like “$25 provides enough food for a low-income person for a week,” as if people’s lives, their worth, their suffering within unjust systems, can be distilled down to simple numbers like that. We should encourage donors to see people as people and to help because it’s the right thing to do. Impact Per Dollar instead turns marginalized people into transactional economic units, which further dehumanizes them.
For all these reasons, and others you might think of, we need to end Impact Per Dollar as a way to measure nonprofit effectiveness. Nonprofit work is way more complex than that. When there is complexity, it is tempting to bring order to chaos through simple metrics that anyone can understand. The danger of doing that is that it gives inaccurate information while giving the illusion of accuracy. As evaluator Jara Dean-Coffey points out, “It is time, past time really, for us to embrace complexity, multiple truths, and more nuanced definitions of ‘impact’ that tend to individuals, systems, organizations and the spaces in between.”
Nonprofits need to do our part. We need to stop constantly reinforcing IPD with donors. Many of us still use it in our marketing. $50 helps five kids go to summer camp. $100 plants 400 trees. $5 provides enough hummus for one vegan at a community meeting. This is all transactional BS that we’ve been spouting. No wonder people continue to be enamored with Impact Per Dollar and overhead rates. We inaccurately simplify it for donors, and then we go to happy hour and whine about how donors don’t understand our work. Start using holistic messaging: “Your support, along with all of us working together, helped us serve 5,000 people this year.”
Before the Solutions Privilege people write me with “Well, what’s YOUR way to measure nonprofit effectiveness, huh?!” I already wrote about this in “How the concept of effectiveness has screwed nonprofits and the people we serve.” It boils down to challenging established white norms of data and evaluation and working with the people most affected by injustice to define impact, as they have the most relevant insight into their own lives and the programs that are supposed to help them.
—
Donate to Vu’s organization
Write an anonymous review of a foundation on GrantAdvisor.org