Worst Practices

By Stuart Andreason and Jake Segal (Social Finance)

The Best Practices Survey is a familiar fixture of our sector. With so many nooks and crannies across our thousands of state and local governments and millions of nonprofits, there are, yes, plenty of good ideas hidden away.

But most of the time, the problem isn't a lack of good ideas but a lack of good practice. It's about breaking away from the things holding us back. Instead of doing one more thing right, we should stop doing so many things wrong.  

So, a few months ago—after years of musing on it half-jokingly—the two of us decided to launch the inaugural edition of the Worst Practices Survey. We think it's time to look those Worst Practices squarely in the eye and decide never to repeat them.

We conducted wide-ranging outreach (…put a survey on LinkedIn) and established a representative sample (…a few dozen people we know) from across different parts of the social sector (to be fair, we actually did get some good diversity here—a nice balance of practitioners, foundations, and government leaders). Here’s what we heard.

CHASING WATERFALLS

In keeping with the season, by far the most common gripe we heard is about breaking bad habits. This isn't just about ending virtue signaling. It's about escaping from faulty mental models.

  • Avoid "magic bullet" thinking. Funders—especially philanthropic and government grantmakers—are "addicted to the belief that they just need to fund innovative new ideas that will magically solve things." They want unicorns, so they're underwhelmed by even the handsomest horses.

  • Self-indulgence is a dangerous drug. Accountability is often self-imposed, which can lead to "vanity metrics" and pseudo-academic evaluations designed to support a thesis. (This feedback hit close to home, with one respondent noting that "third-party ROI studies"—which both BGI and Social Finance, at times, do—"are a joke. There are no standards and they're useless. I always ignore them and you should too.")

  • Demonstrations that don't have a ready audience. We don't operate in highly functioning markets (and often we aren't operating in markets at all). So the idea that a great model, once demonstrated, will be taken up around the nation is…dubious.

GRADING YOUR OWN HOMEWORK  

When it came to poor evaluation practices, respondents did not hold back. (We're sure this is, in part, because we're a couple of nerds and our network oversamples other nerds. But we'll tell you, these really speak to us.)

  • Do measure more. (Or at all.) One commenter put it plainly: far too often, we simply "refuse to evaluate or reassess program effectiveness" at all. That makes it impossible to know what's working and what to improve.

  • But don’t measure self-servingly. There was a lot of consternation about "programs padding the numbers" and "cherry-picking data to match hypotheses." Poor evaluation practices that make a favored program look good undermine the purpose of evaluation. Similarly, the bottom-drawer effect—"not releasing a study when the data is bad or not favorable"—weakens the field. This is a common complaint in the academic world as well: journals and research are biased towards positive results. Studies that find no effect rarely get published.

  • Garbage in, garbage out. This one's for our true nerd friends: "bad meta-analyses…overstate the strength of evidence by including studies with high risk of bias or focusing only on short-term or intermediate outcomes." Yes; yes, they do—and it drives us crazy. These are the synthetic CDOs[1] of the impact world: evidence that seems solid but whose underlying quality is hard to assess.

COMPLEXITY BIAS / THE FIELD OF DREAMS FALLACY

This category is a plea for practical focus—one that might best be summarized by, simply, “Come on, people.”

  • Stop talking, start doing. Thoughtfulness is good! But respondents agreed that philanthropy, especially in the face of fast-moving change, should move faster. Beware "program officers working on the fifth version of their strategy in as many years." (This tendency toward talk wasn't merely anecdotal: respondents from the philanthropy sector used 35% more words in their survey responses than anyone else.) No one commented on whether government should move faster or slower; that question is, at this moment in time, probably worthy of a separate survey (and a few martinis).

  • Build capacity to meet the moment. Many organizations—both public and private—try to control costs with headcount freezes. But especially within government, this can lead to workarounds that are more expensive still, when they "direct work to high-priced and temporary consultants or contractors." We should build capacity and drive efficiency; short-term fixes are only that.

  • Look who's talking. One respondent highlighted a strangely common hypocrisy: "a conference room full of people with master's degrees talking about why college is not necessary." Making good decisions requires building on expertise, especially first-hand expertise.

  • Get real on timelines. Quite a few respondents weighed in on lopsided timelines. Too many foundations have "very long application processes with quick turnaround times for the applicants." Meanwhile, governments "take months to pay invoices for nonprofit service providers." Not surprisingly, nonprofits were especially frustrated here, with 40% calling it out as a major pain point.

RELYING ON POWERPOINT TRUTHS

Simple, clear communications are good. Saying that a car can go from zero to 60 in four seconds makes it clear it's fast. But that doesn't mean the work itself—the engineering required to design the car—is simple. We need experts making sophisticated decisions so we can talk about them simply.

  • Reach isn't the ballgame. It's year-end report time, and just about every organization and government agency we see describes programs in terms of the number of people reached. That can be meaningful, but often it's simplistic to a fault. Consider reporting on the outcomes of the people reached—did their lives get better?

  • Know your stuff. Then there’s the issue—exacerbated by AI, no doubt—of decision makers being too far removed from specifics of what they’re overseeing. "There’s a lot of modernization efforts that don’t understand the systems they try to change or the technology," wrote one respondent. Successful modernization requires a deep understanding of the original processes and intended goals.

WHAT WAS NOT MENTIONED…AND WHAT WAS

We were a bit surprised that there wasn't much mention of specific programs or policy verticals that people were jaded about, or saw as boondoggles or worst practices. Especially given the early 2025 chaos that hit programs in the Department of Education, Job Corps, and many other workforce and education programs, we expected to hear more about the things people might be willing to see eliminated. We also anticipated that the upending of programs a year ago might have led to a greater rethinking of what worked and what didn't, but these themes did not show up in our responses.

Respondents focused on process, measurement, and execution—these seem to be pervasive across different places in the ecosystem. From ROI studies to measures of reach, philanthropies were cautious about the measurements that were shared with them—and potentially funded by them. Nonprofits felt that measurements and timelines from funders created undue burden and limited value.

Where does that drive things? Towards effective, vulnerable, and realistic leaders. The entire ecosystem seems to be calling for each player to do less corner cutting and to focus on quality execution, process, and measurement. Most of this isn't rocket science; it isn't about honing new skills so much as shrugging off legacy baggage. It's a new year, and it's time to make the best of it.


[1] For those of you who tuned out in 2008: collateralized debt obligations are financial instruments whose complexity and opacity make their riskiness challenging to assess; they're widely blamed for accelerating the financial crisis. The point of the analogy here is that meta-analyses often seem good, but they're built on a wide range of studies of varying relevance and quality, and can be hard to understand or tie to a specific use case; it's hard to know how risky it is to rely on them without interrogating the underlying studies.
