Doing Us Dirty: Why “evidence-based” isn’t all it’s cracked up to be

We have a public health problem. Evidence-based programs can’t fix it. Here’s why.

Public health: politics first, public second

Public health is a public service, which makes it political (in the U.S., anyway). And those politics often stand in the way of sound public policy. Great example: the federal ban on funding for syringe exchange. Way back in 1984, a publicly funded syringe exchange program was started in the Netherlands to prevent the spread of hepatitis B. By 1986, in the face of the AIDS epidemic, the evidence supporting these programs had mounted enough that Margaret Thatcher and her conservative government instituted one in the U.K. But here, in the U.S.? Here, we were so enamored of the War on Drugs that we banned federal funding for syringe exchange in 1988. Boy, did we screw that one up. A 1997 estimate showed that we could have reduced new HIV infections by 15–33% just by funding syringe exchange. Yet the ban stood until 2009. After that brief reprieve, it was promptly reinstated for political reasons in 2011. Politics always wins. Evidence be damned.

We’re big on sloppy seconds

As a field, we find ourselves trailing behind the private sector, picking up discarded bits of overused business terms. (Right now, it’s engagement and stakeholders. We also like data-driven and evidence-based.) These terms aren’t inherently bad, but like all words, they lose meaning when you repeat them ad nauseam. My guess is that consultants bring these terms from their business clients over to public health, and bam! We’re drafting stakeholder engagement strategies. The problem here is that the context has changed. We’re not making cool and useful stuff for a small segment of the population, or trying to figure out how our existing customer base will respond to rebranding. Yes, there is a lot that the public sector could learn from the private sector, but the fact remains that we’re not the private sector. We serve the entire public, and we serve them in a way that impacts one of the most fundamental parts of their being. Our programs can’t always pivot based on some rapid user feedback, because those seemingly minor changes can have a major impact on people’s well-being. So that evidence? We need a whole lot of it to make decisions.

No, really, the fat-free version tastes the same!

Let’s say you have a completely uncontroversial public health topic (doesn’t exist, but let’s pretend). Based on a huge body of evidence, we’ve hypothetically developed equally uncontroversial best practices. Even in perfect conditions, the resulting public health programs will not look the same as they did in the studies. Programs get diluted or altered. The results are slightly different because the climate is warmer, or the transportation infrastructure is moderately better, or there’s an administrator who once had a bad dream that a flock of rogue condoms was chasing her. People are messy and complicated. The same goes for programs and systems. Add in the mantra of “having to do more with less”, and you have a different program altogether. And that’s without getting into complicating issues like structural barriers, institutional racism, deep poverty, and other major social problems. Realistically, on-the-ground public health has little to do with study-conditions public health. That evidence is based on study conditions. Change the conditions, change the results.

Slower than dirt

Evidence is really, really slow. The American Journal of Public Health published a fascinating supplement this month with a bunch of articles on evidence-based decision-making in public health departments. It makes for a great read if you’re into that kind of thing. The problem here is the lag time. A study published in 2015 may have been completed in 2013 or 2014, and based on information from 2009 or 2010. The CDC just got to publish a study based on its 2009 Medical Monitoring Project data last summer. For a frame of reference, that means we just got medical care access data from before Obamacare was even implemented. It’s not the CDC’s fault; that’s just how long these things take. Then we take this evidence and build programs around it, and by the time those programs get out into the world, they’re based on things that were true ten years ago. Which means that right now, we’re introducing new public health programs based on evidence from before YouTube, Chuck Norris jokes, and the iPhone existed.


Yeah, think about that for a second.


The takeaway

Evidence is great — it’s just not enough. Whether public health evidence gets applied at all is highly politicized. When it is applied, it’s often misguided, and it takes considerable resources to gather enough of it in the first place. And by the time it makes its way from the ground to the top and back to the ground again, it’s barely recognizable. And it’s old.

Old and slow as dirt.


This post was first published in Rebel Public Health over at Medium.