Five years ago, the Disease Control Priorities in Developing Countries report framed a central problem we face in our efforts to help people improve their lives: proven, effective health interventions do not reach the people who need them.
“The low use of effective interventions—in the developing world in general and among the poor in particular—translates into rates of mortality, morbidity, and malnutrition that are far higher than necessary. If use of all the proven effective childhood preventive and treatment interventions, for example, were to rise from their current levels to 99 percent—95 percent for breastfeeding—the number of under-five deaths worldwide could fall by as much as 63 percent… Deaths from malaria and measles could be all but eliminated, and deaths from diarrhea, pneumonia, and HIV/AIDS could be reduced dramatically.”
The urgency to increase the coverage and quality of effective interventions has not abated. Efforts to ensure that vaccines reach children who need them, that health professionals have the tools to treat malnutrition and deliver healthy babies, and that women can use contraceptives and breastfeed their children feature heavily in the Gates Foundation’s strategies and collaborations with partners globally.
To do this work well, we need to deeply understand the contexts in which we work, connect demand with the supply of products, leverage both the private and public sectors, and encourage the mix of incentives and institutions most likely to produce the sustainable, well-managed health services that poor people need most.
At the foundation we have been looking at shifting the way we define and prioritize resources for evaluation. Below we highlight three ways that doing so will help the foundation and our partners reach people in need of urgent health interventions in the poorest countries in the world:
Evaluate how and why outcomes are achieved, not just whether. The foundation supports many key partners working to increase the coverage of effective health solutions. Collectively, we measure coverage of these interventions to track our progress in increasing the number of people reached by proven solutions. We invest in evaluation and modeling to estimate the lives saved for a given increase in coverage. This is important information, but it tells only part of the story. We need evaluation to shed light on how and why coverage does and can increase, and to test different approaches to drive, sustain, and expand coverage so key interventions reach more and more people.
Evaluate to inform national and local decision making rather than international accountability. We need to make evaluation more valuable for decision makers in poor countries. This means moving away from the expectation that evaluation’s main purpose is to demonstrate value for money, and toward an expectation that even the most rigorous evaluation can also serve as a key tool for national and local policy makers. One of the most often-cited examples of the successful use of evaluation in policy making is the Mexican government’s main anti-poverty program, Oportunidades, a nearly twenty-year-old conditional cash transfer initiative that helps poor families invest in the education, health, and nutrition of their children. Mexican officials conceived of this program and planned its evaluation from the outset. Perhaps most importantly, the multi-year experience of evaluating a national program and seeing the value of the evidence led the Mexican government to institutionalize evaluation within a strong national Monitoring & Evaluation (M&E) system. This trend is present in other countries as well, including Uganda, Colombia, and Chile, and it provides a powerful signal that we can leverage scarce resources by supporting strong national M&E systems that contribute to improving people’s health over the long term.
Evaluate innovation, not just our theories. Our interest in catalyzing more effective, efficient health delivery requires that we lean heavily on innovation. This can be tough for an industry that relies on “theories of change” and logic models to design programs and define metrics for reporting. While we need to stay disciplined in our results-planning tools, we also need to keep our eyes on innovative approaches we haven’t tried and use scarce evaluation resources to test and learn whether they are effective drivers of sustainable increases in coverage. Innovation funds started by the US and British governments are an interesting signal for other donors looking for collective ways to support creativity and risk-taking while generating rigorous evidence for decision making.
These ideas are not new, and they are definitely not mine alone. Dialogue about the science of delivery, and the shifts we need to produce more real-time, actionable evidence and feedback, fills the blogs and hallways of major development organizations like the World Bank. The foundation’s evaluation policy represents a commitment to these shifts as we collaborate with partners to relentlessly increase the coverage rates of effective interventions. We hope that with the growing global focus on getting delivery right, we can be sure that we’re also continuing to get evaluation right. At the end of the day, results are only as good as the extent to which they are integrated into future decisions and better policies and programs, achieving more impact for the people we serve.