
Evidence-Informed Policy

Avoiding perversions of evidence-informed decision-making

By Suvojit Chattopadhyay

Photo: Emanuel Migo giving a presentation in Garantung village, Palangkaraya, Central Kalimantan, Indonesia.

How to avoid: “We saw the evidence and made a decision… and that decision was, since the evidence didn’t confirm our priors, to try to downplay the evidence.”

Before we dig into that statement (based on a true story involving people like us), we start with a simpler, more obvious one: many people are involved in evaluations. We use the word ‘involved’ broadly. Our central focus in this post is on the people who may block the honest presentation of evaluation results.

In any given evaluation, several groups of organizations and people have a stake in the program or policy being evaluated. Most obviously, there are researchers and implementers. There are also participants. And, in much of the global development ecosystem, there are funders of the program, who may be separate from the funders of the evaluation. Both sets of funders may work through sub-contractors and consultants, bringing yet more actors on board.

Our contention is that not all of these actors are explicitly acknowledged in the current transparency movement in social science evaluation, with implications for the later acceptance and use of results. The focus is often on a contract between researchers and evidence consumers as a sign that, in Ben Olken’s (2015) terms, researchers are not nefarious and power-hungry (statistically speaking). To achieve its objectives, however, the transparency movement requires more than study registration and committing to a core set of analyses ex ante (through pre-analysis or commitment-to-analysis plans).

To ensure that research is conducted openly at all phases, transparency must include engaging all stakeholders, perhaps particularly those who can block the honest sharing of results. This is in line with, for example, EGAP’s third research principle on rights to review and publish results. We return to some ideas for how to encourage this at the end of this post.

Thinking about stakeholder risk and accountability in pilot experiments

By Heather Lanthorn

Photo: ACT malaria medication.

Heather Lanthorn describes the design of the Affordable Medicines Facility – malaria (AMFm), a financing mechanism for expanding access to antimalarial medication, as well as some of the questions countries faced as they decided whether to participate in its pilot, particularly questions of risk and reputation.

In my never-ending thesis, I examine the political economy of adopting and implementing a large global health program, the Affordable Medicines Facility – malaria, or the “AMFm”. The program was designed at the global level, meaning largely in Washington, DC and Geneva, with tweaking workshops in assorted African capitals. Global actors invited select sub-Saharan African countries to apply to pilot the AMFm for two years before any decision would be made to continue, modify, scale up, or terminate the program. One key point I make is that implementing stakeholders see pilot experiments with uncertain follow-up plans as risky: they take time and effort to set up, and they often have unclear lines of accountability, presenting risk to personal, organizational, and even national reputations. This can make stakeholders resistant to being involved in experimental pilots.

It was not fully clear from the outset what role the evidence from the pilot would play in the board’s decision, or how that evidence would be interpreted. As I highlight below, this lack of clarity fostered feelings of risk, as well as resistance among some national-level stakeholders to participating in the pilot. Several critics have noted that the scale and scope of the AMFm, and the new systems and relationships it required, disqualify it from being considered a ‘pilot’, though I use that term for continuity with most other AMFm-related writing.

My research focuses on the national and sub-national processes of deciding to participate in the initial pilot (‘phase I’) stage, specifically in Ghana. Besides the scale of the project and the resources mobilized, one thing that stood out was the considerable resistance to piloting the program among stakeholders in several of the invited countries. I am grateful that a set of key informants in Ghana, as well as my committee and other reviewers, have been willing to converse openly with me over several years as I have tried to untangle the reasons behind the support and the resistance, and to get the story ‘right’.

Building evidence-informed policy networks in Africa

By Paromita Mukhopadhyay

Evidence-informed policymaking is gaining importance in several African countries. Networks of researchers and policymakers in Malawi, Uganda, Cameroon, South Africa, Kenya, Ghana, Benin and Zimbabwe are working assiduously to ensure that credible evidence reaches government officials in time, and to build policymakers’ capacity to use that evidence effectively. The Africa Evidence Network (AEN) is one such body, working with the governments of South Africa and Malawi. It held its first colloquium in November 2014 in Johannesburg.

Africa Evidence Network, the beginning

A network of over 300 policymakers, researchers and practitioners, the AEN is now emerging as a regional body in its own right. The network began in December 2012 with a meeting of 20 African representatives at 3ie’s Dhaka Colloquium of Systematic Reviews in International Development.

Buffet of Champions: What Kind Do We Need for Impact Evaluations and Policy?

Heather Lanthorn's picture
I realize that the thesis of “we may need a new kind of champion” sounds like a rather anemic pitch for Guardians of the Galaxy. Moreover, it may raise hopes that I am going to propose that dance-offs be used more often to decide policy questions. While I don’t necessarily deny that this is a fantastic idea (and one that would certainly boost C-SPAN viewership), I want to quickly dash any hope that it is the main premise of this post. Rather, I am curious why “we” believe that policy champions will be keen on promoting and using impact evaluations (and subsequent syntheses of their evidence), and I suggest that another set of actors, whom I call “evidence” and “issue” champions, may be more natural allies.

There has been a recurring storyline in recent literature and musings on (impact) evaluation and policy- or decision-making:
  • First, the aspiration: the general desire of researchers (and others) to see more evidence used in decision-making (let’s say both judgment and learning) related to aid and development so that scarce resources are allocated more wisely and/or so that more resources are brought to bear on the problem.
  • Second, the dashed hopes: the realization that data and evidence currently play a limited role in decision-making (see, for example, the report, “What is the evidence on evidence-informed policy-making”, as well as here).
  • Third, the new hope: the recognition that “policy champions” (also “policy entrepreneurs” and “policy opportunists”) may be a bridge between the two.
  • Fourth, the new plan of attack: bring “policy champions” and other stakeholders into the research process much earlier in order to get uptake of evaluation results into debates and decisions. This even includes bringing policy champions (say, bureaucrats) on as research PIs.

There seems to be a sleight of hand at work in this formulation, and it is somewhat worrying in terms of equipoise and the possible use of the full range of results that can emerge from an impact evaluation. Said another way, it seems potentially at odds with the idea that the answer to an evaluation is unknown at its start.