Summary: Impact Evaluation in Practice – Policy Evaluation
Chapter 1
Impact evaluation assesses the changes in the well-being of individuals that can be attributed
to a particular project, program or policy.
Attribution is the hallmark of impact evaluations: identifying the causal relationship
between the program and the outcomes of interest.
In addition to addressing the basic question of whether a program is effective or not,
impact evaluations can also be used to explicitly test alternative program modalities or design
innovations (e.g. comparing a training program to a promotional campaign to see which one is
more effective in raising financial literacy).
- Monitoring: a continuous process that tracks what is happening within a program and uses
the data collected to inform program implementation, day-to-day management and decisions;
a critical source of information about program performance.
- Evaluations: periodic, objective assessments of a planned, ongoing or completed project,
program or policy.
They answer specific questions related to design, implementation and results; they are
conducted at a discrete point in time and often seek the outside perspective of experts.
- Descriptive questions: ask about what is taking place
- Normative questions: compare what is taking place to what should be taking place
- Cause-and-effect questions: focus on attribution, the difference the
intervention makes to outcomes
Drawing on both quantitative and qualitative data:
- Qualitative: expressed not in numbers, but rather by means of language or
sometimes images
- Quantitative: numerical measurements and commonly associated with scales or
metrics
Impact evaluations are a particular type of evaluation that seeks to answer a specific cause-
and-effect question: what is the impact of a program on an outcome of interest?
Impact: the changes directly attributable to a program, program modality or design innovation
So: impact evaluations always answer cause-and-effect questions.
Counterfactual: what the outcome would have been for program participants if they had not
participated in the program; since this cannot be observed, a control group is used to estimate it.
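Since the counterfactual cannot be observed directly, the control group stands in for it. A minimal sketch of the resulting impact estimate, using made-up outcome data purely for illustration:

```python
# Minimal sketch: estimating program impact as the difference in mean
# outcomes between a treatment group and a control group.
# All numbers below are invented for illustration.

def mean(values):
    return sum(values) / len(values)

treatment_outcomes = [72, 68, 75, 80, 71]  # e.g. financial literacy scores
control_outcomes = [65, 63, 70, 66, 61]    # comparable group without the program

# The control group approximates the counterfactual: what the treated
# group's outcomes would have been without the program.
impact_estimate = mean(treatment_outcomes) - mean(control_outcomes)
print(round(impact_estimate, 1))  # 8.2
```

The whole difficulty of impact evaluation lies in making the control group comparable enough that this difference can credibly be attributed to the program.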
Impact evaluations can be divided into two categories:
- Prospective evaluations: developed at the same time as the program is being
designed and built into program implementation; baseline data are collected
for both control and treatment groups
- Retrospective evaluations: assess program impact after the program has been
implemented, looking for treatment and control groups ex post.
Prospective evaluations are more likely to produce strong and credible results, for three reasons:
1. baseline data can be collected to establish measures of the outcomes of interest before the
program starts, both to check that the groups are similar and to assess effectiveness later
2. focuses both the program and the evaluation on intended results
3. have the best chance of generating valid counterfactuals
Retrospective evaluations are necessary to assess programs that were established in the past;
a valid estimate of the counterfactual is more difficult to obtain.
Main role of impact evaluation is to produce evidence on program performance for the use of
government officials, program managers, civil society and other stakeholders
The question of generalizability is key for policy makers.
Early days:
- efficacy studies: carried out in a specific setting under closely controlled
conditions to ensure fidelity between the evaluation design and program
implementation; not that informative about the impact of a similar project under
normal circumstances
they do not always adequately represent more general settings, which are usually the
prime concern of policymakers
- effectiveness studies: evidence from interventions that take place in normal
circumstances and aim to produce findings that can be generalized to a large
population
External validity (here): results generalizable to intended beneficiaries beyond the evaluation
sample, so long as the expansion uses the same implementation structures and reaches similar
populations as in the evaluation sample
Complementary approaches:
without information from process evaluations on the nature and content of the program to
contextualize evaluation results, policy makers can be left puzzled about why certain results
were or were not achieved.
- Monitoring: verifying whether activities are being implemented as planned (e.g.
which participants received the program, how fast the program is expanding, and
how resources are being spent), typically using administrative data
- Ex ante simulations: use available data to simulate the expected effects of a
program or policy reform on outcomes of interest
- Mixed methods: combine quantitative and qualitative data to help generate
hypotheses and focus research questions before quantitative data are collected, and
to provide perspectives and insights on a program’s performance during and after
program implementation; data are gathered through focus groups, life histories and
interviews, but also through observational and ethnographic assessments; helpful not
for generalizability, but for understanding why certain results have (or have not) been achieved
o Convergent parallel: both quantitative and qualitative data collected at
same time and used to triangulate findings or to generate early results
o Explanatory sequential: qualitative data provide context and explanations
for quantitative results; the qualitative work helps explain the why
o Exploratory sequential: qualitative approaches to develop hypotheses as to
how and why the program would work and to clarify research questions
that need to be addressed in the quantitative impact evaluation
- Process Evaluations: focus on how a program is implemented and operates,
assessing whether it conforms to its original design and documenting its
development and operation
if not: there is a risk that impact evaluation resources are misspent, or that needed
adjustments in program design are introduced once the evaluation is underway
- Cost-Benefit and Cost-Effectiveness Analysis:
o Cost-Benefit: compares the estimates of the total expected benefits of a
program to the total expected costs
o Cost-Effectiveness: compares the relative cost of two or more programs or
program alternatives in reaching a common outcome (e.g. student test
scores)
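The ex ante simulation idea above can be sketched in a few lines; all parameters here (transfer size, marginal propensity to consume, baseline distribution) are illustrative assumptions, not figures from the text:

```python
import random

# Ex ante simulation sketch: use assumed parameters to simulate the
# expected effect of a hypothetical cash transfer on household consumption.
random.seed(0)

TRANSFER = 100.0            # assumed transfer per household
MARGINAL_PROPENSITY = 0.7   # assumed share of the transfer that is consumed

# Simulated baseline consumption for 1,000 hypothetical households.
baseline = [random.gauss(500, 50) for _ in range(1000)]

# Expected consumption under the simulated policy reform.
simulated = [c + MARGINAL_PROPENSITY * TRANSFER for c in baseline]

expected_effect = sum(simulated) / len(simulated) - sum(baseline) / len(baseline)
print(round(expected_effect, 2))  # 70.0 by construction
```

Unlike an impact evaluation, such a simulation produces an expected effect before the program exists; its credibility rests entirely on the assumed parameters.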
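The cost-effectiveness comparison above reduces to a cost-per-outcome ratio; a minimal sketch with invented program figures:

```python
# Cost-effectiveness sketch: compare two hypothetical programs that raise
# student test scores. Costs and score gains are invented for illustration.
programs = {
    "tutoring": {"cost": 50_000.0, "score_gain": 4.0},
    "textbooks": {"cost": 20_000.0, "score_gain": 2.5},
}

# Cost per unit of the common outcome: dollars per test-score point gained.
cost_per_point = {name: p["cost"] / p["score_gain"] for name, p in programs.items()}

# The program with the lowest cost per point is the most cost-effective.
best = min(cost_per_point, key=cost_per_point.get)
print(best, cost_per_point[best])  # textbooks 8000.0
```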
Ethical considerations regarding impact evaluation concern: the lack of an evaluation, the rules
used to assign program benefits, the methods by which human subjects are studied, and
transparency in documenting research plans, data and results.
Basic form of impact evaluation: is a given program or intervention effective compared to the
absence of the program? The core challenge is constructing a control group that is as similar as
possible to the treatment group.
Internal validity: degree of comparability between treatment and control groups
Treatment arms: e.g. a program may wish to test alternative outreach campaigns, selecting one
group to receive a mailing campaign, another to receive house-to-house visits, and yet another
SMS messages, to assess which is most cost-effective.
Also: is the program more effective for one subgroup than for another?
External validity (here): evaluation sample statistically representative of the population of
eligible units from which the evaluation sample is drawn
Generalizability: whether results from an evaluation carried out locally will hold true in other
settings and among other population groups.
Impact evaluations should be used selectively when the question being posed calls for a
strong examination of causality.
Intervention to be evaluated should be:
- Innovative
- Replicable
- Strategically relevant
- Untested
- Influential
Chapter 2
Preparing for an evaluation:
1. Constructing a Theory of Change (ToC): a description of how an intervention is supposed to
deliver the desired results; the causal logic of how and why a particular
program will reach its intended outcomes.
a. explore conditions and assumptions needed for the change to take place
b. make explicit the causal logic behind the program
c. map the program interventions along logical causal pathways
i. Best done at the beginning of the design process; bring stakeholders
together to develop a common vision for the program
d. also review literature for accounts of experience with similar programs and
verify the contexts and assumptions behind the causal pathways in the ToC
they are outlining