Summary Impact Evaluation incl. seminars, chapters, and an example test 'cheat' sheet
Course: Impact Evaluation
Institution: Erasmus Universiteit Rotterdam (EUR)
Summary of all relevant information for the Impact Evaluation course including seminars and chapters, and including an example 'cheat' sheet for your exam.
Chapter 2 Basic Issues of Evaluation
Introduction: Monitoring versus Evaluation (M&E)
Several approaches can be used to evaluate programs:
- Monitoring tracks key indicators (outcomes/objectives) of progress over the course of a program
as a basis on which to evaluate outcomes of the intervention -> improve policy design and
implementation and promote accountability among policy makers.
- Evaluation is the systematic and objective assessment of results. It seeks to determine whether
changes in targets are due to the specific policies undertaken and not to other factors.
o Operational/Process evaluation examines how effectively programs were implemented
and whether there are gaps between planned and realized outcomes (e.g. cost-benefit
analysis)
o Impact evaluation studies whether the changes in well-being are indeed due to the
program intervention and not to other factors (quantify effects).
Monitoring
- Identify the goals that the program/strategy is designed to achieve.
- Identify the key indicators that can be used to monitor progress against these goals (e.g. in
context of poverty, the proportion of people consuming fewer than 2,100 calories).
- Set targets, which quantify the level of the indicators that
are to be achieved by a given date.
- Establish a monitoring system to track progress towards
achieving specific targets and to inform policy makers ->
encourage better management and accountability.
Indicators
Final indicators measure the outcomes (e.g. higher consumption
per capita) and the impact on dimensions of well-being (e.g.
reduction of consumption poverty).
Intermediate indicators measure the inputs into the program
(e.g. a conditional cash transfer, a wage subsidy scheme) and the
outputs of the program (e.g. roads built, unemployed men and
women hired).
- Intermediate indicators typically vary more quickly than final indicators, respond more rapidly to
public interventions, and can be measured more easily and in a more timely fashion.
- Better a few indicators that can be measured properly than many that cannot.
Results-Based Monitoring
Results-based monitoring -> the actual execution of a monitoring system. 10 Steps:
1. Readiness assessment (needs & characteristics, key players, responsibility, how to respond to
negative pressures, etc.)
2. Agree on specific outcomes to monitor and evaluate, as well as key performance indicators
to monitor outcomes.
3. Decide how trends in these outcomes will be measured (e.g. test scores, attendance, etc.)
4. Determine instruments to collect information (comparisons, predictions, discussions).
5. Establish targets (can also be used to monitor results, periodic targets over time). Include
duration of likely effects, etc.
6. Collect good-quality data.
7. Timing of monitoring -> timing and organization of evaluations also drive the extent to which
they can help guide policy (e.g. if fast, can soon adjust).
8. Careful consideration of the means of reporting (incl. audience).
9. Create avenues for feedback.
10. Sustaining the M&E system within the organization (transparency, accountability, effective
management of budgets, responsibilities).
Challenges in Setting Up a Monitoring System
Primary challenges to effective monitoring include potential variation in program implementation
because of shortfalls in capacity among program officials, as well as ambiguity in the ultimate
indicators to be assessed.
Weaknesses have to be addressed through different approaches. Performance indicators, for
example, can be defined more precisely by (a) better understanding the inputs and outputs at the
project stage, (b) specifying the level and unit of measurement for indicators, (c) frequently collecting
community level data to provide periodic updates on how intermediate outcomes are evolving and
whether indicators need to be revised, and (d) clearly identifying the people and entities responsible
for monitoring.
- For data collection: survey timing, frequency, instruments, level of collection (e.g. individual,
household, community).
- Provide staff with training and tools for data collection and analysis and data verification.
- Policy makers might also need to establish how microlevel program impacts (at the community
or regional level) would be affected by country-level trends such as increased trade, inflation,
and other macroeconomic policies. A related issue is heterogeneity in program impacts across a
targeted group. The effects of a program, for example, may vary over its expected lifetime.
Relevant inputs affecting outcomes may also change over this horizon; thus, monitoring long-
term as well as short-term outcomes may be of interest to policy makers.
Operational Evaluation
Operational evaluation seeks to understand whether implementation of a program unfolded as
planned, based on initial project objectives, indicators, and targets (e.g. through interviews with
officials responsible for implementation). It checks whether there are gaps between planned and
realized outputs -> what to change the next time.
Challenges in Operational Evaluation
Includes monitoring how project money was ultimately spent or allocated across sectors (as
compared to what was targeted), as well as potential spillovers of the program into nontargeted
areas.
Operational Evaluation versus Impact Evaluation
Operational evaluation relates to ensuring effective implementation of a program’s initial objectives.
Impact evaluation is an effort to understand whether the changes in well-being are indeed due to the
project or program intervention (to what extent is there causation) -> focuses on outcomes and impacts.
The two are complementary.
However, although operational evaluation and the general practice of M&E are integral parts of
project implementation, impact evaluation is not imperative for each and every project. Impact
evaluation is time and resource intensive and should therefore be applied selectively. Policy makers
may decide whether to carry out an impact evaluation on the basis of the following criteria:
- The program intervention is innovative and of strategic importance.
- The impact evaluation exercise helps fill the knowledge gap of what works and what does
not (provided data are available).
Quantitative versus Qualitative Impact Assessments
Qualitative information (e.g. understanding local sociocultural context) is essential to sound
quantitative assessment. Qualitative information can, for example, help identify mechanisms through
which programs might have an impact -> aiding operational evaluation. But a qualitative assessment
on its own cannot assess outcomes against relevant alternatives or counterfactual outcomes. That is,
it cannot really indicate what might happen in the absence of the program. Mixed-methods approach
is often the most effective.
Quantitative Impact Assessment: Ex Post versus Ex Ante Impact
Evaluations
Quantitative methods: ex ante and ex post approaches.
- Ex ante: Determines the possible benefits or pitfalls of an intervention through simulation or
economic models. Attempts to predict the outcomes of intended policy changes, given
assumptions on individual behaviour and markets. Ex ante analysis can help in refining programs
before they are implemented, as well as in forecasting the potential effects of programs in
different economic environments.
- Ex post: Based on actual data gathered either after program intervention or before and after
program implementation. Ex post evaluations measure actual impacts. These evaluations,
however, sometimes miss the mechanisms underlying the program’s impact on the population,
which structural models aim to capture.
o Ex post evaluation is more expensive (it requires collecting data on actual outcomes) and
runs the risk that the intervention fails, which might have been predicted through ex
ante analysis. An ex post impact exercise is easier to carry out if the researchers
have an ex ante design of the impact evaluation.
- One can combine both analyses and compare ex post estimates with ex ante predictions.
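The contrast between the two approaches can be sketched numerically. In this toy example (all figures invented), an ex ante prediction is derived from an assumed behavioural parameter, here a hypothetical marginal propensity to consume (MPC), and then compared with an ex post estimate computed from observed before/after data:

```python
# Toy contrast between ex ante and ex post evaluation (all numbers invented).

# Ex ante: predict the impact of a cash transfer from an assumed behavioural
# parameter -- here, a hypothetical marginal propensity to consume of 0.8.
transfer = 50
assumed_mpc = 0.8
ex_ante_prediction = assumed_mpc * transfer   # predicted consumption gain: 40.0

# Ex post: measure the impact from (hypothetical) observed data on
# beneficiaries, assuming trends have been netted out by a valid design.
consumption_before = 200
consumption_after = 235
ex_post_estimate = consumption_after - consumption_before  # measured gain: 35

print(ex_ante_prediction, ex_post_estimate)
```

Comparing the two numbers shows whether the behavioural assumption (the MPC) held in practice or whether the intervention underperformed the prediction.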
The Problem of the Counterfactual
The main challenge of an impact evaluation is to determine what would have happened to the
beneficiaries if the program had not existed. E.g. the per capita household income of beneficiaries in
the absence of the intervention. A beneficiary's outcome in the absence of the intervention is the
counterfactual.
Ex post, one observes outcomes of this intervention on intended beneficiaries, such as employment
or expenditure. Does this change relate directly to the intervention? Has this intervention caused
expenditure or employment to grow? Not necessarily. The problem of evaluation is that while the
program’s impact (independent of other factors) can truly be assessed only by comparing actual and
counterfactual outcomes, the counterfactual is not observed. So the challenge of an impact
assessment is to create a convincing and reasonable comparison group for beneficiaries in light of
this missing data. Ideally, one would like to compare how the same household or individual would
have fared with and without an intervention or "treatment." In practice, one instead compares treated
with nontreated groups (where both are eligible to be treated), or outcomes before and after treatment.
Looking for a Counterfactual: With-and-Without
Comparisons
In this example, the control group yields an underestimate of the
program's effect because the control group had a higher starting
point. This can be deceptive when one does not know the starting
points of the groups.
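A minimal numeric sketch of this pitfall (all figures invented): when the control group starts richer, comparing post-program outcomes alone understates the program's effect by exactly the baseline gap.

```python
# Hypothetical with-and-without comparison when the control group
# starts from a higher income level. All numbers are invented.

y0_treated, y0_control = 100, 140   # baseline (pre-program) incomes
true_effect = 30                    # program's true impact on the treated
common_growth = 10                  # growth both groups would have had anyway

y1_treated = y0_treated + common_growth + true_effect   # 140
y1_control = y0_control + common_growth                 # 150

# Naive with-and-without comparison of post-program outcomes:
naive_estimate = y1_treated - y1_control                # 140 - 150 = -10

print(naive_estimate)  # -10: underestimates the true effect of 30
# The bias equals the difference in starting points (100 - 140 = -40):
assert naive_estimate - true_effect == y0_treated - y0_control
```

Without baseline data one cannot see this bias, which is why knowing the groups' starting points matters.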
Looking for a Counterfactual: Before-and-After
Comparisons
Looking at the starting point (Y0) and the post-intervention income (Y2),
the program's effect would appear to be Y2 - Y0. This is the reflexive method of impact evaluation
-> probably not realistic. Many other factors may have changed over the
period; not controlling for them means that one would falsely take the
participant's outcome in the absence of the program to be Y0, when it
might be Y1. Reflexive comparisons may still be useful in evaluations of
full-coverage interventions such as nationwide policies and programs in
which the entire population participates and there is no scope for a
control group.
A broad baseline study covering multiple pre-program characteristics of
households would be very useful, so that one can control for as many of these other factors as possible.
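The reflexive bias can also be sketched with invented numbers: the before-and-after estimate Y2 - Y0 picks up any secular trend that would have raised incomes even without the program.

```python
# Hypothetical reflexive (before-and-after) comparison. All numbers invented:
# Y0 = baseline, Y2 = observed post-program outcome, and a secular trend
# that would have raised incomes anyway (so the counterfactual is Y1, not Y0).

y0 = 100            # participant income before the program
trend = 15          # counterfactual growth without the program (Y1 - Y0)
true_effect = 20    # program's true impact

y1 = y0 + trend                 # unobserved counterfactual outcome
y2 = y0 + trend + true_effect   # observed post-program outcome

reflexive_estimate = y2 - y0    # 35: wrongly attributes the trend to the program
print(reflexive_estimate, true_effect)
# The bias is exactly the counterfactual change Y1 - Y0:
assert reflexive_estimate - true_effect == y1 - y0
```

Here the reflexive method overstates the effect (35 vs. 20) by exactly the trend it fails to net out.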
Basic Theory of Impact Evaluation: The Problem of Selection Bias
Without information on the counterfactual, the next best alternative is to compare outcomes of
treated individuals or households with those of a comparison group that has not been treated. There
are two broad approaches that researchers resort to in order to mimic the counterfactual of a
treated group: (a) create a comparator group through a statistical design, or (b) modify the
targeting strategy of the program itself.
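Selection bias can be made concrete with a small simulation (all parameters invented): if more motivated households both enroll in the program and would have earned more anyway, a naive treated-versus-untreated comparison overstates the true effect.

```python
import random

random.seed(0)

# Hypothetical simulation of selection bias. Assumed setup: unobserved
# "motivation" drives both enrollment and earnings without the program.
TRUE_EFFECT = 10
households = []
for _ in range(10_000):
    motivation = random.gauss(0, 5)
    enrolled = motivation > 0                            # selection on motivation
    y_untreated = 100 + motivation + random.gauss(0, 1)  # outcome w/o program
    y = y_untreated + (TRUE_EFFECT if enrolled else 0)
    households.append((enrolled, y))

treated = [y for e, y in households if e]
control = [y for e, y in households if not e]
naive = sum(treated) / len(treated) - sum(control) / len(control)

print(round(naive, 1))  # well above 10: true effect plus selection bias
```

The naive gap mixes the program's impact with the pre-existing motivation gap between the groups, which is exactly what a statistical design (or a modified targeting strategy) is meant to remove.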