HMPYC 80 – Research Methodology
Section E – Chapter 21: Evaluation Research
1. Introduction:
► The effectiveness of programmes, services and practice interventions has become
increasingly important for human services professionals over the last decade.
- In an age of accountability, managers, funders, and even clients demand evidence of
‘what works’, ‘how it works’ or ‘how it can be made to work better’.
► The mainstream of evaluation research in the human services professions consists of the
evaluation of programmes or other interventions.
- It is traditionally expected that such evaluations will be conducted by an outsider – an
evaluator – or at the very least, if conducted internally, that the person ‘switches hats’,
leaving the role of practitioner behind to conduct an evaluation as objectively as possible.
► Agency-based research agendas are often situated in research offices separate from
practitioners and the coalface of practice.
- This is no longer the only acceptable model of evaluation research, although some
contexts still place high value on the services of so-called ‘independent’ evaluators.
► Taylor states that intervention research is centrally concerned with the design and
development of interventions that work.
► Evaluating the effectiveness of the intervention under development is a pivotal stage in the
intervention research process leading to changes and improvements in interventions for
practice.
► Using advanced designs to test and refine the intervention sets intervention research
apart from programme evaluation.
► The crucial characteristic that distinguishes intervention research from other types of
evaluation research – and programme evaluation in particular – is that when intervention
research is attempted, something new is created and then evaluated.
- In other words, it is a new technology or intervention, an innovation, while most types of
evaluation research assume the prior existence of a service, programme or intervention
designed and developed by someone else – perhaps long before the evaluator ever
entered the field.
► In evaluation research, the development of the service, programme or intervention is not
part of the research design.
► On the one hand, when researchers are asked or feel compelled to design and develop
(and eventually evaluate) a new intervention, they are conducting intervention research.
- On the other hand, when they are asked or feel compelled to evaluate an existing
programme or service or engage in research after the development of such an initiative,
they enter the field of evaluation research.
2. Characteristics of Evaluation Research:
► Of all the types of research, evaluation research is probably the one best understood in
practice, as most people know what ‘evaluate’ means.
- Paradoxically, it is also the most misunderstood of all types of research.
► The term evaluation research is somewhat misleading in that there is no separate set of
research techniques that is distinctly applied for this single purpose.
- The definitions of evaluation are also varied.
► Evaluation research – The process of using credible data to make judgements about the
worth of a product, programme, service or process.
- In its simplest form, evaluation is the process of discovering the value of something;
evaluation can be aimed at anything from large-scale evaluation of services or programmes
to an individual practitioner determining whether a piece of work has made a difference for
a specific client.
► There are several intentions inherent to evaluation research, however, and these can be
summarised as follows:
i. First, there is a resurgence of attention on accountability-focused evaluation,
stemming from a strongly positivistic approach.
- The evidence-based movement has been acknowledged to work in opposition to
approaches that embrace respectful relationships and localised solutions that
promote community empowerment.
- Evaluators can be faced with demands for evidence that must be balanced against an
increasingly sophisticated understanding of what social settings, cultural values, and
influential evaluations actually entail.
- This includes the engagement required to ensure indigenous groups are properly
represented in an evaluation.
ii. Second, for this very reason, there are real advantages to getting close to practice,
but this must be balanced against the need for objective measures to demonstrate
outcomes.
iii. Third, in complex social contexts researchers often face wicked problems: the
interconnectedness of one problem with other problems. In such circumstances it
can be difficult to decide what a valid outcome is and which standards it should be
measured against.
iv. Fourth, researchers have to balance the context-specific nature of evaluative
research with the need for generalisable knowledge – especially where the intention
of the evaluation is to improve service delivery or policy designs.
v. Fifth, in practice contexts where (often very limited) resources are directed at service
delivery, and research capability is limited, the availability of data about programme