Summary of all literature & lectures for the course Criminology and Safety
Introducing EMMIE: an evidence rating scale to encourage mixed-method crime prevention synthesis reviews - Johnson et al. (2015)
EMMIE = a coding system for assessing the quality and coverage of systematic reviews on the effectiveness of crime prevention interventions. Five dimensions that systematic reviews (SRs) should attend to:
- Effect of the intervention
- Identification of the mechanism through which the interventions work
- Factors that moderate the impact
- Practical implementation issues
- Economic costs
“The proper agenda for the next generation of treatment effectiveness
research, for both primary and meta-analytic studies, is investigation into
which treatment variants are most effective, the mediating causal
processes through which they work, and the characteristics of recipients,
providers, and settings that most influence their results.”
At present, the focus lies mostly on internal validity.
EMMIE framework:
The preceding discussion suggests that the adequately evidence-equipped
policymaker and practitioner need to know the following about
interventions they might want to implement:
- E: the overall effect direction and size (alongside major unintended
effects) of an intervention and the confidence that should be placed
on that estimate
- M: the mechanisms/mediators activated by the policy, practice or
program in question
o Mediators = the chains of events (or intermediate outcomes) that occur between a treatment and the ultimate outcomes produced. Such data should not be difficult to obtain
in primary evaluations, and systematic reviewers should have
no difficulty in determining whether chains of causality have
been explored in primary studies.
o It's important to understand what makes an intervention
achieve its intended (and unintended) outcomes. A strong
primary evaluation explains the theory behind the intervention
and collects data to test it. A good systematic review (SR)
summarizes these theories and analyzes the available
evidence to test them.
- M: the moderators/contexts relevant to the production/non-
production of intended and major unintended effects of different
sizes
o Moderators refer to variables that may explain variation in outcomes across different studies. They can include circumstances associated with differences in the efficacy of the intervention, such as the type of location. They can also include the study methods employed. For example, weaker effect sizes may be reported for RCTs than for quasi-experimental studies.
o Interventions don’t always work the same way or with the
same success every time. Their results can depend on where
and when they are applied, as well as on the people involved.
To decide if, when, where, and for whom an intervention is
suitable, policymakers need evidence about which groups or
settings are likely to benefit, which won’t be affected, and
which might experience negative effects.
- I: the key sources of success and failure in implementing the policy,
practice or program
o For both successful and unsuccessful initiatives, it is important
for the practitioner to know what was done, what was crucial
to the intervention and what difficulties might be experienced
if it were to be replicated elsewhere.
- E: the economic costs (and benefits) associated with the policy,
practice or program
o In policy terms, it is necessary but not sufficient that a given
measure is capable of producing an intended outcome. In
addition to the issues already discussed, the cost of
intervention will ideally be known.
Both primary evaluations and SRs may attend to each of these more or
less adequately. In assessing the evidence, it is thus important to
differentiate between what the evidence suggests (e.g., an estimate of
effect size) and the quality of that evidence (e.g., the methodological
adequacy of the studies on which the estimate is based).
Methodological Quality Standards for Evaluation Research -
Farrington (2003)
Cook and Campbell: Methodological quality depends on four criteria:
statistical conclusion validity, internal validity, construct validity, and
external validity. “Validity” refers to the correctness of inferences about
cause and effect. The main criteria for establishing a causal relationship
have been that (1) the cause precedes the effect, (2) the cause is related
to the effect, and (3) other plausible alternative explanations of the effect
can be excluded. The main aim of the Campbell validity typology is to
identify plausible alternative explanations (threats to valid causal
inference) so that researchers can anticipate likely criticisms and design
evaluation studies to eliminate them. If threats to valid causal inference
cannot be ruled out in the design, they should at least be measured and
their importance estimated.
- Statistical Conclusion Validity focuses on whether the intervention (cause) and the outcome (effect) are truly related. Effect size and its confidence intervals should be calculated. Statistical significance, while useful, is less important than effect size because it also depends on the sample size: a large effect in a small sample or a small effect in a large sample can both be statistically significant (a short numeric sketch of this point follows after this list). The main threats to this validity are:
o Low statistical power (e.g., due to a small sample size).
o Using inappropriate statistical methods (e.g., when the data don't meet the assumptions of the statistical test).
- Internal Validity focuses on whether the intervention truly caused
the change in the outcome. It is often considered the most
important type of validity. To answer this, a control condition is
essential to estimate what would have happened without the
intervention—this is known as the “counterfactual inference.”
Threats: these include selection effects, history, maturation/trends, instrumentation, testing effects, differential attrition, and regression to the mean (the same threats return in the Maryland Scale levels below).
- Construct Validity is about how well the intervention and
outcomes are defined and measured based on the theory behind
them. For example, in a study on whether interpersonal skills
training reduces offending, it’s important to check if the training
truly improved interpersonal skills and if arrests accurately reflect
offending. Unlike physical measures like height or weight,
criminological constructs are harder to define and measure. The
main threats to construct validity are:
o Whether the intervention successfully changed what it was
supposed to (e.g., treatment fidelity or implementation
issues).
o The accuracy and reliability of the outcome measures (e.g.,
whether police-recorded crime rates reflect actual crime
rates).
- External Validity is about whether the results of an intervention
can be applied to different people, places, times, or versions of the
intervention and outcomes (e.g., scaling up a pilot project). This is
hard to study in a single evaluation unless it’s a large, multisite trial.
Systematic reviews and meta-analyses of multiple studies are better
for assessing external validity. The main threats to external validity
are:
o Differences in how causal relationships (effect sizes) vary
across types of people, settings, interventions, or outcomes.
For example, an intervention to reduce offending might work
well for some groups or in some areas but not in others.
- Descriptive Validity is about how well key details of an evaluation are reported in a research study. Systematic reviews rely on evaluation reports to include important data, like the number of participants and effect sizes. At a minimum, an evaluation report should include such essential elements.
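A minimal numeric sketch of the Statistical Conclusion Validity point above (my own illustration, not from the readings; the Python/scipy choice and the numbers are assumptions): the same observed effect size can be far from significant in a small study and clearly significant in a large one, which is why effect sizes and confidence intervals are more informative than significance alone.

```python
# Sketch: how the p-value of the same observed effect size changes with sample size.
# Assumes an independent-samples t-test with two equal-sized groups; numbers are illustrative.
from math import sqrt
from scipy.stats import t as t_dist

def p_value_for_d(d, n_per_group):
    """Two-sided p-value for an observed Cohen's d with equal group sizes."""
    t_stat = d * sqrt(n_per_group / 2)   # t = d * sqrt(n1*n2 / (n1+n2)) with n1 = n2
    df = 2 * n_per_group - 2
    return 2 * t_dist.sf(abs(t_stat), df)

d = 0.30  # a modest, hypothetical treatment effect
for n in (20, 200):
    print(f"d = {d}, n = {n} per group -> p = {p_value_for_d(d, n):.3f}")
# d = 0.30 with 20 per group  -> p ≈ 0.35  (not significant)
# d = 0.30 with 200 per group -> p ≈ 0.003 (significant)
```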
The Maryland Scientific Methods Scale (SMS)
The main aim of the SMS is to communicate to scholars, policy makers,
and practitioners in the simplest possible way that studies evaluating the
effects of criminological interventions differ in methodological quality. In
constructing the SMS, the main aim was to devise a simple scale
measuring internal validity that could easily be communicated. Thus, a
simple 5-point scale was used for internal validity:
1. Level 1: correlation, single point in time. No history, no control for
different areas.
a. Correlation between a prevention program and a measure of
crime at one point in time (e.g., areas with CCTV have lower
crime rates than areas without CCTV). This design fails to rule
out many threats to internal validity and also fails to establish
causal order.
2. Level 2: before and after, no control group. Pre-post only; the observed change could also be explained by trends or other factors.
a. Measures of crime before and after the program, with no
comparable control condition (e.g., crime decreased after
CCTV was installed in an area). This design establishes causal
order but fails to rule out many threats to internal validity.
Level 1 and level 2 designs were considered inadequate and
uninterpretable by Cook and Campbell.
3. Level 3: before and after, with control group.
a. Measures of crime before and after the program in
experimental and comparable control conditions (e.g., crime
decreased after CCTV was installed in an experimental area,
but there was no decrease in crime in a comparable control
area). It rules out many threats to internal validity, including
history, maturation/trends, instrumentation, testing effects,
and differential attrition. The main problems with it center on
selection effects and regression to the mean (because of the
nonequivalence of the experimental and control conditions). A small worked sketch contrasting level 2 and level 3 follows after this list.
4. Level 4: before and after, multiple experimental and control units, controlling for other variables. Still does not control for underlying (unmeasured) factors.
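A minimal worked sketch (hypothetical numbers, my own illustration, not from the readings) of why a level 3 design is more informative than a level 2 design: the before-after change in the experimental area is compared with the change in a comparable control area, so a general trend is not credited to the intervention.

```python
# Hypothetical crime counts contrasting SMS level 2 and level 3 reasoning (illustration only).
experimental = {"before": 120, "after": 90}   # area where, say, CCTV was installed
control      = {"before": 115, "after": 100}  # comparable area without the intervention

# Level 2 logic: before-after change in the experimental area only.
level2_change = experimental["after"] - experimental["before"]    # -30

# Level 3 logic: subtract the change in the control area, so that a general
# downward trend (history, maturation) is not attributed to the intervention.
control_change = control["after"] - control["before"]             # -15
level3_effect  = level2_change - control_change                   # -15

print("Level 2 estimate:", level2_change, "offences")   # overstates the effect
print("Level 3 estimate:", level3_effect, "offences")   # nets out the shared trend
```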