M. Medema
Advanced Research Methods and Statistics
Track specific - Clinical Psychology
Including
● Practical 1 - Power & Effect Size, Manipulation & Randomization check
● Practical 2 - Interaction in ANOVA & Regression + Posthoc Testing
● Practical 3 - Mediation, Meta-analysis
Practical 1
Power & Effect Size
Effect size
The effect size measures the proportion of total variance in the dependent variable that is
associated with membership of the groups defined by an independent variable. It is an
objective, standardised measure of the size of an observed effect.
There are different ways of measuring an effect size: for example, R² in a regression and
partial eta-squared in an ANOVA.
● Partial eta-squared (pƞ2)
Small effect: .01
Medium effect: .06
Large effect: ≥ .14
● Pearson r (or regression 𝜷)
Small effect: .10
Medium effect: .30
Large effect: ≥ .50
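The benchmarks above can be captured in a small helper. A minimal sketch (the function name and structure are my own; the cut-offs are the conventional benchmarks listed above, applied to the absolute value of the effect):

```python
# Hypothetical helper: classify an effect size against the benchmark
# cut-offs quoted in these notes.

def classify_effect(value, measure):
    """Return 'negligible', 'small', 'medium' or 'large' for a given measure."""
    benchmarks = {
        "partial_eta_squared": (0.01, 0.06, 0.14),
        "pearson_r": (0.10, 0.30, 0.50),
    }
    small, medium, large = benchmarks[measure]
    v = abs(value)
    if v >= large:
        return "large"
    if v >= medium:
        return "medium"
    if v >= small:
        return "small"
    return "negligible"

print(classify_effect(0.08, "partial_eta_squared"))  # medium
print(classify_effect(0.25, "pearson_r"))            # small
```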
There is therefore a difference between significant and relevant. An effect can be significant
(p < .05) yet not relevant (a negligible to small effect size).
For example, in the clinical context: a study investigated which factors were associated with
PTSD, including general factors (age, gender) as well as tonic immobility, trait anxiety and
trauma responses, in a large sample of 4,781 participants. It was concluded that tonic
immobility, trait anxiety and dissociative tendencies predicted PTSD severity.
In the table that the conclusion was based on, the three variables in question (marked with a
green box) are all significant (p < .001) and all have a large effect size (𝜷). Note that the
factor age (marked with a red box) is also significant (p < .01), yet it was not considered to
be associated with PTSD, because its effect size is small (𝜷 = .06, below the .10 benchmark).
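This significant-but-not-relevant pattern is easy to reproduce in a simulation. A sketch (the sample size matches the study above; the true correlation of .06 is an assumed value chosen to mimic a weak effect):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 4781  # same sample size as the PTSD study above

# Simulate a predictor and an outcome with a *weak* true correlation (~.06).
true_r = 0.06
x = rng.standard_normal(n)
y = true_r * x + np.sqrt(1 - true_r**2) * rng.standard_normal(n)

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")
# With ~4800 participants even this tiny effect is typically significant,
# while r stays well below the .10 'small effect' benchmark.
```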
Power and Effect Size
Statistical power determines the probability of detecting an existing effect of a particular
size, and therefore the chance of correctly rejecting the null hypothesis.
Power is 1 - β, where β is the probability of a Type II error. The conventional goal is .80.
*Note that this β is a different beta than the beta in a regression analysis
A Type II error (β) is the chance that an effect will not be detected when in fact this effect
is present: the null hypothesis (H0) is not rejected, even though it is false.
True statements:
- If your effect size & sample are large, you can assume that your power is large
- With a large number of participants, it is important to look at the effect size before you
draw conclusions about your findings
- With a small number of participants, it is important to look at the effect size before you
draw conclusions about your findings
- If your sample is large and your effect size is small, you can NOT reduce the sample size
to increase the effect size
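The statements above can be checked numerically. A sketch using the normal-approximation power formula for a two-sided one-sample z-test (the function name is my own; the effect sizes and sample sizes are illustrative):

```python
from scipy.stats import norm

def ztest_power(d, n, alpha=0.05):
    """Approximate power of a two-sided one-sample z-test.

    d is the standardized effect size (Cohen's d), n the sample size.
    """
    z_crit = norm.ppf(1 - alpha / 2)
    shift = d * n**0.5
    # Probability of landing in either rejection region under H1.
    return norm.cdf(shift - z_crit) + norm.cdf(-shift - z_crit)

# Power grows with sample size and with effect size; note that the same
# power can come from a small effect with a large n or a larger effect
# with a small n (power depends on d * sqrt(n)).
print(round(ztest_power(0.2, 50), 3))
print(round(ztest_power(0.2, 200), 3))
print(round(ztest_power(0.5, 32), 3))  # close to the conventional .80 goal
```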
Type I error & Type II error
● Type I error (α) is the probability that an effect will be detected where in fact no
effect exists: the null hypothesis (H0) is rejected when in fact it is true
● Type II error (𝜷) is the probability that no effect will be detected where an effect does in
fact exist: the null hypothesis (H0) is not rejected when in fact it is false
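Both error rates can be estimated by simulation. A sketch with repeated one-sample t-tests (the effect size, sample size and number of replications are arbitrary illustrative choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
reps, n, alpha = 2000, 30, 0.05

# Type I error: H0 is true (population mean really is 0), but we reject.
false_pos = sum(
    stats.ttest_1samp(rng.standard_normal(n), 0).pvalue < alpha
    for _ in range(reps)
)

# Type II error: H0 is false (true mean 0.5, i.e. d = 0.5), but we fail to reject.
false_neg = sum(
    stats.ttest_1samp(rng.standard_normal(n) + 0.5, 0).pvalue >= alpha
    for _ in range(reps)
)

print(f"estimated alpha: {false_pos / reps:.3f}")  # should be near .05
print(f"estimated beta:  {false_neg / reps:.3f}")
print(f"estimated power: {1 - false_neg / reps:.3f}")
```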
Detecting an effect
The probability of detecting an effect (the power) depends on the significance level that is
chosen, i.e. the Type I error rate. It is also tied to beta, since power = 1 - beta. And it
depends on the effect size and the sample size.
On the left: the sampling distribution under the null hypothesis
On the right: the sampling distribution under the alternative hypothesis
The part of the null distribution that falls to the right of the decision criterion represents
the Type I error (α), the level of significance: in that region the test rejects H0, so there
is a chance of a false positive. The part of the alternative distribution that falls to the
left of the criterion represents the Type II error (β): there the test fails to reject H0 even
though the alternative hypothesis is actually true, so there is a chance of a false negative.
If the two distributions did not overlap at all, both errors could be made zero.
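The two error regions in the figure can be computed directly from the two sampling distributions. A sketch (the effect, standard error and one-sided criterion are illustrative assumptions):

```python
from scipy.stats import norm

se = 0.15                        # standard error of the mean (illustrative)
null = norm(loc=0.0, scale=se)   # sampling distribution under H0
alt = norm(loc=0.5, scale=se)    # sampling distribution under H1

# One-sided decision criterion: reject H0 for sample means above this cut-off.
criterion = null.ppf(0.95)

alpha = 1 - null.cdf(criterion)  # area of H0 to the right of the criterion
beta = alt.cdf(criterion)        # area of H1 to the left of the criterion
power = 1 - beta

print(f"criterion = {criterion:.3f}")
print(f"alpha = {alpha:.3f}, beta = {beta:.3f}, power = {power:.3f}")
```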