Lecture Summary ARM: DHM (Development and Mental Health)
Lecture 1 ARM – Mike Rinck
Contents and purpose of the course:
Main aim of the course: familiarise you with the methods used in psychology in general, and in
clinical and developmental psychology in particular, as well as with the topics and contents of those
fields. Second aim of the course: prepare you for the master's thesis.
ARM: General topics
Topics that are important all the time.
Types of scientific research:
1. Observations: useful for finding phenomena, for seeing something that raises scientific
interest.
2. Correlations and Quasi-Experiments: about finding relationships (not about explaining
them), e.g. how large they are.
3. Experiments: Finding causal explanations
4. All of them: useful and used for developing and testing theories of experience and behaviour
The theories that are developed and tested vary greatly on a scale from bad to good. You can
judge how good a theory is by the following 3 principles:
1. Precision 🡪 the more precise, the better; the vaguer, the worse.
2. Parsimony 🡪 a complicated theory that needs many assumptions to predict something is worse
than a theory with few assumptions.
3. Testability and Falsifiability 🡪 it must be possible to test and falsify a theory. A theory that can
predict anything and its opposite (e.g. psychoanalysis) is bad. A theory must be able to make
predictions that can turn out wrong; if it is always "correct", it is not a good scientific
theory.
Science needs to be valid. Good research should be high in the following 4 types of validity:
1. Internal validity: Did the intervention, rather than a confounding variable, cause the results?
- E.g. when you test interventions in clinical settings: is the difference between patients really
due to the difference in intervention?
- How methodologically clean is your study?
2. External validity: How far can the results be generalized?
3. Construct validity: Which aspect of the intervention caused the results?
- When intervention and no confound caused the change, which one was important? (e.g.
is it really the new treatment or just the attention a patient receives (a general factor))
4. Statistical validity: Are the statistical conclusions correct?
- Did you use the right statistical methods to analyse your data and reach your
conclusions?
In real life, however, statistical validity is checked rigorously, construct validity is often handled
poorly, and internal and external validity can be in tension with each other.
- There is the belief that lab studies have high internal but low external validity, and that field
studies have high external but low internal validity. The latter belief is false because internal
validity has priority: it is impossible to have high external validity when internal validity is low.
In real life: in lab research, external validity is sometimes questionable.
Correlational Research
With this type of research you can find out/answer 2 types of research questions:
1. How closely are two variables related 🡪 correlation
2. Predict one variable if you know the other 🡪 regression
How can correlations be used and interpreted?
Correlation: you can see the direction and size of the relationship.
Regression: you can make a prediction.
But you cannot make any statements about why there is a correlation (it does not mean that one
variable causes the other, or the other way around).
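As a small illustration of the two question types, here is a minimal Python sketch; the sleep and mood numbers are made up purely for illustration, and SciPy is assumed to be available:

```python
# Minimal sketch: correlation (direction and size) vs. regression (prediction).
# The data are hypothetical; scipy is assumed to be installed.
import numpy as np
from scipy import stats

hours_of_sleep = np.array([5, 6, 6, 7, 7, 8, 8, 9])   # hypothetical predictor
mood_score = np.array([3, 4, 5, 5, 6, 6, 7, 8])       # hypothetical outcome

# Correlation: how closely are the two variables related?
r, p_value = stats.pearsonr(hours_of_sleep, mood_score)
print(f"r = {r:.2f}, p = {p_value:.3f}")

# Regression: predict one variable from the other (says nothing about why they relate).
result = stats.linregress(hours_of_sleep, mood_score)
predicted_mood = result.intercept + result.slope * 7.5
print(f"predicted mood for 7.5 hours of sleep: {predicted_mood:.1f}")
```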
Keep in mind:
1. The relation between correlation and causality is not symmetric:
● Causality indicates correlation
● Correlation doesn’t indicate causality
2. Temporal order does not prove causality:
● If A is the cause of B, A must occur before B
● If A happens before B, A is not necessarily the cause of B
Even if 2 variables are both correlated and temporally ordered, the earlier one does not have to be
the cause of the later one. Correlation is a necessary, but not a sufficient precondition for causation.
There is only one way to establish a causal relationship underlying a correlation: conduct an experiment.
What defines an experiment:
1. Contains independent variables (manipulated by experimenter)
● What is a good independent variable?
● How many levels should the variable have?
2. Contains dependent variables (measured by experimenter)
● What is a good dependent variable? (e.g. questionnaires on mood when
investigating depression)
● Beware of floor effects and ceiling effects (e.g. consider your sample when you
choose measurement instruments; don't use the Beck Depression Inventory if
you use a non-clinical sample)
3. Usually has control variables (controlled by experimenter)
● These are expected to influence your dependent variable, so there are 2
options: hold them constant or turn them into independent variables
Designs in experiments:
1. Between subject designs = independent groups
● Every subject experiences only one level of the independent variable: via
random assignment to ensure baseline similarity between groups.
2. Within subject designs = repeated measures
● Every subject experiences every level of the independent variable: use
counterbalancing to ensure there are no order effects (if possible; see the sketch after this list).
3. Mixed designs: per independent variable you can choose whether it is between or
within subjects, but never mix between and within for ONE independent variable.
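A minimal sketch of what counterbalancing in a within-subject design could look like, assuming three hypothetical condition names and full counterbalancing over all possible orders:

```python
# Minimal sketch of full counterbalancing for a within-subject design:
# every possible order of the conditions is used equally often across subjects.
# Condition names and the number of subjects are made up for illustration.
from itertools import permutations, cycle

conditions = ["neutral", "sad", "happy"]       # hypothetical levels of the IV
orders = list(permutations(conditions))        # 3! = 6 possible orders

subjects = [f"S{i + 1}" for i in range(12)]    # 12 subjects -> each order used twice
assignment = dict(zip(subjects, cycle(orders)))

for subject, order in assignment.items():
    print(subject, "->", ", ".join(order))
```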
Problems of experimental designs that are particularly critical in clinical psychology:
1. Quasi experiments instead of random assignment
- When people already belong to groups (e.g. gender research)
2. External validity
- Laboratory vs everyday life
- Patients vs analogue populations (talking about depression but using a non-clinical sample)
3. Low sample size 🡪 low statistical power
Effect Size and Statistical Power
Why bother with this?
- How many participants will I probably need in my study?
- Why do so many experiments in psychology yield nonsignificant results?
- Why should I not believe many of the significant results I read about?
There are 2 types of errors; they arise when generalizing from the small experimental sample to the
population:
Error type 1: alpha error / false positive error (5% chance is the limit): you find an effect, but in
reality there is none.
Error type 2: beta error / false negative error: you do not find an effect, but in reality there is one.
Power of a study: the probability that we do not make a beta error, i.e. the probability of finding an
effect when it actually exists.
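One way to make this concrete is a small simulation: draw many samples from two populations that really differ, and count how often the t-test turns out significant. The effect size, group size and number of runs below are arbitrary assumptions:

```python
# Minimal sketch: estimate power by simulation for an independent-samples t-test.
# Effect size, group size and number of simulations are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
d, n_per_group, alpha, n_sims = 0.5, 30, 0.05, 5000

significant = 0
for _ in range(n_sims):
    group1 = rng.normal(loc=0.0, scale=1.0, size=n_per_group)
    group2 = rng.normal(loc=d, scale=1.0, size=n_per_group)   # true effect = d
    _, p = stats.ttest_ind(group1, group2)
    significant += p < alpha

print(f"estimated power: {significant / n_sims:.2f}")   # roughly .47 here
```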
Effect Size: How large is a difference / correlation / relationship?
Statistical Power: What is the probability that this effect will be stat significant in an experiment?
Situations:
• Experiment in preparation: determine the necessary sample size 🡪 really use this knowledge here!
• Experiment completed: determine the power of the experiment 🡪 a more disappointing situation
• Evaluation of published studies 🡪 judge in retrospect how believable the results are.
Effect sizes: Cohen’s d as a simple example
● Situation: comparing 2 group means by a t-test
● Effect size d: (mean1 - mean2) / SD 🡪 shows how much the distributions of the groups
overlap, i.e. how large the difference is.
o 0.2 = small (does not mean it is a rare effect)
o 0.5 = medium
o 0.8 = large
Thus, you often find 2 distributions that greatly overlap 🡪 with a medium effect, the difference
between the 2 means is only half of the SD within each group.
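A minimal sketch of this formula, using the pooled standard deviation of the two groups (the slide only says "(mean1 - mean2) / SD"; pooling is one common choice, and the scores below are hypothetical):

```python
# Minimal sketch of Cohen's d for two independent groups, using the pooled SD.
import numpy as np

def cohens_d(group1, group2):
    g1, g2 = np.asarray(group1, dtype=float), np.asarray(group2, dtype=float)
    n1, n2 = len(g1), len(g2)
    pooled_var = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
    return (g1.mean() - g2.mean()) / np.sqrt(pooled_var)

# Hypothetical scores: this prints about 0.88, a large effect by the benchmarks above.
print(round(cohens_d([6, 7, 8, 7, 9], [6, 7, 6, 5, 8]), 2))
```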
Power is affected by:
1. Effect size: larger effects are easier to find (e.g. the Stroop task)
2. Sample size: effects are easier to find with many participants
3. Alpha level: increasing the alpha level reduces the beta error 🡪 you accept more false
positives in exchange for fewer false negatives.
Before an experiment: determine the sample size, e.g. by using the program G*Power.
! Do not try to research a small effect 🡪 the required sample size is something like 500 per group.
! If you do not know the effect size, run the experiment with a number of participants based on a
medium effect size.
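G*Power is the recommended tool; the same a-priori calculation can be sketched in Python with statsmodels, assuming the conventional alpha = .05 and desired power = .80 for an independent-samples t-test:

```python
# Minimal sketch of an a-priori power analysis (what G*Power is used for here),
# assuming an independent-samples t-test, alpha = .05 and desired power = .80.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):                      # small, medium, large effect
    n = analysis.solve_power(effect_size=d, alpha=0.05, power=0.80,
                             alternative="two-sided")
    print(f"d = {d}: about {round(n)} participants per group")
# A small effect already needs hundreds of participants per group.
```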
Some measures of effect size:
- t-test: d
- ANOVA: f (for two groups, f = d/2)
- Partial eta squared (the percentage of explained variance)
- Correlation r (Pearson's correlation coefficient) 🡪 the values are very intuitive, so everybody can
use them
All these values can be transformed into each other.
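For the two-group case, the usual textbook conversion formulas can be written down directly; a minimal sketch (these are the standard relations, not something specific to this course):

```python
# Minimal sketch of standard two-group conversions between effect size measures.
import math

def d_to_f(d):
    return d / 2                          # Cohen's f for two groups

def d_to_r(d):
    return d / math.sqrt(d ** 2 + 4)      # point-biserial r, equal group sizes

def f_to_eta_squared(f):
    return f ** 2 / (1 + f ** 2)          # (partial) eta squared

d = 0.5                                   # a medium effect
print(d_to_f(d), round(d_to_r(d), 2), round(f_to_eta_squared(d_to_f(d)), 3))
# -> 0.25 0.24 0.059
```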
But then, why are so many small studies with large effects published?
1. Random fluctuation of effects in samples: the observed effect is sometimes smaller than,
sometimes equal to, and sometimes larger than the real effect size 🡪 across studies, the
observed effect sizes form a distribution around the true effect size.
2. Publication bias favouring significant effects.
With small sample sizes, even if you find the real population effect, it will not be significant!
And thus it will probably end up in the file drawer.
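Both points can be illustrated with a small simulation: many underpowered studies of the same true effect are run, only the significant ones are "published", and the published effects come out much larger than the true one. All numbers below are arbitrary assumptions:

```python
# Minimal sketch: publication bias inflates published effect sizes.
# True effect, group size and number of studies are arbitrary assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
true_d, n_per_group, n_studies = 0.3, 20, 2000

published = []
for _ in range(n_studies):
    g1 = rng.normal(0.0, 1.0, n_per_group)
    g2 = rng.normal(true_d, 1.0, n_per_group)
    _, p = stats.ttest_ind(g2, g1)
    observed_d = (g2.mean() - g1.mean()) / np.sqrt((g1.var(ddof=1) + g2.var(ddof=1)) / 2)
    if p < 0.05 and observed_d > 0:        # the file drawer swallows the rest
        published.append(observed_d)

print(f"true d = {true_d}, mean published d = {np.mean(published):.2f}")
# Typically prints a mean published d around 0.7-0.8, despite a true d of 0.3.
```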