Table of Contents
Introduction to Organisational Research Methods
The history and philosophy of ethics in research
Systematic review and meta-analysis
Meta-analyses applied in organisational psychology
Introduction to qualitative research and designing a qualitative research study
Interviews & focus groups, novel methods of data collection and ethnography
Grounded Theory and Interpretative Phenomenological Analysis
Discourse analysis, thematic analysis, and qualitative data analysis overview
Writing qualitative research for publications & critical appraisal of qualitative research
How to rate a paper
ORGANISATIONAL RESEARCH METHODS
KCL
PATRYCJA_KUSACZUK@ICLOUD.COM
Introduction to Organisational Research Methods
• Understand what organisational psychologists do
• Differentiate between descriptive, association and causal research questions
• Differentiate between experimental and observational study designs
• Understand the concept of experimental and control conditions, and identify the advantages and disadvantages of within-
and between-subjects study designs
• Understand how mixed methods can utilise a range of philosophical perspectives
• Describe common types of mixed-methods designs and how they relate to research questions
• Understand what it means to think like a researcher
Is most published research wrong?
2011 article - 'Feeling the Future: Experimental Evidence for Anomalous Retroactive Influences on Cognition and Affect'
• ‘Proof that people can see into the future’
• Reported on 9 experiments
Method:
• In one experiment ppts were shown 2 curtains on a computer screen and were asked to predict which one had an
image behind it. The other just covered a blank wall.
• Once the ppt made their selection the computer randomly positioned an image behind one of the curtains, then the
selected curtain was pulled back to show either the image or the blank wall.
• The images were randomly selected from 1 of 3 categories: neutral, negative or erotic. If the ppt selected the curtain
covering the image, this was considered a hit.
Results:
• Now, with there being 2 curtains and the images positioned randomly behind 1 of them, you’d expect the hit rate to be
about 50%. That is exactly what the researchers found, at least for negative and neutral images.
• However, for erotic images, the hit rate was 53%.
Comments:
• Does that mean we can see into the future? Is that slight deviation significant?
• To assess significance, scientists usually turn to p-values: a statistic that tells you how likely a result at least this
extreme is if the null hypothesis is true. In this case the null hypothesis would just be that people can’t actually see
into the future and the 53% result was due to lucky guesses.
• For this study the p-value was .01 meaning there was just a 1% chance of getting a hit rate of 53% or higher from simple
luck. P-values less than .05 are generally considered significant and worthy of publication.
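To make the p-value reasoning above concrete, here is a minimal sketch of a one-sided binomial test. The trial counts are illustrative only, not the actual numbers from Bem’s experiments; it shows how the same 53% hit rate can be unremarkable in a small sample yet "significant" in a larger one:

```python
from math import comb

def binomial_p_value(hits, trials, p_null=0.5):
    """One-sided p-value: probability of observing `hits` or more
    successes in `trials` trials if the true hit rate is `p_null`."""
    return sum(comb(trials, k) * p_null**k * (1 - p_null)**(trials - k)
               for k in range(hits, trials + 1))

# The same 53% hit rate at three illustrative sample sizes:
for trials in (100, 400, 1000):
    hits = round(0.53 * trials)
    print(trials, hits, round(binomial_p_value(hits, trials), 3))
```

Under these made-up sample sizes the p-value shrinks as n grows, even though the underlying hit rate never changes — the "significance" of 53% depends entirely on how many trials were run.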
Key takeaway:
• Is published research wrong? The p < .05 threshold implies a 5% false-positive rate for any single test. However, in
reality, it could be that up to 1/3rd of research papers published are false positives.
• Researchers in a number of fields have attempted to quantify the problem by replicating prominent past results.
• The Reproducibility Project repeated 100 psychology studies but found that only 35% had a statistically significant result
the second time, and the strength of the measured relationships was, on average, half that of the original studies.
• An attempted verification of 52 studies considered landmarks in the basic science of cancer only managed to reproduce
6, even when working closely with the original studies’ authors.
2015 study - showing that eating a bar of chocolate every day can help you lose weight faster
Method
Ppts were randomly allocated to 1 of 3 treatment groups: (1) low carb diet (2) low carb diet + dark chocolate bar, (3) control
group maintaining their regular eating habits
Results:
• 3 weeks - the control group neither lost nor gained weight, but groups 1 + 2 lost an average of 5 lbs per person
• The group that ate chocolate lost weight 10% faster than the no chocolate eaters - the finding was statistically
significant with a p-value less than .05
Comments:
• This news circulated around the world like wildfire
• But the whole thing was faked - researchers performed the experiment exactly as they described, but they intentionally
designed it to increase the likelihood of a false positive: the sample size was incredibly small, just 5 people per group, and
for each person 18 different measurements were tracked, including: weight, cholesterol, sodium, blood protein levels,
sleep quality, wellbeing
• If weight loss didn’t show a significant difference there were plenty of other factors that might have.
• So the headline could have been “chocolate lowers cholesterol” or “increases sleep quality”
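The arithmetic behind "plenty of other factors" is simple. A rough sketch — it assumes the 18 outcomes are statistically independent, which real physiological measures are not, so treat the exact figure as an illustration rather than the study’s own calculation:

```python
# Chance of at least one false positive when k outcomes are each
# tested at alpha = .05 (independence assumed for simplicity):
alpha = 0.05
for k in (1, 5, 18):
    family_wise = 1 - (1 - alpha) ** k
    print(f"{k:2d} outcomes -> {family_wise:.0%} chance of a spurious hit")
```

With 18 tracked outcomes, the chance of at least one spurious "significant" result is roughly 60% under these assumptions — so some headline was almost guaranteed.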
Key takeaway:
• A p-value is only really valid for a single measure - once you’re comparing a whole slew of variables, the probability that
at least one of them gives you a false positive goes way up, and exploiting this is known as “p-hacking”
• Researchers can make a lot of decisions about their analysis that can decrease the p-value, for example, let’s say you
analyse your data and you find it nearly reaches statistical significance, so you decide to collect just a few more data
points to be sure. Then if the p-value drops below .05 you stop collecting data, confident that these additional data
points could only have made the result more significant if there were really a true relationship there.
• But numerical simulations show that relationships can cross the significance threshold when more data points are added,
even though a much larger sample would show that there really is no relationship. There are a great number of ways to
increase the odds of significant results, like having 2 dependent variables, adding more observations, controlling for gender, or
dropping 1 of 3 conditions. Combining these strategies increases the likelihood of a false positive to over 60%,
and that is still using p < .05
• Data doesn’t speak for itself - it must be interpreted
• Scientists have huge incentives to publish papers as their careers depend on it. Journals are much more likely to publish
results that are significant, novel or that are unexpected.
Introduction to Research Design - Planning and conducting a research project: main stages
STEP 1: FORMULATE A RESEARCH QUESTION
What is a research question?
• A research question must be measurable and have a measurable outcome.
• The research question needs to be refined until you can develop a study design that addresses it and is feasible to carry
out.
• Decide what you need to know about which topic, and why that is relevant.
Descriptive, causal and association research questions
Descriptive: Involves 1 group and 1 variable
• Frequency measurements can be expressed as a count of units, or in the form of percentages and proportions
• Understanding the frequency of an event in more detail helps answer when an event is more likely to occur
Causal/association: how 2 or more variables are linked
• Understand the relationship between factors: trends, causal relationships, interactions, correlations
• Examine differences between 2 groups
• Measuring a behaviour can become complex, as it increases the number of things that need to be measured, e.g.
frequency, quantity, time of day etc.
STEP 2: SELECT A STUDY DESIGN
Experimental studies:
• Used to investigate interventions
• E.g. do patients feel better after an intervention (treatment) in comparison to a control group?
Observational studies:
• No interventions take place
• Used to enable inferences about diseases through natural observation of groups defined by their exposure or disease
status
How to decide which design to use:
There are a number of things to consider, such as: the timing of the exposure and outcome; how quickly the results are needed;
the type of measurements applied (for example genetic, neuroimaging, questionnaires); the sample chosen (clinical, or a
population sample); the setting and subject matter (lab-based or field-based, and whether live subjects are used); and the
type of analysis required (quantitative, qualitative, or mixed methods).
Variables in experiments
Dependent variables (DV)
• The dependent variable is what we measure in order to understand the effect of the independent variable
• It may be referred to as an ‘outcome’ or ‘criterion’ variable
Independent variable (IV)
• The independent variable is what we manipulate or vary systematically
• It is also measured
• It can be referred to as a ‘predictor’ variable
Variables in experiments: implications for standardisation of procedures
• Stating what you are measuring and how you are measuring it permits
o Later replication by other researchers
o Scientific observations to be publicly reproducible and reliable
• Keep all other factors constant except the variable being manipulated
o For example, instructions, time of day, lighting etc. must be the same between conditions
• Eliminate variations in the behaviour of the experimenter
• If there is a difference in the size of the effect, and all other variables and factors have been held constant, then we can
conclude that the difference is due to the independent variable
Deciding the study design: between vs within-subjects designs
Between-subjects involve different participants taking part in different conditions
• For example, let's look at a study understanding the effect of specific types of exercise on blood pressure.
• Participants in group A will do one type of exercise and have their blood pressure recorded before and after.
• Participants in group B do another type of exercise and have their blood pressure recorded before and after.
• The advantage of this type of design is that you can compare very different groups such as males and females, different
ethnic groups or people who have or have not been exposed to certain environmental factors.
• However, there is a risk that such groups differ in ways that confound the results.
Within-group designs involve the same participants taking part in two or more conditions of the same experiment.
• For example, testing the effect of exercise on blood pressure.
• Participants could do one type of exercise and have their blood pressure recorded before and after.
• They could later do another type of exercise, and have their blood pressure recorded before and after.
• The researchers could address the question of which type of exercise affects blood pressure the most.
• This type of design is useful if you want to monitor the effect of something, for example, a treatment on individuals, as
it lowers the possibility of individual differences affecting the results.
• The downside of this approach is that there can be practice effects, i.e. participants may be affected by familiarity with
the task when they are assessed more than once.
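One way to see why within-subjects designs lower the possibility of individual differences affecting the results is a small simulation (all numbers hypothetical): each simulated participant has a stable baseline blood pressure, and exercise shifts it by a fixed amount plus measurement noise.

```python
import random
from statistics import stdev

rng = random.Random(7)

# Hypothetical participants: baselines vary a lot between people,
# while the exercise effect itself is a constant -3 units plus noise.
baseline = [rng.gauss(120, 10) for _ in range(200)]
after = [b - 3 + rng.gauss(0, 2) for b in baseline]

# Between-subjects view: the 'after' scores still carry all the
# person-to-person baseline variability.
between_spread = stdev(after)

# Within-subjects view: each person is their own control, so the
# difference scores cancel the baselines out.
within_spread = stdev(a - b for a, b in zip(after, baseline))

print(round(between_spread, 1), round(within_spread, 1))
```

The simulated effect (-3 units) is identical in both views; only the noise it must be detected against differs, which is why the same participants serving in every condition makes small effects easier to find.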
Randomised controlled trials (RCTs)
These are often seen as the gold standard for clinical trials of medical
interventions. Strictly speaking, the term randomised controlled trial, or RCT,
should only be used to describe trials in which participants are randomly
allocated to conditions, one of which is a control group. The
control group may receive a placebo or treatment as usual.
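The "randomised" part of an RCT is simply random allocation to conditions. A minimal sketch — participant names and group labels here are made up for illustration:

```python
import random

def randomise(participants, groups=("treatment", "control"), seed=42):
    """Shuffle participants, then deal them out round-robin so the
    conditions end up (near-)equal in size."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

allocation = randomise([f"P{i}" for i in range(1, 11)])
print({group: len(members) for group, members in allocation.items()})
```

Because allocation depends only on the shuffle, known and unknown confounders are, on average, balanced across groups — which is what licenses the causal inference an RCT aims for.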