Advanced Research Methods
Quantitative methods
Lecture 1: Causal inference – Drawing the lines between causes and effects
Causal inference: drawing conclusions about causation/relationships.
Epidemiology is not the science of epidemics. It is about who is ill and why they are ill. It is a
methodological science: not about what makes you ill, but about how we can study what makes you
ill. It is the study of the distribution of health-related states and events in the population. It does not
represent a body of knowledge; it is a philosophy and methodology that can be applied to a very
broad range of health problems. The art of epidemiology is knowing when and how to apply the
various strategies creatively to answer specific health questions. Epidemiologists who engage in
thinking about the nature of causality are doing philosophy, whether they call it that or not.
'Improve' is a causal term and implies a causal effect: something leads to something else, A leads to B.
Problems:
- Small sample
o Is this always a problem?
- Study performed or financed by commercial company
o Is this always fatal?
- No control group
o Essential omission
What would have happened to these women if they had not used this product?
If your skin is at its absolute worst, it is likely to look better afterwards regardless of the product.
o What would have happened without treatment?
o Potential regression towards the mean
Causation: we know what happened to this woman, but we also need to know what would’ve
happened. Then we can predict what would happen to you if you use this product. You always need
to ask yourself what would’ve happened. In an individual, a treatment has a causal effect if the
outcome under treatment 1 would be different from the outcome under treatment 2.
Important questions:
- What would’ve happened?
- What will happen?
Potential outcomes:
Causal effect for individual i: Y_i^{a=1} ≠ Y_i^{a=0}
Y = outcome
a = treatment
1 = yes (received treatment/experienced improvement)
0 = no
i = individual
≠ = does not equal
Counterfactual outcome: potential outcome that is not observed because the subject did not
experience the treatment (counter the fact). Counterfactual outcomes are the central problem of
causal inference. We observe only half of what we want to observe; in essence, causal inference is a missing data
problem. The potential outcome Y_i^{a=1} is factual for some subjects and counterfactual for others.
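A tiny sketch in Python (hypothetical subjects and outcomes, only to make the missing-data point visible): for each subject we can fill in only the potential outcome that matches the treatment actually received, so the individual causal effect is missing for everyone.

```python
import pandas as pd
import numpy as np

# Hypothetical subjects: A = treatment received, Y = observed outcome.
data = pd.DataFrame({
    "subject": [1, 2, 3, 4],
    "A":       [1, 1, 0, 0],
    "Y":       [1, 0, 0, 1],
})

# The observed outcome gives Y^{a=1} for the treated and Y^{a=0} for the
# untreated; the counterfactual column is missing by construction.
data["Y_a1"] = np.where(data["A"] == 1, data["Y"], np.nan)  # factual only if A = 1
data["Y_a0"] = np.where(data["A"] == 0, data["Y"], np.nan)  # factual only if A = 0

print(data)
# The individual causal effect Y_a1 - Y_a0 is NaN for every subject:
print((data["Y_a1"] - data["Y_a0"]).tolist())
```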
The individual causal effect cannot be observed, except under extremely strong (and generally
unreasonable) assumptions. We can, however, observe average causal effects, which can be
determined under certain conditions. Average treatment effect: the average of the individual effects in a
population.
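A minimal simulation (the numbers and the effect size of 2 are assumptions, not from the lecture): because we generate both potential outcomes ourselves, the average treatment effect is simply the mean of the individual effects.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulate both potential outcomes for every individual (impossible in real data).
y0 = rng.normal(loc=50, scale=10, size=n)      # outcome without treatment
y1 = y0 + rng.normal(loc=2, scale=1, size=n)   # outcome with treatment

individual_effects = y1 - y0
ate = individual_effects.mean()
print(f"Average treatment effect: {ate:.2f}")  # close to the true value of 2
```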
EXAMPLE: Ask people if they are carrying a cigarette lighter. Come back after 20 years. Who is
healthier? Causal question: What is the effect of carrying a cigarette lighter on health?
Identifiability conditions: observing the counterfactual. Based on population averages, causal effects
can be estimated if three identifiability conditions hold:
Positivity
Observe ‘what would’ve happened if…’ Positive probability of being assigned to each of the
treatment levels. Units are assigned to all relevant treatments (people with and people
without a lighter). There has to be a control group.
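As a rough sketch of how positivity can be checked in practice (hypothetical column names and data): cross-tabulate the exposure against the covariates used for adjustment and verify that every stratum contains both exposed and unexposed units.

```python
import pandas as pd

# Hypothetical observational data set.
df = pd.DataFrame({
    "lighter": [1, 1, 0, 0, 1, 0, 1, 0],   # exposure: carries a lighter
    "smoker":  [1, 1, 1, 0, 0, 0, 1, 0],   # covariate we want to adjust for
})

# Cross-tabulate exposure within each covariate stratum.
counts = pd.crosstab(df["smoker"], df["lighter"])
print(counts)

# Positivity requires a positive count for every exposure level in every stratum.
violations = (counts == 0).any(axis=1)
print("Strata violating positivity:", counts.index[violations].tolist())
```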
Consistency
Observe ‘what would’ve happened if…’ Define the ‘if’: the treatments must be clearly defined. For example,
water is dangerous if you are out at sea, but two glasses of water a day are not dangerous.
You can only have a causal effect for a specific situation. There is no causation without
manipulation.
Is it consistent?
- Broccoli? No
- Effect of obesity on health? No
- Effect of obesity on job prospects? Yes
Exchangeability
Observe ‘what would’ve happened if…’ Treatment groups are exchangeable: it does not
matter who gets treatment A and who gets treatment B. Notation: Y_i^a ⊥ A. Potential
outcomes are independent of the treatment that was actually received. Are people with and
without lighters exchangeable (similar in other respects)? No, they are not. It
may be necessary to take other factors into account (adjustment). Only if there is
exchangeability can the remaining association be ascribed to a treatment effect.
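A small simulation of the lighter example (the data-generating assumptions are mine) shows what Y_i^a ⊥ A means: when assignment depends on a common cause such as smoking, the potential outcomes of lighter carriers differ from those of non-carriers; under random assignment they do not.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

smoker = rng.binomial(1, 0.3, size=n)                 # common cause
# The lighter itself does nothing, so Y^{a=1} = Y^{a=0} for everyone.
y_pot = 70 - 10 * smoker + rng.normal(0, 5, size=n)   # potential health outcome

# Confounded assignment: smokers are far more likely to carry a lighter.
a_obs = rng.binomial(1, 0.1 + 0.8 * smoker)
# Randomised assignment: a coin flip, independent of smoking.
a_rct = rng.binomial(1, 0.5, size=n)

for label, a in [("observational", a_obs), ("randomised", a_rct)]:
    # Exchangeability: the potential outcome should not depend on who was treated.
    print(label, round(y_pot[a == 1].mean(), 2), round(y_pot[a == 0].mean(), 2))
# Under confounded assignment the two means differ; under randomisation they agree.
```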
If the conditions are met, then association of exposure and outcome is unbiased estimate of causal
effect. If we want to estimate causal effects, all the conditions must hold.
In a randomised controlled trial (RCT), the three conditions hold automatically: because patients are
randomly assigned to the treatment groups, the groups are exchangeable and either group could have
received the other treatment. Often you cannot (or should not) do an RCT: generalisability (external
validity) is limited by the treatment protocol and patient selection, and sometimes there are practical or ethical
considerations. In such cases observational studies are beneficial; they are much more
difficult, but very necessary. An observational (non-randomised) study has real-world outcomes
and more data is available. However, its internal validity is threatened by a lack of
exchangeability. Positivity and consistency also need explicit attention.
Association does not equal causation. In many cases, we are interested in causal effects, not just
associations. Causal conclusions can be drawn if identifiability conditions are true. To see what
assumptions are required, we use: theory/subject knowledge and causal structure. We also design
the analysis accordingly.
Statistical adjustments are mostly used to improve exchangeability. Adjusting for more factors is not always
better; you have to make a choice. There are a few traditional selection strategies. Correlation
matrix: select variables with a significant association with the outcome. Stepwise backward selection: start
with all variables in the regression model, remove the variable that is least statistically significant and
repeat. The correlation matrix is a very bad method, and so is stepwise backward selection.
The least bad method is adjusting for confounders: think about what the confounders would be and
adjust for them. Confounders are associated with the exposure, associated with the
outcome given the exposure, and not on the causal pathway between exposure and outcome.
Adjusting for everything you have can worsen exchangeability and increase bias. The problem with all
of these strategies is that they rely on the observed data rather than on a priori knowledge of causal
structures: selection happens after data collection, important variables may be missed, and adjustment might increase
bias rather than reduce it. The solution for this is DAGs.
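A sketch of confounder adjustment by stratification, continuing the simulated lighter example (assumed data-generating process in which the lighter has no effect): the crude comparison is biased by smoking, while the smoking-stratified comparison recovers the true null effect.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 200_000

smoker = rng.binomial(1, 0.3, size=n)
lighter = rng.binomial(1, 0.1 + 0.8 * smoker)          # confounded exposure
health = 70 - 10 * smoker + rng.normal(0, 5, size=n)   # lighter has no effect

df = pd.DataFrame({"smoker": smoker, "lighter": lighter, "health": health})

# Crude (unadjusted) comparison: biased by confounding.
crude = df.groupby("lighter")["health"].mean()
print("crude difference:    ", round(crude.loc[1] - crude.loc[0], 2))

# Stratified (adjusted) comparison: estimate within smoking strata, then average.
strata = df.groupby(["smoker", "lighter"])["health"].mean().unstack("lighter")
stratum_diff = strata[1] - strata[0]
weights = df["smoker"].value_counts(normalize=True)
print("adjusted difference: ", round((stratum_diff * weights).sum(), 2))
```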
DAG: Directed Acyclic Graph
What is the effect of carrying a cigarette lighter on lung problems? People who carry a lighter are more likely to have lung problems. Smoking affects the likelihood of carrying a lighter: there is a causal effect of smoking on carrying a cigarette lighter.
Everything in a DAG that is connected by arrows, is also associated in the data. An arrow represents a
causal effect.
Connections transmit association. Smoking has an effect on both, that is why there is an association
between lighters and lung problems. We don’t want them to be associated if they are not causal. You
need to block the connection by adjusting for smoking. Blocking a connection changes the
association. Blocking all connections removes the association.
A graph helps us to decide what to adjust for. It is a graphical representation of underlying causal
structures. DAGs encode a priori causal knowledge. Simple rules can be used to determine what
variables to adjust for.
- Directed: each connection is an arrow
- Acyclic: a path of arrows does not come back to its origin (a variable cannot cause itself)
- Each arrow represents a possible causal effect
- No arrow means certainly no causal effect
- Paths: a route between exposure and outcome. It does not have to follow the direction of the arrows. In the example DAG (exposure A, outcome Y, and variables V, L and W), there are 4 paths:
o A → Y
o A → V → Y
o A ← L → Y
o A → W ← Y
- Causal paths and backdoor paths: a causal path follows the direction of the arrows and a backdoor path does not follow the direction of the arrows.
o Causal paths:
A → Y
A → V → Y
o Backdoor paths:
A ← L → Y
A → W ← Y
- Open and closed paths: all paths are open unless two arrows collide (crash into one another) somewhere along the path
o How many open paths? Three:
A → Y
A → V → Y
A ← L → Y
o W is a collider, so the path A → W ← Y is closed
- Blocking open paths
o How? By adjusting.
o The association of A and Y is the combination of all open paths between them. An open path is blocked when we adjust for a variable along the path.
o When you adjust for a collider, it opens (unblocks) a path. It is wrong to adjust for a collider, because this would open the backdoor and generate an association.
o Why would you want to adjust? Because you don't want any confounding in your paths.
o You use a confounder to block the path: a confounder is a variable that can be used to remove confounding. (A sketch of these path rules in code follows after this list.)
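The path rules above can be applied mechanically. Below is a sketch in plain Python (the encoding of the example DAG with A, Y, V, L and W is my reconstruction from the notes): enumerate every path between exposure and outcome, call a path causal if all arrows point from A towards Y, and call it open if it contains no collider.

```python
# Example DAG (assumed from the notes):
# A -> Y, A -> V -> Y, L -> A, L -> Y, A -> W, Y -> W.
edges = {("A", "Y"), ("A", "V"), ("V", "Y"),
         ("L", "A"), ("L", "Y"),
         ("A", "W"), ("Y", "W")}

nodes = {n for e in edges for n in e}
neighbours = {n: {b for a, b in edges if a == n} | {a for a, b in edges if b == n}
              for n in nodes}

def paths(start, goal, visited=()):
    """All paths from start to goal, ignoring arrow direction."""
    if start == goal:
        yield (start,)
        return
    for nxt in neighbours[start]:
        if nxt not in visited:
            for rest in paths(nxt, goal, visited + (start,)):
                yield (start,) + rest

def is_causal(path):
    """Causal path: every step follows the direction of an arrow."""
    return all((a, b) in edges for a, b in zip(path, path[1:]))

def is_open(path):
    """Open path: no collider (no node with both arrows pointing into it)."""
    for prev, mid, nxt in zip(path, path[1:], path[2:]):
        if (prev, mid) in edges and (nxt, mid) in edges:
            return False
    return True

for p in paths("A", "Y"):
    kind = "causal" if is_causal(p) else "backdoor"
    state = "open" if is_open(p) else "closed (collider)"
    print(" - ".join(p), f"[{kind}, {state}]")
```

Running this reproduces the classification in the notes: A-Y and A-V-Y are causal and open, A-L-Y is a backdoor path that is open, and A-W-Y is closed because W is a collider.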
Confounding: bias caused by a common cause of exposure and outcome.
Collider bias = selection bias.
Not adjusting for a confounder → confounder bias.
Adjusting for a collider → selection bias.
An open path is blocked when we adjust for any variable along the path.
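A quick simulation (assumed data-generating process) of why adjusting for a collider produces selection bias: A and Y are generated independently, W is their common effect, and restricting the analysis to one level of W creates an association that is not there.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500_000

a = rng.normal(size=n)                 # exposure
y = rng.normal(size=n)                 # outcome, independent of a
w = a + y + rng.normal(size=n)         # collider: common effect of a and y

print("overall correlation:     ", round(np.corrcoef(a, y)[0, 1], 3))  # ~0
selected = w > 1                       # "adjusting" by selecting on the collider
print("correlation given w > 1: ",
      round(np.corrcoef(a[selected], y[selected])[0, 1], 3))           # clearly negative
```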
Remember the traditional definition of confounder:
- Associated with the exposure
- Associated with the outcome, given the exposure
- Not in causal pathway between exposure and outcome
Structural definition of confounding (based on DAG structure): bias created by a common cause of
exposure and outcome. Lighter and Health are associated because of Smoking. Structural definition
of confounder: a variable that can be used to remove confounding. When the two definitions disagree
(for example, at a collider), the structural definition is the right one.