Lecture notes

Advanced Research Methods - Colleges

A summary of all lectures of Advanced Research Methods. The first course of HCM (2021/2022).

  • 8 March 2022
  • 32 pages
  • 2021/2022
  • Lecture notes
  • All lectures

Seller: healthcarestudent
ADVANCED RESEARCH METHODS
1.2 CAUSAL INFERENCE
Assessing causal inference studies:
- What was the question?
o What was the underlying question?
- What was actually estimated?
o Is the estimate biased or unbiased?
o Is this an estimate of a full or partial effect?
- Is the estimate really an answer to the question?
- How was the analysis designed?
- Were statistical methods applied correctly?
- What is the estimate? Is it big, small, good, bad, etc.?
o How uncertain is the estimate?
- What do the researchers conclude? Is that conclusion justified?
- Is this strong or weak evidence for something?
- How does it compare with what we (thought we) knew?

The word 'improves' implies a causal effect. 'A leads to B' is causal language.

A small sample, a study performed or financed by a commercial company, or the absence of a control group are recurring challenges in studies. They can lead to essential omissions or to regression towards the mean.

We cannot draw any meaningful causal conclusions from the True Minerals L’Oreal example.

For an individual, a treatment has a causal effect if the outcome under treatment 1 would be different from the outcome under treatment 2. Causal questions look both backward and forward: what would have happened to people if they had not undergone a treatment, and what will happen to people who start using it.

The average treatment effect is the average of the individual causal effects in a population.




Not all potential outcomes are observed. The counterfactual outcome is the potential outcome that is not observed because the subject did not experience that treatment ('counter to the fact'). A potential outcome is factual for some subjects and counterfactual for others. Individual causal effects therefore cannot be observed in this example, and the average causal effect cannot be inferred from the individual effects: the counterfactual data are missing.
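A toy potential-outcomes table (hypothetical numbers, not from the lecture) makes this concrete: each subject has two potential outcomes, the individual effect is their difference, and the average treatment effect is the mean of those differences.

```python
# Toy potential-outcomes table (hypothetical numbers, for illustration only).
# Y1 = outcome under treatment, Y0 = outcome without treatment.
# In real data only one of the two is observed per subject; the other one
# is the counterfactual.

subjects = [
    {"Y1": 1, "Y0": 0},  # treatment helps this subject
    {"Y1": 1, "Y0": 1},  # no individual effect
    {"Y1": 0, "Y0": 0},  # no individual effect
    {"Y1": 0, "Y0": 1},  # treatment harms this subject
]

# Individual causal effect: difference between the two potential outcomes.
individual_effects = [s["Y1"] - s["Y0"] for s in subjects]

# Average treatment effect: the average of the individual effects.
ate = sum(individual_effects) / len(individual_effects)
print(ate)  # 0.0 in this toy population: benefit and harm cancel out
```

Note that the ATE can be zero even when individual effects are not, which is exactly why it is a population-level quantity.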

There is a solution to this problem: three identifiability conditions need to hold. The conditions concern the question of what would have happened if the counterfactual had been observed.
- Positivity
- Consistency
- Exchangeability
If the conditions are met, the association of exposure and outcome is an unbiased estimate of the causal effect.

Example about Lijnbaan in Rotterdam
You ask all passers-by: 'Are you carrying a cigarette lighter?' Twenty years later you come back and check who is healthier.
Causal question: what is the effect of carrying a cigarette lighter on health?

Positivity
This condition is about the sample that we collect and how it was composed. There has to be a positive probability for everyone in the sample to be assigned to each treatment level. The treatment levels in the example are carrying a cigarette lighter or not carrying one. Everybody who carries a lighter must also have been able not to carry one, and vice versa. This is not the case for the True Mineral Match group: its users could not have not used the product.

We don't only need people with and without cigarette lighters; we also want to adjust for smoking status. To achieve positivity we then need every treatment level to occur within every smoking stratum: smokers and non-smokers both with and without lighters.
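A positivity check can be sketched as a cross-tabulation of a made-up sample (the names and counts are illustrative assumptions): positivity within strata means every smoking-by-lighter cell is non-empty.

```python
from collections import Counter

# Hypothetical sample: (smoking stratum, treatment level) per person.
sample = [
    ("smoker", "lighter"), ("smoker", "no lighter"),
    ("non-smoker", "lighter"), ("non-smoker", "no lighter"),
    ("smoker", "lighter"), ("non-smoker", "no lighter"),
]

counts = Counter(sample)  # cell counts of the stratum x treatment table

strata = {s for s, _ in sample}
treatments = {t for _, t in sample}

# Positivity (within strata of the adjustment variable): every treatment
# level must occur in every stratum, i.e. no empty cell.
positivity_holds = all(counts[(s, t)] > 0 for s in strata for t in treatments)
print(positivity_holds)  # True for this sample
```

If any cell were empty (say, no non-smokers with a lighter), the effect in that stratum would be inestimable without extrapolation.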




Consistency
This condition is about defining the treatments or exposures precisely. Hernán: does water kill? The question is what you mean by 'water'. You need to be specific when you define exposures.

Exchangeability
The treatment groups have to be exchangeable. It does not matter who gets treatment A and who
gets treatment B. Potential outcomes are independent of the treatment that was actually received.
In the Lijnbaan example they are not exchangeable: the people carrying cigarette lighters are probably smokers, so the association cannot be ascribed to a treatment effect.

Stratification is dividing a sample up into different groups according to a variable. You can for
example adjust for smoking in the Lijnbaan example. When you have adjusted for smoking status, the
groups become exchangeable. Now we have an unbiased estimate of the average causal effect.
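The adjustment above can be sketched with made-up counts; the numbers, the risk-difference scale, and the standardization by stratum size are illustrative assumptions, not the lecture's data:

```python
# Hypothetical Lijnbaan-style data (invented counts): rows are
# (smoking stratum, carries lighter, unhealthy?, n). Smoking causes both
# lighter-carrying and poor health, so the crude association is confounded.
data = [
    ("smoker",     True,  True,  40), ("smoker",     True,  False, 60),
    ("smoker",     False, True,  20), ("smoker",     False, False, 30),
    ("non-smoker", True,  True,   5), ("non-smoker", True,  False, 45),
    ("non-smoker", False, True,  40), ("non-smoker", False, False, 360),
]

def risk(rows, lighter):
    """Proportion unhealthy among people with the given lighter status."""
    bad = sum(n for _, l, y, n in rows if l == lighter and y)
    tot = sum(n for _, l, y, n in rows if l == lighter)
    return bad / tot

# Crude (unadjusted) risk difference: mixes the smoking effect in.
crude_rd = risk(data, True) - risk(data, False)

# Stratified: risk difference within each smoking stratum, then averaged
# with weights proportional to stratum size (standardization).
total_n = sum(n for *_, n in data)
adjusted_rd = 0.0
for stratum in ("smoker", "non-smoker"):
    rows = [r for r in data if r[0] == stratum]
    w = sum(n for *_, n in rows) / total_n
    adjusted_rd += w * (risk(rows, True) - risk(rows, False))

print(round(crude_rd, 3), round(adjusted_rd, 3))
```

In this constructed example the crude risk difference is positive, while within each smoking stratum lighters make no difference, so the adjusted estimate is zero: the whole crude association came from confounding by smoking.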

In RCTs the identifiability conditions are already met by design. Randomizing helps you achieve exchangeability and positivity; defining the interventions leads to consistency.
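A minimal simulation (hypothetical population and parameters) of why randomization yields exchangeability: treatment is assigned by coin flip, independent of everything else, so covariates such as smoking end up balanced across arms.

```python
import random

random.seed(0)

# Hypothetical population: 1000 people, about 30% smokers.
population = [{"smoker": random.random() < 0.3} for _ in range(1000)]

# Randomization: treatment assigned by coin flip, independent of smoking.
for person in population:
    person["treated"] = random.random() < 0.5

def smoker_rate(group):
    """Proportion of smokers in a group."""
    return sum(p["smoker"] for p in group) / len(group)

treated = [p for p in population if p["treated"]]
untreated = [p for p in population if not p["treated"]]

# Because assignment ignores smoking, the smoker rates are similar in
# both arms: the groups are (approximately) exchangeable.
print(round(smoker_rate(treated), 2), round(smoker_rate(untreated), 2))
```

Positivity also holds by construction: everyone had probability 0.5 of ending up in either arm.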

RCT
- Limited generalizability (external validity) due to treatment protocol and patient selection.
- Practical, ethical considerations

Observational (non-randomised study)
- Real world outcomes
- Availability of data
- Internal validity threatened by lack of exchangeability
- Positivity and consistency need explicit attention

1.3 DIRECTED ACYCLIC GRAPHS I




Association does not equal causation, but that answer might be too easy. When making causal claims, you have to be transparent about your assumptions. This is where a DAG comes in.

Association is a statistical relationship. Causation is the difference between potential outcomes. This
association equals this difference if identifiability conditions hold. To understand this, we need
theory/subject knowledge and causal structure.

Adjustment is used to improve exchangeability. It is possible when there is a small number of factors.
Complete and correct adjustments lead to exchangeability. However, how do you decide what to
adjust for? Traditional selection strategies should not be used. These strategies rely on the observed
data rather than any theory/subject knowledge. It is possible that important variables are missing or
that certain choices were made before data was collected. These strategies may increase bias rather
than reduce it. Step-wise methods lead to underestimation of statistical uncertainty.

DAGs
Arrows represent causal effects. Blocking a connection changes the association. Blocking all
connections removes the association. A DAG is a graphical representation of underlying causal
structures. It is based on what you know or what you think you know. You use a priori causal
knowledge.

Each arrow represents a possible causal effect. No arrow means certainly no causal effect. DAGs are
acyclic because a path of arrows does not come back to its origin.

A path is a route between exposure X and outcome Y; it does not have to follow the direction of the arrows. A causal path follows the direction of the arrows; a backdoor path does not. All paths are open unless arrows collide somewhere along the path. Open paths transmit association: the association of X and Y is the combination of all open paths between them. An open path is blocked when we adjust for a variable along the path; that part of the association is then removed.

Confounding is bias caused by a common cause of exposure and outcome. A confounder is a variable that can be used to remove confounding. By adjusting for L you remove the confounding.

1.4 DIRECTED ACYCLIC GRAPHS II
A collider blocks a path. Adjusting for a collider opens the path, which can create an open backdoor path. Collider bias is difficult to understand intuitively, but it is easy to see in a DAG, and it is possible to introduce it accidentally. An example of collider bias: do infected health workers get worse COVID than other people? The data suggest health care workers get less severe COVID, but this is not a real effect: health workers are simply tested more often. Almost all infections in health workers are detected, whereas in the general population only the severe infections are tested. The effect estimate is biased downward.
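The health-worker example can be sketched as a simulation; all parameters (share of health workers, severity rate, testing probabilities) are invented for illustration. By construction severity is independent of being a health worker, yet conditioning on being tested, the collider, makes health workers look milder.

```python
import random

random.seed(1)

# Simulated infections (hypothetical parameters). By construction the
# true effect is null: severity does not depend on being a health worker.
people = []
for _ in range(20000):
    hw = random.random() < 0.1        # 10% are health workers
    severe = random.random() < 0.2    # 20% severe, regardless of job
    # Testing is the collider: caused by being a health worker (routine
    # screening) and by severity (others are mostly tested when sick).
    tested = hw or (severe and random.random() < 0.8) or random.random() < 0.05
    people.append((hw, severe, tested))

def severe_rate(rows):
    """Proportion of severe cases in a set of infections."""
    return sum(s for _, s, _ in rows) / len(rows)

# Conditioning on 'tested' (looking only at detected infections) opens
# the path hw -> tested <- severe and biases the comparison downward.
detected = [p for p in people if p[2]]
rate_hw = severe_rate([p for p in detected if p[0]])
rate_others = severe_rate([p for p in detected if not p[0]])

print(round(rate_hw, 2), round(rate_others, 2))  # health workers look milder
```

Comparing all infections instead of only detected ones would show the two groups are equally severe, confirming that the difference is pure collider bias.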

M-bias occurs when the DAG is shaped like an M. It would be wrong to adjust for the variable in the middle.

The traditional definition of a confounder is a variable that is associated with the exposure and, given the exposure, associated with the outcome, and that does not lie on a causal pathway between exposure and outcome. Under this definition a collider could be called a confounder, which would lead you to condition on it; that is of course not correct. Therefore the structural definition based on DAGs is the right one.

Selection bias is collider bias. Sometimes the terms confounding and selection bias are used with different meanings, so you should always be critical and think about what the author actually means.
