Economic Assessment of Healthcare
Lecture 1: Introduction to HTA & Research designs
Health technology assessment is a scientific study of a health technology, assessing
its health impacts and other aspects, with the aim to inform policy decisions.
Health effects: efficacy and effectiveness
1. Efficacy: Can it work?
2. Effectiveness: Does it work?
Aspects to be investigated in an HTA: health effects (efficacy & effectiveness),
technical properties, safety, economic impacts, and social, ethical, legal, or political
impacts.
Timing of the HTA: Safety -> efficacy -> effectiveness -> economic impacts -> social,
ethical, legal, political impacts.
In the Netherlands ‘Zorginstituut Nederland’ is the HTA-agency, in the UK it is the
National Institute for Health and Care Excellence (NICE).
Efficacy research is often conducted against a placebo, to see whether the drug
has more effect than a pill with no active substance. A placebo-controlled study cannot be
effectiveness research, as effectiveness research aims to portray the real-life
situation, and in real life a doctor does not prescribe a placebo.
An observational study is easy to conduct and gives a good reflection of usual practice,
but sometimes very large populations are required, and confounding by indication is
possible (selection bias).
An RCT can be expensive and time-consuming to conduct, and its selective
patient population reduces generalizability.
Reliability: the degree of stability exhibited when a measurement is repeated under
identical conditions -> it refers to the measuring procedure rather than to the
attribute being measured.
Validity: the extent to which the study measures what it is intended to measure.
1. Internal validity: the probability that the study design and conduct do not lead
to biased results. In other words, the results are valid for the research population.
2. External validity: the probability that results are valid for other populations as
well. In other words, results are generalizable.
Threats to internal validity:
1. Selection bias: systematic differences in participant characteristics at the start of
a trial.
2. Performance bias: systematic differences, other than the intervention under
study, in the performance of care.
3. Detection bias: systematic differences in obtaining (measuring) outcomes.
4. Selective outcome reporting bias: selection of a subset of the original variables
recorded, on the basis of the results, for inclusion in the publication of trials.
Solution to performance bias -> blinding
Lecture 2: Introduction Economic evaluation
Why do we perform economic evaluations:
1. Scarcity: available resources are never sufficient to allow all available health
interventions to be provided.
2. Choices: how can scarce health resources best be used in order to maximize the
health gain obtained from them?
The basic question of an economic evaluation: is the service or program worth doing
compared with other things we could do with the same resources? In other words, do
the extra benefits justify the extra costs?
An economic evaluation compares two or more alternatives and examines both
outcomes and costs.
Incremental Cost-Effectiveness Ratio (ICER) = (Costs1 – Costs2) / (Effects1 – Effects2)
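The ICER formula above can be sketched in a few lines of Python; the treatment costs and QALY values below are hypothetical numbers chosen only to illustrate the calculation:

```python
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental Cost-Effectiveness Ratio: extra cost per extra unit of effect."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical example: a new drug costs 12,000 euro vs 4,000 euro for usual
# care, and yields 2.5 vs 2.0 QALYs per patient.
print(icer(12_000, 4_000, 2.5, 2.0))  # 16000.0 euro per QALY gained
```

Note that the ratio is only meaningful as an increment: both the cost difference and the effect difference are taken against the same comparator.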
Types of economic evaluations:
1. Cost-minimization analysis: effects considered equal
2. Cost-effectiveness analysis: disease-specific effects
3. Cost-utility analysis: Quality Adjusted Life Years
4. Cost-benefit analysis: Effects expressed in monetary value
Decision rule: accept if ICER < predefined threshold. But which threshold?
Thresholds:
1. UK: £20,000-£30,000 per QALY gained
2. US: $20,000 per life year gained, $50,000 per QALY gained
3. NL: €10,000-€80,000 per QALY gained
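The decision rule combined with a threshold can be sketched as follows; the threshold value is a hypothetical willingness-to-pay figure, not a policy recommendation:

```python
def accept(icer_value, threshold):
    """Accept the intervention if its ICER falls below the
    willingness-to-pay threshold (cost per QALY gained)."""
    return icer_value < threshold

# Hypothetical: an ICER of 16,000/QALY against a 20,000/QALY threshold.
print(accept(16_000, 20_000))  # True
print(accept(95_000, 80_000))  # False
```

As the notes point out directly below, in practice this rule is not applied mechanically: policymakers weigh factors such as burden of disease alongside the ICER.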
The ICER is a poor predictor of decisions; policymakers may accept an intervention
whose ICER exceeds the threshold when:
1. There is lack of an adequate alternative
2. Seriousness of condition (burden of disease)
3. Affordability from the patient perspective
4. Predefined ethical objectives
Piggyback study -> adding collection of economic data to a clinical trial.
1. Advantages: experimental design, prospective -> patient level data, efficient use
of resources available for scientific studies
2. Aim of clinical trials: demonstrate safety and efficacy in a highly controlled
environment -> internal validity
3. Aim of economic evaluations: informing resource allocation decisions, outcomes
and costs in actual clinical practice -> external validity.
Clinical trial vs economic evaluation -> pragmatic/naturalistic trial design to improve
the generalizability of the results.
Study design issues in a pragmatic trial:
1. Comparator: choice of comparator treatment -> usual care, standard care, most
commonly used treatment.
2. Protocol-driven care: resources consumed for trial purposes that would not
typically be consumed in standard clinical practice. This leads to increased
resource consumption, 'case finding' (discovery of a previously undetected
condition during a protocol-mandated visit or diagnostic test), and
artificially high patient compliance.
3. Blinding: patients and providers do not have knowledge of the treatment group
to which patients have been assigned -> all study subjects receive the same tests
and services. Is this actual clinical practice?
4. Study population: in clinical trials patients are carefully selected to minimize
biological variation and highlight the treatment effect.
5. Study sites: institutions and physicians should be representative of clinical
practice as a whole.