Summary of 'Fair prediction with disparate impact: a study of bias in recidivism prediction instruments' by Chouldechova (2017), on bias in AI systems that predict recidivism.
14. Fair prediction with disparate impact: a study of bias in recidivism prediction instruments. - Chouldechova (2017).
This article presents several notions of fairness and explains why, in surprising ways, they sometimes cannot all be satisfied simultaneously. It also explains how trade-offs can be made between different notions of fairness.
The article uses advanced statistical concepts:
• Positive predictive value (PPV)
• False positive rate (FPR)
• False negative rate (FNR)
- recidivism prediction instruments (RPIs): provide decision-makers with an assessment of the
likelihood that a criminal defendant will reoffend in the future
→ important that they are free from discriminatory biases
- false positive rate (FPR): how often someone who does not reoffend is mistakenly classified as high risk
- false negative rate (FNR): how often someone who does reoffend is mistakenly classified as low risk
→ in COMPAS: higher FPRs and lower FNRs for black defendants than for white ones
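As a minimal sketch (illustrative function and variable names, not code from the paper), these three quantities can be computed from the entries of a confusion matrix:

```python
# Sketch: PPV, FPR, and FNR from binary outcomes (1 = reoffended) and
# predictions (1 = classified high risk). Names are illustrative.

def rates(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    ppv = tp / (tp + fp)  # of those flagged high risk, fraction that reoffend
    fpr = fp / (fp + tn)  # non-reoffenders wrongly flagged high risk
    fnr = fn / (fn + tp)  # reoffenders wrongly flagged low risk
    return ppv, fpr, fnr
```

Computing these separately per group is how the COMPAS-style comparison above is made: the FPR and FNR of one group are set against those of the other.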
- disparate impact: refers to settings wherein a penalty policy has unintended disproportionate
adverse impact on a particular group
- high-risk score threshold (sHR): defendants whose score S exceeds sHR are referred to as high risk; the rest are referred to as low risk
- calibration: an algorithm is well calibrated if, among defendants with a given score, the probability of a positive outcome is the same regardless of which group they are in
→ a well-calibrated instrument is free from predictive bias or differential prediction
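One way to check calibration empirically (a hypothetical sketch, not the paper's procedure) is to bin the scores and compare the observed outcome rate per bin across groups:

```python
# Sketch of a calibration check: within each score bin, the observed
# reoffense rate should be (approximately) equal across groups.
# Function name and binning scheme are illustrative assumptions.
from collections import defaultdict

def calibration_table(scores, outcomes, groups, n_bins=10):
    """Observed outcome rate per (group, score bin), scores in [0, 1]."""
    bins = defaultdict(list)
    for s, y, g in zip(scores, outcomes, groups):
        b = min(int(s * n_bins), n_bins - 1)  # clamp score 1.0 into top bin
        bins[(g, b)].append(y)
    return {key: sum(ys) / len(ys) for key, ys in bins.items()}
```

If the instrument is well calibrated, entries for the same bin but different groups are close to each other.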
- predictive parity: an algorithm satisfies predictive parity when, among those classified as high risk, the probability of a positive outcome (i.e. the PPV) is the same regardless of which group those people are in
- error rate balance: FPRs and FNRs are equal for both groups
- statistical parity: an algorithm satisfies statistical parity if the proportion of individuals
classified as high risk is the same for each group (equal acceptance rates / group fairness)
- predictive parity is incompatible with error rate balance when prevalence differs across groups
- when the recidivism prevalence differs across groups, any instrument that satisfies predictive
parity at a given threshold (sHR) must have imbalanced false positive or false negative error
rates at that threshold
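This incompatibility follows from the paper's identity relating the error rates to the prevalence p: FPR = p/(1-p) · (1-PPV)/PPV · (1-FNR). A small numerical illustration (the prevalence, PPV, and FNR values below are made up, not COMPAS estimates):

```python
# Chouldechova's identity: FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR).
# If PPV and FNR are held equal across groups (predictive parity plus
# FNR balance) but prevalence p differs, the FPRs must differ.

def implied_fpr(p, ppv, fnr):
    return p / (1 - p) * (1 - ppv) / ppv * (1 - fnr)

# Illustrative numbers: same PPV and FNR, different prevalence.
fpr_group_a = implied_fpr(p=0.5, ppv=0.6, fnr=0.3)  # higher prevalence
fpr_group_b = implied_fpr(p=0.3, ppv=0.6, fnr=0.3)  # lower prevalence
```

Here the higher-prevalence group is forced to have the higher false positive rate, which is exactly the pattern observed in COMPAS.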