Summary: Fairness in AI

This document contains notes and summaries covering the content of the course Human-Centered Machine Learning within the Artificial Intelligence Master's at Utrecht University. It covers the following topics: an introduction to FAI, measuring fairness, and interventions and making fair systems.

November 22, 2022 · 11 pages · academic year 2022/2023 · seller: massimilianogarzoni
Course notes on Human-Centered Machine Learning - Explainability Part

— Lecture 1: XAI Intro —

What is Explainable AI/ML
• No consensus on a universal definition: definitions are domain-specific
• Interpretability: ability to explain or to present in understandable terms to a
human
⁃ The degree to which a human can understand the cause of a decision
⁃ The degree to which a human can consistently predict the result of a
model
• Explanation: answer to a why question
⁃ Usually relates the feature values of an instance to its model prediction
in a humanly understandable way
• Molnar: model interpretability (global) vs explanation of an individual
prediction (local)
• Ribeiro: explainable models are interpretable if they use a small set of
features; "an explanation is a local linear approximation of the model's
behavior"
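Ribeiro's idea (the basis of LIME) can be sketched in plain NumPy: perturb the instance, query the black box, and fit a proximity-weighted linear model whose coefficients serve as the local explanation. The black-box function, kernel width, and sample counts below are illustrative assumptions, not the method's canonical settings.

```python
import numpy as np

# Hypothetical black-box model: a nonlinear function of two features.
def black_box(X):
    return 1.0 / (1.0 + np.exp(-(np.sin(X[:, 0]) + X[:, 1] ** 2)))

rng = np.random.default_rng(0)
x0 = np.array([0.5, -0.2])                  # instance to explain

# 1) Sample perturbations in the neighborhood of the instance.
Z = x0 + rng.normal(scale=0.3, size=(500, 2))
y = black_box(Z)

# 2) Weight samples by proximity to x0 (RBF kernel, assumed width 0.3).
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / (2 * 0.3 ** 2))

# 3) Fit a weighted linear model; its coefficients are the explanation.
A = np.hstack([np.ones((len(Z), 1)), Z])    # intercept + two features
W = np.sqrt(w)[:, None]
coef, *_ = np.linalg.lstsq(A * W, y * np.sqrt(w), rcond=None)
print("local feature weights:", coef[1:])
```

The weights describe only the model's behavior near x0: repeating the fit at a different x0 generally yields different coefficients, which is exactly what makes the approximation local.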

Motivation: why do we need XAI?
• Scientific understanding
• Bias/fairness issues: does my model discriminate?
• Model debugging and auditing: why did my model make this mistake? How
can I understand/interfere with the model?
• Human-AI cooperation/acceptance
• Regulatory compliance: does my model satisfy legal requirements
(e.g. GDPR)? Especially in healthcare, finance/banking, insurance
• Applications: affect recognition in video games, intelligent tutoring systems,
bank loan decision, bail/parole decisions, critical healthcare predictions (e.g.
cancer, major depression), film/music recommendation, job interview
recommendation/job offer, personality impression prediction for job interview
recommendation, tax exemption

Taxonomy
• Feature statistics: feature importance and interaction strengths
• Feature visualizations: partial dependence and feature importance plots
• Model internals: linear model weights, DT structure, CNN filters, etc.
• Data points: exemplars in counterfactual explanations
• Global or local surrogates via intrinsically interpretable models
• Example: play tennis decision tree:
⁃ Intrinsic, model specific, global & local, model internals
• Example: CNN decision areas in images:
⁃ Post-hoc, model specific, local, model internals
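The play-tennis example can be written out as hand-coded IF-THEN rules (a hypothetical sketch of the classic tree, not a learned model): the model's internal structure is itself the global explanation, and tracing one path through it explains an individual prediction.

```python
# Hand-coded "play tennis" decision tree: an intrinsically interpretable
# model whose internals double as global and local explanations.
def play_tennis(outlook, humidity, wind):
    if outlook == "sunny":
        return humidity == "normal"   # sunny branch: play iff humidity normal
    if outlook == "overcast":
        return True                   # overcast branch: always play
    return wind == "weak"             # rain branch: play iff wind is weak

# Local explanation for one instance = the single root-to-leaf path taken.
print(play_tennis("sunny", "high", "weak"))
```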

Scope of Interpretability
• Algorithmic transparency: how does the algorithm generate the model?
• Global, holistic model interpretability:

⁃ How does the trained model make predictions?
⁃ Can we comprehend the entire model at once?
• Global model interpretability on a modular level: how do parts of the model
affect predictions?
• Local interpretability for a single prediction: why did the model make a certain
prediction for an instance?
• Local interpretability for a group of predictions:
⁃ Why did the model make specific predictions for a group of instances?
⁃ May be used for analyzing group-wise bias
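A minimal sketch of the group-level view, assuming synthetic scores and a binary group attribute: comparing average model outputs per group is a first step toward spotting group-wise bias that local explanations for that group's instances could then investigate.

```python
import numpy as np

rng = np.random.default_rng(2)
scores = rng.uniform(size=200)           # hypothetical model scores in [0, 1]
group = rng.integers(0, 2, size=200)     # hypothetical binary group attribute

# Average prediction per group: a large gap between groups flags
# group-wise behavior worth explaining further.
means = {g: scores[group == g].mean() for g in (0, 1)}
print(means)
```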

Evaluation of interpretability
• Application-level evaluation (real task):
⁃ Deploy the interpretation method on the application
⁃ Let the experts experiment and provide feedback
• Human-level evaluation (simple task): during development, by lay people
• Function-level evaluation (proxy task):
⁃ Does not use humans directly
⁃ Uses measures from a previous human evaluation
• All of the above can be used for evaluating model interpretability as well
as individual explanations

Properties of explanation methods
• Expressive power: the "language" or structure of the explanations
⁃ E.g. IF-THEN rules, tree itself, natural language etc.
• Translucency: describes how much the explanation method relies on looking
into the machine learning model
• Portability: describes the range of machine learning models with which the
explanation method can be used
• Algorithmic complexity: computational complexity of the explanation method

Properties of individual explanations
• Accuracy: how well does an explanation predict unseen data?
• Fidelity: how well does the explanation approximate the prediction of the black
box model?
• Certainty/confidence: does the explanation reflect the certainty of the machine
learning model?
• Comprehensibility/plausibility:
⁃ How well do humans understand the explanations?
⁃ How convincing (trust building) are they?
⁃ Difficult to define and measure, but extremely important to get right
• Consistency: how much does an explanation differ between models that are
trained on the same task and produce similar predictions?
• Stability: how similar are the explanations for similar instances?
⁃ Stability within a model vs consistency across models
• Degree of importance: how well does the explanation reflect the importance of
features or parts of the explanation?
• Novelty and representativeness
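Fidelity, unlike accuracy, is measured against the black box's own outputs rather than against ground-truth labels. A minimal sketch with an assumed black box and a deliberately imperfect linear surrogate:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))

# Hypothetical black box and a linear surrogate explanation for it.
def black_box(X):
    return X[:, 0] + 0.5 * X[:, 1] ** 2

def surrogate(X):
    return X[:, 0] + 0.5      # linear part only; E[0.5*x1^2] as intercept

# Fidelity: R^2 of the surrogate against the BLACK BOX's predictions
# (comparing against true labels would measure accuracy instead).
y_bb = black_box(X)
y_sur = surrogate(X)
ss_res = np.sum((y_bb - y_sur) ** 2)
ss_tot = np.sum((y_bb - y_bb.mean()) ** 2)
fidelity = 1 - ss_res / ss_tot
print(f"fidelity (R^2 vs black box): {fidelity:.2f}")
```

The surrogate here is faithful to the linear term but misses the quadratic one, so fidelity lands well below 1: exactly the gap this property is meant to quantify.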
