JADS Master - Interactive And Explainable AI Design Summary

Summary for the Interactive And Explainable AI Design course of the Master Data Science and Entrepreneurship.
1. Explainable & Interpretable AI
Case-based Reasoning
Reasoning based on similar cases from the past (e.g. determining accurately and in a personalized manner how skaters should build up their lap times in order to achieve a faster end result).

Computer Metaphor
The human is a symbol processor like a computer (limited attention and memory capacities).
● Information-processing approach:
○ The mental process can be compared with computer operations.
○ The mental process can be interpreted as information progressing through a
system in stages.
● Serial processing and symbolic representation.

General Problem Solver
Means-end analysis compares the current state with the goal state and chooses the action that brings you closer to the goal.
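As an illustration, a minimal sketch of means-end analysis on an invented toy domain (the bit-flipping states and actions are assumptions for illustration, not the original General Problem Solver):

```python
# distance: how many components of the state still differ from the goal.
def distance(state: tuple, goal: tuple) -> int:
    return sum(s != g for s, g in zip(state, goal))

def means_end_search(state, goal, actions):
    """Greedily apply the action that most reduces the difference to the goal."""
    while state != goal:
        best = min((a(state) for a in actions), key=lambda s: distance(s, goal))
        if distance(best, goal) >= distance(state, goal):
            return None  # no action reduces the difference: stuck
        state = best
    return state

# Toy domain: states are bit tuples, actions flip a single bit.
actions = [lambda s, i=i: s[:i] + (1 - s[i],) + s[i + 1:] for i in range(3)]
print(means_end_search((0, 0, 0), (1, 0, 1), actions))  # -> (1, 0, 1)
```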

Connectionist Approach
Rumelhart and McClelland (1986)
Connectionism / neural networks → parallel processing & distributed representation → inspired current AI & deep learning techniques (bringing new challenges for explanation).

Why Do We Need Explainability?
Important for trust and actual use/deployment of AI / ML.
● Model validation: avoid bias, unfairness or overfitting, etc.
● Model debugging & improvement: improve model fit, adversarial learning, reliability &
robustness.
● Knowledge discovery: explanations provide feedback to data scientists.
● Trust & technology acceptance: explanations might convince users to adopt
technology and have more control.

What Is A Good Explanation?
Confalonieri et al. (2020) & Molnar (2020)
● Contrastive: why this and not that (counterfactual).
● Selective: focus on a few important causes.
● Social: should fit the mental model of the explainee / target audience and consider
social context + prior belief.
● Abnormalness: humans like rare causes.
● Truthfulness: less important for humans than selectiveness.

Important Properties of ML Explanations
● Accuracy: does the explanation predict unseen data? As accurate as the model?
● Fidelity: does the explanation approximate the prediction of the model (important for
black-box)?




● Consistency: same explanations for different models?
● Stability: similar explanations for similar instances?
● Comprehensibility: do humans get it?

Types Of Explanations
Confalonieri et al. (2020)
● Symbolic reasoning systems: based on a knowledge base and production rules / logical inferences (inherently interpretable / explainable).
● Sub-symbolic reasoning: representations are distributed and explanations are approximate models; a focus on causability / counterfactuals can help the user.
● Hybrid / neural-symbolic systems: use the symbolic system to explain models coming
from the sub-symbolic system.

▶ Explanations as lines of reasoning: domain knowledge as production rules.
● Q&A module: explanations on the knowledge base.
● Reasoning status checker: evaluates the sequence of rules used.
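A minimal sketch of how a line-of-reasoning explanation can be produced by forward chaining over production rules (the rule base below is invented for illustration):

```python
# The trace of fired rules doubles as the "line of reasoning" explanation,
# usable by a Q&A module or a reasoning status checker.
rules = [
    ({"has_fever", "has_cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts: set) -> tuple[set, list]:
    trace = []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(conditions)} -> {conclusion}")
                changed = True
    return facts, trace

facts, trace = forward_chain({"has_fever", "has_cough", "short_of_breath"})
print(trace)  # each step explains why a conclusion was derived
```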




▶ Explanations as problem-solving activity: explanations need different levels of abstraction, and should focus on explaining the problem solving of the task.

Machine Learning / AI Interpretability
● Methods:
○ Glass-box models (inherently interpretable): regression, decision trees, GAM.
○ Black-box models: neural networks, random forest → requires post-hoc
explanations.
○ Model-specific methods: explanation specific to ML techniques.
○ Model-agnostic methods: treat the ML model as a black box, using only its inputs and outputs.
● Classifications:
Molnar et al. (2020)
○ Analyzing the components of the model (model-specific).
○ Explaining individual predictions (local explanation / counterfactuals).
○ Explaining global model behavior.
○ Surrogate models trained on the inputs / outputs (model-agnostic); see the sketch after this list.
Confalonieri et al. (2020)
○ Global methods.
○ Local methods.
○ Introspective methods.
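A minimal sketch of the surrogate-model idea under an assumed sklearn setup (the dataset and black-box model are stand-ins): fit an interpretable tree on the black box's inputs and predicted outputs, then measure fidelity.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
black_box = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Train the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(data.data, black_box.predict(data.data))

# Fidelity: how closely does the surrogate approximate the black box?
print("fidelity:", surrogate.score(data.data, black_box.predict(data.data)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```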





Analyzing Components of Interpretable Models
● Linear Regression: weighted sum of features.
● Decision trees: interpret the learned structure.

▶ This does not work for high-dimensional data; remedies include pruning decision trees or shrinking coefficients in regression (LASSO).
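For instance, under an assumed sklearn setup (not from the course material), reading off the components looks like this; the LASSO step shows the coefficient shrinkage mentioned above:

```python
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor, export_text

data = load_diabetes()

# Linear model: the explanation is the weighted sum of features; LASSO
# shrinks most coefficients to zero, keeping the model readable in
# higher dimensions.
lasso = Lasso(alpha=0.5).fit(data.data, data.target)
print(dict(zip(data.feature_names, lasso.coef_.round(1))))

# Decision tree: the explanation is the learned structure itself.
tree = DecisionTreeRegressor(max_depth=2).fit(data.data, data.target)
print(export_text(tree, feature_names=data.feature_names))
```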

Analyzing Components of Complex Models
Molnar et al. (2020)
● Feature maps visualizing layers in CNN models.
● Analyze the structure of random forests (Gini Importance).
● Add interpretability constraints to the model.

Global Explanations
How does the model behave on average across the dataset?
● Generate symbolic representations: fit interpretable model on input / output relations
of the trained model.
● Feature importance ranks: permute / remove features and observe changes in model output (sketched below).
● Feature effect: effect of a specific feature on the model outcome.
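A sketch of the permutation approach using sklearn's permutation_importance (the dataset and model are assumptions for illustration; in practice the importances are computed on held-out data):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# Permute each feature n_repeats times and record the drop in score.
result = permutation_importance(model, data.data, data.target,
                                n_repeats=10, random_state=0)

# Rank features by mean importance (largest drop first).
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.4f}")
```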

Local Interpretable Model-agnostic Explanations (LIME)
An algorithm that can explain the predictions of any classifier or regressor by approximating it locally with an interpretable model.
● Interpretable: provide quantitative understanding between input and response.
● Local fidelity: explanations must be at least locally faithful.
● Model-agnostic: explainer should be able to explain any model.
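A hand-rolled sketch of the LIME idea rather than the lime package itself (the sample count and kernel width are arbitrary choices): perturb the instance, weight samples by proximity, and fit a locally faithful linear model.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

rng = np.random.default_rng(0)
x = X[0]                      # instance to explain
scale = X.std(axis=0)
Z = x + rng.normal(0.0, 1.0, size=(500, X.shape[1])) * scale  # perturbations

# Black-box outputs for the perturbed samples.
proba = black_box.predict_proba(Z)[:, 1]

# Weight samples by proximity to x (Gaussian kernel; width is arbitrary).
weights = np.exp(-np.linalg.norm((Z - x) / scale, axis=1) ** 2 / 25.0)

# The locally weighted, sparse-ish linear model is the explanation.
local = Ridge(alpha=1.0).fit(Z, proba, sample_weight=weights)
top = np.argsort(np.abs(local.coef_))[::-1][:5]
print(load_breast_cancer().feature_names[top])
print(local.coef_[top].round(4))
```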

Shapley Values
The average marginal contribution of a feature value across all possible coalitions.
Gives contrastive explanations (better than LIME), but has high computational time.
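For a handful of features the definition can be computed exactly by enumerating all coalitions; a toy sketch (real usage relies on approximations such as the shap package, since the number of coalitions grows as 2^n):

```python
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(predict, x, background):
    """Exact Shapley values of predict at x relative to a background point."""
    n = len(x)
    phi = np.zeros(n)

    def value(S):
        # Features in coalition S take x's values; the rest stay at background.
        z = background.copy()
        z[list(S)] = x[list(S)]
        return predict(z)

    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                w = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi[i] += w * (value(S + (i,)) - value(S))
    return phi

# Toy linear model: the Shapley values recover the feature contributions.
predict = lambda z: 2 * z[0] + 3 * z[1] - z[2]
print(shapley_values(predict, np.array([1.0, 1.0, 1.0]), np.zeros(3)))
# -> [ 2.  3. -1.]
```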

Counterfactuals
How does the output change when the input changes? Counterfactuals need to be actionable (unchangeable features such as gender or race make bad counterfactuals).
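A naive counterfactual search sketch under assumed data and model (real methods add distance and actionability constraints): nudge the most influential feature until the prediction flips.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

x = X[0].copy()
original = model.predict([x])[0]
step = 0.1 * X.std(axis=0)           # move in small, per-feature steps

for _ in range(200):
    if model.predict([x])[0] != original:
        break
    # Push the decision score toward the opposite class via the single
    # feature with the largest per-step effect (a deliberately greedy choice).
    direction = model.coef_[0] if original == 0 else -model.coef_[0]
    i = np.argmax(np.abs(direction) * step)
    x[i] += np.sign(direction[i]) * step[i]

print("prediction:", original, "->", model.predict([x])[0])
print("features changed:", int((np.abs(x - X[0]) > 1e-12).sum()))
```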

Recommender Systems
Often black-box, but can be explained abstractly in terms of the underlying algorithm.
● Item-based: we recommend you … because you also like …
● User-based: people like you also like …
● Feature-based: you like this because you like aspects X and Y.
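A toy item-based sketch with an invented ratings matrix: recommend the unrated item most similar to one the user likes, and phrase that similarity as the explanation.

```python
import numpy as np

items = ["item_a", "item_b", "item_c", "item_d"]
ratings = np.array([      # rows: users, columns: items (0 = unrated)
    [5, 0, 0, 1],
    [4, 5, 1, 1],
    [5, 4, 0, 2],
    [1, 1, 5, 4],
])

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

user, liked = 0, 0  # user 0 rated item_a highly
unrated = [j for j in range(len(items)) if ratings[user, j] == 0]

# Item-based: pick the unrated item whose rating column is most similar
# to the liked item's column.
best = max(unrated, key=lambda j: cosine(ratings[:, liked], ratings[:, j]))
print(f"We recommend {items[best]} because you also like {items[liked]}.")
```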

Challenges
Molnar et al. (2020) & Confalonieri et al. (2020)
● Statistical uncertainty: most often not represented in the explanations, so how much
can we trust them?
● Causality: a causal interpretation is usually the goal, but models typically capture correlations, and so do the explanations.


