Broad summary Ethics of Technology - lectures, seminars and all readings

A broad summary of all the lectures, seminars and the accompanying readings of the course Ethics of Technology (FI3V19019), including the links to all the articles.

Lectures & Seminars
Lecture 1 - Introduction: technology and technomoral changes
- Reasoning (the process of reflecting on available information to reach a conclusion) is
expressed through arguments, which help us defend or reject claims about the world
- There are 2 types of arguments: deductive (the conclusion is guaranteed, very
powerful) and inductive/abductive (the argument makes the conclusion more likely).
A worked example follows at the end of this list.
- Deductive: 'valid' (if P1 and P2 are true, then C is necessarily true) and
'sound' (valid, and P1 and P2 are actually true) apply to deductive arguments.
- Inductive inferences: based on statistical frequencies that lead to
generalizations going beyond the observed samples; no absolute certainty. A
'strong' inductive argument: the P's support the C. A 'cogent' inductive
argument: a strong argument with true P's. Often used in the real world.
- Abductive arguments: inference to the best explanation. The conclusion is
plausible but not definitively verified.
- Evaluating a philosophy paper: (I) make explicit the argumentative structure of the
paper, then (II) analyze the P’s and how they relate to the conclusion (look at validity,
truth of the P’s, etc.)
- Some useful strategies: (I) find counterexamples that would render the argument
invalid or absurd, (II) analyze the terminology to see whether the author has
carefully defined their concepts, or (III) if you find a counterexample to the
conclusion, work your way backwards until you find which P('s) is/are problematic
- Origin of technology: the Ancient Greeks already thought that humans need
technology. Then the big question: 'What is technology?'
1. One approach: instrumental; technology as a tool, a means to an end, an
instrument that serves ends defined by others. Very intuitive. It is an extension of
human capabilities and it is up to us how we use it; it cannot have
goals/values/biases itself
2. Another approach: cultural; technology as an expression of human culture. It
exists as an element in human culture, and it promises well or ill depending on the
social groups that shape it. Technologies emerge as a result of social, cultural,
political and economic structures. Technology shapes human behavior, but also vice
versa. We must look at the entire system; it is not a one-way street.
- Technological Determinism: the effect of technology on society (technology
dictates social change). Definition: certain effects of a given technology will unfold
necessarily, e.g. 'What is the impact of facial recognition technology on privacy?'
To avoid naive technological determinism, look further than just the technology, at
the social/economic system in which it is embedded
- Winner (reading 1) defines "politics" as arrangements of power and authority in
human associations, as well as the activities that take place within those
arrangements. Two kinds of artifacts (objects made intentionally to accomplish
some purpose) that have politics:
1. Artifacts as means to establish patterns of power and authority: bridges built too
low, preventing certain groups of people from reaching certain areas, reflecting a
racist view: power relations intentionally embedded in technological design. Another
example, without such design intent: the mechanical tomato harvester, which
unintentionally favored large growers; the number of tomato farmers decreased
while production increased. A power imbalance shaped the development of the
technology and led to a replication of existing power structures.

2. Inherently political artifacts: e.g. stairs that exclude people who cannot climb
them, or soap dispensers that did not detect hands with dark skin; unintentional
- So, instead of thinking that there is determinism or that technology is neutral, we
have to think of technology as affecting human agency. Its effect is partly determined
by design, also depends on the social and cultural context, and is sometimes
unforeseen by designers and engineers. Therefore, it is important to think about
responsibilities and making choices. To make these choices, risks should be assessed.
- AI: a big focus of the course. Like technology in general, it is general-purpose; it
can be used for many different things. Special focus on ML: supervised (involving a
training and a prediction step; see the sketch after this list), unsupervised, and
reinforcement learning.
- Lecture topics later in this course - (II) Autonomous systems, (III) Value alignment: AI
systems should be consistent with human values, (IV) Algorithmic bias (e.g., AI CV
hiring tool biased against females) and fairness, (V) Epistemic dimensions of AI
(black box problem, why is a decision made?), (VI) Human Rights and AI (how can
we link the challenges to existing legal frameworks?), (VII) Robot-human relationship
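To make the valid/sound distinction above concrete, here is a small worked example in standard form (my own illustration; the propositions are not from the lecture):

```latex
% A worked example of the valid/sound distinction.
% Modus ponens, a deductively valid form: P1: A -> B, P2: A, so C: B.
\begin{align*}
  P_1 &: \text{If it rains, the street gets wet.}\\
  P_2 &: \text{It rains.}\\
  \therefore C &: \text{The street gets wet.} % \therefore needs amssymb
\end{align*}
% Valid: if P1 and P2 are true, C cannot be false.
% Sound: valid AND P1, P2 are in fact true.
% Contrast: "All fish can fly; goldfish are fish; so goldfish can fly"
% is valid but unsound, since P1 is false.
```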
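And a minimal sketch of the two supervised-learning steps named above (training, then prediction), assuming scikit-learn is available; the toy data and the choice of logistic regression are my own illustration, not from the lecture:

```python
from sklearn.linear_model import LogisticRegression

# Training step: labelled examples (features paired with known labels).
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(X_train, y_train)  # learn a mapping from features to labels

# Prediction step: apply the learned mapping to an unseen input.
print(model.predict([[0.85, 0.75]]))  # expected: [1]
```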

Seminar 1
- A good philosophy paper: (I) leave out irrelevant information, (II) clarity of
exposition: avoid vague or incomplete claims, and keep the essay focused; e.g.,
define 'responsibility' and be careful with ambiguity, (III) go into detail, (IV) nuance:
weigh different forms of terms, (V) thesis statement: one clear, concise sentence,
(VI) avoid poor vocabulary: are you talking about a concept or a theory, for
example? Be careful with words like 'valid'. So, make it clear, do not go in many
directions, and keep it simple.

Are Algorithms Value-Free? (Johnson, reading 2)
- The author applies arguments from feminist philosophy of science to ML programs to
show that these programs are value laden because they use induction. The focus is
on the design of the programs, not the data.
- Thinking that induction is objective is problematic because of: (I) the problem of
induction (the future may not resemble the past, so generalizations can fail) and (II)
the problem of underdetermination (the same data can support different conclusions).
- Meta-ethical question: what is bias? We have to accept some sort of bias, as we
cannot let these problems stop us from doing scientific research; otherwise we get
nowhere. Look at epistemology, and then at epistemic values. This leads to the
value-free ideal: the view that only epistemic values, and no social or ethical
values, should play a role in justifying scientific claims.
- Arguments against the value-free ideal:
1. The argument against demarcating between “epistemic” (knowledge-related)
and “non-epistemic” (social/ethical) values in science:
a. The justification argument against demarcation: two values: consistency
(perpetuates historically unjust norms) and novelty (allows us to move
past historically unjust norms). These values stand in contrast; therefore
we cannot demarcate between epistemic and non-epistemic values,
because the values we choose (and how we justify them) are informed
by socio-political contexts.
b. The constitutive argument against demarcation: epistemic values are
context-dependent and therefore take up the non-epistemic values of their
time (e.g. in medicine, where men were treated as the default research
subjects, with dangerous consequences for women). In ML: the decision
to use a particular data analysis method depends on the context and aims
of the programmer and the program (such decisions are value laden)
2. Argument from inductive risk: we might get things wrong, so how sure do we
need to be before accepting/rejecting a hypothesis? (e.g. in ML: image
recognition for office lights vs. for self-driving cars → the potential consequences
of being wrong should be considered; see the threshold sketch after this list).
Ethical values play a role in guiding such decisions. Therefore, the adoption of a
hypothesis is value laden
- These arguments lead to the following statement: scientific and technological
practices are deeply embedded in social and ethical contexts
- Conclusion: accept the fact that algorithms are value laden, otherwise you have to
abandon the whole technology
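The inductive-risk point lends itself to a small code illustration (a hypothetical sketch of mine, not from Johnson's paper): the evidence threshold at which a system acts on a hypothesis is itself a value-laden choice, because it trades off the two ways of being wrong.

```python
# Hypothetical sketch: the acceptance threshold encodes a value
# judgment about which error matters more -- a false alarm in a
# low-stakes setting (office lights) vs a miss in a high-stakes
# one (self-driving car obstacle detection).
def accept(p_obstacle: float, threshold: float) -> bool:
    """Accept the hypothesis 'there is an obstacle' if the model's
    probability estimate exceeds the chosen threshold."""
    return p_obstacle >= threshold

p = 0.30  # model's estimated probability that an obstacle is present

# Office lights: a false alarm merely wastes electricity,
# so we demand strong evidence before acting.
print(accept(p, threshold=0.9))   # False

# Self-driving car: missing a pedestrian is catastrophic,
# so we act (brake) on much weaker evidence.
print(accept(p, threshold=0.1))   # True
```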

Lecture 2 - Autonomous Systems, Human Control and Responsibility Gaps
- (Act-)Utilitarianism: a person's act is morally right if and only if it produces the best
possible results in that specific situation. Focus on the consequences of an act, not
on motives (consequentialism). Acts are judged on the basis of their aggregated
utility (a toy calculation follows at the end of this list). Advantage: a clear calculus in
which every person counts the same. Criticism: strange moral imperatives (think of
the 'trolley problem' thought experiment: "Would you push a person in front of the
train to save 5 lives?"). Also, quantification is often tricky, and how well can we
predict consequences?
- Deontological ethics (Kant): whether an act is right or wrong depends on whether it
conforms to certain principles or rules, irrespective of the outcomes. Advantage:
explains certain intuitions about right conduct and upholds human values.
Criticism: it can lack moral nuance in specific situations (e.g. if you have to lie to
protect someone, what is ethical? You may not lie, but you also have a duty to save
lives → conflicting duties)
- Virtue ethics (Aristotle): focus on the good life and good character; there are no
universal rules, and the right act depends on context. The right thing to do is the
virtuous thing to do. Advantage: a holistic approach and the flexibility to adapt to
different contexts. Criticism: cultural relativism and a lack of specific guidance (it
does not tell you how to act in a specific situation; no clear guidelines)
- Real-world situation: self-driving cars. Who should the car save, the person in the
car or the pedestrians? Which is the moral choice? It depends on cultural differences.
Example with the elderly: in some countries, people would choose to sacrifice the
elderly; in other countries, people would choose to sacrifice those inside the car.
Two viewpoints (Nyholm, reading 3):
1. Individual choice: the owner should choose their car's ethical settings, reflecting
their personal ethical beliefs. Critique: this allows harmful biases such as racism
2. Uniform settings: all cars have the same settings, to maximize overall safety and
ensure coordinated crash responses
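To illustrate the 'clear calculus' attributed to act-utilitarianism above, here is a toy aggregation in code (my own sketch; the utility numbers are arbitrary stand-ins, not from the lecture):

```python
# Toy act-utilitarian calculus: choose the act whose summed utility
# across everyone affected is highest. Utilities are made-up numbers.
acts = {
    # trolley-style choice: one utility score per person affected
    "do nothing":      [-100, -100, -100, -100, -100, 0],  # 5 die
    "push one person": [0, 0, 0, 0, 0, -100],              # 1 dies
}

def aggregate(utilities):
    # every person counts the same: a simple sum
    return sum(utilities)

best = max(acts, key=lambda act: aggregate(acts[act]))
print(best)  # -> "push one person": highest aggregated utility,
             # which is exactly the 'strange moral imperative'
             # the criticism points at
```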
