Summary: Topic Algorithmic Persuasion in the Digital Society
Week 1: Introduction to algorithms and the digital society
Defining algorithms
Algorithms: encoded procedures for transforming input data into a desired output, based on specified
calculations.
o E.g. you can see it as a recipe; follow the instructions one by one and in the end you get a
result.
o Input → set of rules to obtain the expected output from the given input (algorithm) → output.
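A minimal illustration of this input → rules → output idea (the averaging "recipe" and the numbers are made up for illustration):

```python
# A recipe-like procedure: input data in, specified calculations applied,
# desired output returned.
def average(numbers):
    total = sum(numbers)           # step 1: add all inputs together
    return total / len(numbers)    # step 2: divide by how many there are

print(average([2, 4, 6]))  # input [2, 4, 6] → output 4.0
```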
Algorithmic persuasion: the practice of using algorithms to create online persuasive environments that
can influence the online attitudes and behaviours of users in a preferred direction in a subtle way.
Algorithmic power: four main structures in which algorithms can be used (a small sketch of all four follows this list)
• Prioritization (making an ordered list)
o Emphasize or bring attention to certain things at the expense of others; imposing an order on
certain content.
o E.g. Google PageRank: prioritizing search results based on certain aspects (e.g.
location); otherwise you would get a long list of irrelevant results.
→ Making the results more relevant.
• Classification (picking a category)
o Categorize a particular entity into a given class by looking at any number of that entity’s
features; classifying certain content with a specific label.
o E.g. inappropriate YouTube content.
• Association (finding links)
o Association decisions mark relationships between entities and draw their power
through both semantics and connotative ability.
o E.g. OKCupid and Tinder matches → out of many profiles, the algorithm looks for
associations/relationships between profiles that are similar to each other and makes sure these
profiles match.
• Filtering (isolating what’s important)
o Including or excluding information according to various rules or criteria. Inputs to
filtering algorithms often take prioritizing, classification, or association decisions into
account.
o Filtering almost always takes the previous three into account; it applies those other logics
before filtering something in or out.
o E.g. Facebook news feed → filters out irrelevant information and only selects relevant
information.
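A small Python sketch of how these four structures could look on toy data (the post fields, labels, and thresholds are illustrative assumptions, not any real platform’s logic):

```python
# Toy content items to demonstrate the four structures of algorithmic power.
from itertools import combinations

posts = [
    {"title": "Local football results", "topic": "sports", "relevance": 0.9},
    {"title": "Celebrity gossip",       "topic": "gossip", "relevance": 0.2},
    {"title": "Transfer rumours",       "topic": "sports", "relevance": 0.6},
]

# 1. Prioritization: make an ordered list (most relevant first).
ranked = sorted(posts, key=lambda p: p["relevance"], reverse=True)

# 2. Classification: pick a category (label) for each post.
for p in posts:
    p["label"] = "flagged" if p["topic"] == "gossip" else "appropriate"

# 3. Association: find links between posts that share a topic.
linked = [(a["title"], b["title"]) for a, b in combinations(posts, 2)
          if a["topic"] == b["topic"]]

# 4. Filtering: include only posts above a relevance threshold.
feed = [p for p in ranked if p["relevance"] > 0.5]

print([p["title"] for p in feed], linked)
```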
Algorithmic power: two broad classes of algorithms
1. Rule-based algorithms
o Based on a set of rules or steps
o Typically “IF – THEN” statements (if someone sees X, then show Y)
o + Pros: quick to write, easy to follow
o – Cons: only applicable to specified conditions (not really flexible); writing out all the rules takes a lot of time (a sketch contrasting both classes follows this list).
2. Machine learning algorithms
o Algorithms that “learn” by themselves (based on statistical models rather than
deterministic rules)
o These algorithms are “trained” based on a corpus of data from which they may “learn”
to make certain kinds of decisions without human oversight (no need to formulate the
rules one by one)
o + Pros: flexible and amenable to adaptations
o - Cons: need to be trained, black-box (you don’t know why they are making certain
decisions). Only interesting if a lot of data is available (to train them).
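A minimal sketch contrasting the two classes in a hypothetical ad-showing scenario (scikit-learn is assumed to be available; the features, labels, and data are made up):

```python
# Rule-based vs. machine learning on the same toy decision: which ad to show.
from sklearn.tree import DecisionTreeClassifier

# Rule-based: hand-written IF-THEN statements; quick to write but rigid.
def rule_based_ad(user):
    if user["watched_sports"]:        # IF someone watches X ...
        return "sports_ad"            # ... THEN show Y
    return "generic_ad"

# Machine learning: the decision rule is learned from labelled examples.
# Features per user: [watched_sports, watched_cooking]; label: ad clicked.
X = [[1, 0], [1, 1], [0, 1], [0, 0]]
y = ["sports_ad", "sports_ad", "cooking_ad", "generic_ad"]

model = DecisionTreeClassifier().fit(X, y)   # "training" phase

print(rule_based_ad({"watched_sports": True, "watched_cooking": False}))
print(model.predict([[0, 1]]))               # prediction for a new user
```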
Machine learning algorithms are almost always used in data-intensive online environments; they have
become the standard.
→ Logic: train the algorithm on a sample of data, and then it can be used for making predictions about
other data.
→ Facebook, Amazon, Netflix, etc. all use machine learning algorithms.
• They have loads of data, thus the machine has lots to learn.
• A few hundred lines of code can easily generate a model consisting of millions of lines.
• Example: Facebook’s DeepFace algorithm identifies human faces in digital images. It is trained
on a large ‘identity labelled dataset’ of four million facial images. The algorithm can
determine similarities between faces and recognize people with about 97% accuracy. According
to S. Zuboff’s book, such systems can be sold to businesses and authoritarian regimes.
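A minimal sketch of the train-then-predict logic described above (scikit-learn assumed available; the user features, labels, and numbers are made up, not real platform data):

```python
# Train the algorithm on a sample of data, then predict for unseen data.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Features per user: [hours watched, likes given]; label: 1 = keeps subscription.
X = [[1, 0], [2, 1], [8, 5], [9, 7], [3, 1], [7, 6]]
y = [0, 0, 1, 1, 0, 1]

# Train on a sample of the data ...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0)
model = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

# ... then use the trained model to make predictions about other data.
print(model.predict(X_test))
```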
Recommender Systems
Recommender systems are algorithms that provide suggestions for content that is most likely of
interest to a particular user. These algorithms decide which content to display to whom based on certain
criteria → users receive distinct streams of online content.
Examples: news feed on Facebook, movies on Netflix, songs on Spotify, videos on YouTube.
Arguments to do this (rationale):
• Avoid choice overload; the algorithm filters out the most interesting information, which makes
consuming media easier.
• Maximize user relevance
• Increase work efficiency (but: this also makes people fear that algorithms take over jobs)
Most used techniques of recommender systems:
1. Content-based filtering: these algorithms learn to recommend items that are similar to the
ones that the user liked in the past (based on similarity of items). Can be based on:
o Explicit data, e.g. giving a movie 4 out of 5 stars (you ‘give’ data to the algorithm)
o Implicit data, e.g. the amount of time you watch a specific video on YouTube, which is
used as a metric of you liking something (so you might not be aware of this).
2. Collaborative filtering: these algorithms suggest recommendations to the active user based on
items that other users with similar tastes/characteristics liked in the past.
3. Hybrid filtering [mostly used]: these algorithms combine features from both content-based
and collaborative systems, usually together with other additional elements such as demographics.
For example: Netflix’s recommender system. (A small sketch of content-based and collaborative filtering follows below.)
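A minimal Python sketch of the first two techniques using cosine similarity (the movie names, feature vectors, and ratings are made-up illustrations, not how Netflix actually computes recommendations):

```python
# Content-based vs. collaborative filtering on toy data.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Content-based: recommend items similar to what the user liked before.
# Item features: [action, romance, comedy]
items = {"Heat": [1, 0, 0], "Notebook": [0, 1, 0], "Die Hard": [1, 0, 1]}
liked = "Heat"
content_scores = {name: cosine(items[liked], feats)
                  for name, feats in items.items() if name != liked}

# Collaborative: recommend what users with similar tastes liked.
ratings = {"ann": {"Heat": 5, "Notebook": 1},
           "bob": {"Heat": 4, "Die Hard": 5},
           "me":  {"Heat": 5}}
all_items = sorted({i for r in ratings.values() for i in r})

def user_vector(user):
    return [ratings[user].get(i, 0) for i in all_items]

sims = {u: cosine(user_vector("me"), user_vector(u))
        for u in ratings if u != "me"}
most_similar = max(sims, key=sims.get)
collab_recs = [i for i in ratings[most_similar] if i not in ratings["me"]]

print(content_scores)   # content-based: similarity of other items to "Heat"
print(collab_recs)      # collaborative: items liked by the most similar user
```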
How people perceive algorithms
Two main camps in literature:
1. Appreciation: people like that algorithms take over decisions from humans.
- People rely more on advice from algorithms than from other people, despite having no
insight into the algorithm’s process (‘black box’).
- Automation bias: the natural tendency to over-rely on everything that is automated.
o Humans tend to over-rely on automation (blind faith in information from
computers).
o Information from automation > information from humans.
o Humans are imperfect, whereas computers are objective, rational, neutral,
reliable. E.g. GPS system, automatic pilot, spelling checker.
2. Aversion: people do not like that algorithms take over decisions from humans.
- Tendency to prefer human judgements over algorithmic decisions, even when the
human decisions are clearly inferior.
- Less tolerance for errors from algorithms than from humans, e.g. when the GPS leads you
into a traffic jam, people over-react.
- People are averse because they don’t understand the algorithmic process; they think
human decisions are easier to understand.
- Algorithmic anxiety: the lack of control and uncertainty over algorithms creates
anxiety among Airbnb hosts → they have double work: a) appealing to customers and
b) appealing to algorithms (how can I get the algorithm to recommend me more often?).
Aversion or appreciation: a nuanced view
Aversion or appreciation depend on many other factors:
• Type of task:
o Mechanical tasks: algorithmic decisions are seen as more fair and trustworthy than human
judgement (reliability and lack of bias).
o Human tasks: people think algorithms do not fit (seen as less fair, less trustworthy,
eliciting less positive emotions).
Type of task matters in evaluating algorithms.
• Level of subjectivity in decisions:
o Objective decisions (e.g. financial decisions): people want to rely more on algorithmic
advice.
o Subjective decisions (e.g. dating advice): people rely more on human advice; affect-
based decisions are needed.
People prefer an algorithm for objective decisions and a person for subjective decisions.
• Individual characteristics: not everyone reacts the same way to algorithms.
o Usefulness: age (-): the higher the age, the less useful people find algorithms;
education (+): the higher educated, the more useful they perceive algorithms; gender
(+): women find algorithms more useful than men.
o Usefulness and fairness: technical knowledge (+): the more technical knowledge someone has,
the more useful and fair they find algorithms; online privacy concerns (-): the more
concerned someone is about privacy, the less fair and useful they perceive algorithms.
Also: the choice is not always ‘human vs algorithm’. They can go hand-in-hand and work in
partnership, benefiting from each other’s strengths and accepting each other’s weaknesses.
→ “Power of veto”: whenever algorithms are used, it is important that people have the right to interfere.
Humans should always be in a position to decide otherwise, and not over-rely on algorithms.
➔ As such, algorithms should also be seen as tools that can ‘support’ rather than ‘replace’ humans in
making decisions.
Do people know how algorithms curate content?
According to a study by Zarouali, Boerman, and de Vreese (2020), people have very low algorithmic
awareness; they are not aware that algorithms make decisions for them. This was studied in the cases of
Facebook, Netflix, and YouTube.
Week 2: Algorithmic persuasion in online advertising
Article Boerman, Kruikemeier, & Zuiderveen Borgesius (2017): online behavioural advertising
Online behavioural advertising (OBA): the practice of monitoring people’s online behaviour and using
the collected information to show people individually targeted advertisements. This can include web
browsing data, search histories, media consumption data, purchases, or posts on social media.
→ OBA can be considered a type of personalized advertising (= tailoring advertising to individuals),
but personalized advertising has a broader scope than OBA and can also include advertising adapted to
personal data that are not based on online behaviour (e.g. location). OBA refers only to advertising that
is based on people’s online behaviour.
→ OBA uses personal information to tailor ads in such a way that they are perceived as more personally
relevant. The tracking of online activities, collection of behavioural data, and dissemination of
information often happen covertly; this may be harmful and unethical.
The study by Boerman and colleagues first conducted a literature review of 32 empirical studies
published between 2009 and 2016. The findings of this review are much more nuanced than the
industry’s promise that OBA boosts ad effectiveness: the effects depend on advertiser- and consumer-
controlled factors.
Then, a framework was developed that identifies the factors that explain consumers’ responses to
OBA and illustrates their interconnectedness. This model (figure 1) explains how consumers perceive
and process online ads. Three main types of factors are distinguished: 1) advertiser-controlled factors,
2) consumer-controlled factors, and 3) advertising outcomes. Within these categories, different aspects can
be distinguished: