Week 1: Introduction to algorithms and the digital society
Algorithms: Encoded procedures for transforming input data into desired output, based on specified
calculations.
Types of algorithms (illustrated in the sketch after this list):
Prioritization (making an ordered list)
o Emphasize or bring attention to certain things at the expense of others (e.g., Google PageRank)
Classification (picking a category on Netflix)
o Categorize a particular entity into a given class by looking at any number of that entity’s features
(e.g., classifying YouTube content as inappropriate or as humor)
Association (finding links)
o Association decisions mark relationships between entities (e.g., OKCupid dating match)
Filtering (isolating what’s important)
o Including or excluding information according to various rules or criteria. Inputs to filtering
algorithms often take prioritization, classification, or association decisions into account
(e.g., the Facebook news feed)
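A minimal sketch in plain Python (with hypothetical toy data; the names videos, classify and associated
are illustrative, not from the lecture) of the four decision types applied to a small set of videos:

videos = [
    {"title": "Cat compilation", "views": 900000, "tags": {"humor", "animals"}},
    {"title": "Graphic prank", "views": 50000, "tags": {"humor", "shock"}},
    {"title": "Piano tutorial", "views": 120000, "tags": {"music", "howto"}},
]

# Prioritization: make an ordered list (here: rank by view count).
ranked = sorted(videos, key=lambda v: v["views"], reverse=True)

# Classification: assign an entity to a class based on its features.
def classify(video):
    return "inappropriate" if "shock" in video["tags"] else "humor/other"

# Association: mark a relationship between two entities (here: shared tags).
def associated(a, b):
    return len(a["tags"] & b["tags"]) > 0

# Filtering: include or exclude items by a rule; here the filter reuses the
# prioritization and classification decisions above.
feed = [v for v in ranked if classify(v) != "inappropriate"]
print([v["title"] for v in feed])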
Rule-based algorithms
Based on a set of rules
If condition, then result (see the sketch below)
+ Quick and easy to follow
- Only applicable to the specified conditions
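A minimal rule-based sketch (hypothetical spam rule, plain Python): the decision logic is written out
explicitly as "if condition then result", so it is quick to follow but only covers the conditions it spells out:

def label_message(text):
    # Rule: if the condition holds, then the result follows.
    if "free money" in text.lower():
        return "spam"
    return "not spam"

print(label_message("Claim your FREE MONEY now"))  # -> spam
print(label_message("Meeting at 10?"))             # -> not spam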
Machine learning algorithms
Algorithms that learn by themselves (based on statistical models rather than deterministic rules)
They are trained on a corpus of data from which they may learn to make certain kinds of
decisions without human oversight
+ Flexible and amenable to adaptation
- Need to be trained & black box (no explanation of how they work)
E.g., Facebook, Amazon, and Netflix use machine learning (a minimal sketch follows below)
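A minimal machine-learning counterpart to the rule-based sketch above, assuming scikit-learn is
installed; the corpus and labels are hypothetical:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical labelled corpus the model is trained on.
texts = ["free money now", "win a free prize", "meeting at ten", "see you at lunch"]
labels = ["spam", "spam", "not spam", "not spam"]

# "Training" = fitting a statistical model to the corpus instead of writing rules by hand.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The learned decision can be applied to inputs no rule was written for,
# but the fitted weights are not self-explanatory (black box).
print(model.predict(["free lunch prize"]))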
DeepFace algorithm
Facial recognition system developed by Facebook
Identifies human faces in digital images
Recommender system
Algorithms that provide suggestions for content that is most likely to be of interest to a particular user
o E.g., the Facebook news feed, movies on Netflix, songs on Spotify, videos on YouTube, products on
Amazon
Rationale: avoid choice overload, maximize user relevance, and increase efficiency
Techniques
Content-based filtering: algorithms recommend items that are similar to the ones that the user liked
in the past (based on similarity of items)
Collaborative filtering: algorithms suggest recommendations to the user based on items that other
users with similar tastes liked in the past
Hybrid filtering: algorithms combine features from both content-based and collaborative systems,
usually together with additional elements (e.g., demographics); this is the most commonly used
approach (the first two techniques are sketched below)
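A minimal NumPy sketch (hypothetical toy matrices; all variable names are illustrative) contrasting
content-based and collaborative filtering:

import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Content-based filtering: rows = items, columns = item features (e.g., genre flags).
# Recommend items whose features resemble an item the user already liked.
item_features = np.array([[1, 0, 1],   # item 0: liked by the user
                          [1, 0, 1],   # item 1: very similar to item 0
                          [0, 1, 0]])  # item 2: dissimilar
liked = item_features[0]
content_scores = [cosine(liked, item) for item in item_features[1:]]

# Collaborative filtering: rows = users, columns = items (1 = liked).
# Weight other users' likes by how similar their taste is to the target user's.
ratings = np.array([[1, 1, 0],   # target user
                    [1, 1, 1],   # similar user
                    [0, 0, 1]])  # dissimilar user
target, others = ratings[0], ratings[1:]
user_sims = np.array([cosine(target, u) for u in others])
collab_scores = user_sims @ others

# A hybrid system would combine both score types, plus extra signals such as demographics.
print(content_scores, collab_scores)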
Algorithm appreciation
People rely more on advice from algorithms than from other people
Despite blindness to the algorithm’s process (black box)
Automation bias
o Humans tend to over-rely on automation (blind faith in information from computers)
o Information from automation > information from humans
o Perception that humans are imperfect, whereas computers are objective, rational, neutral, reliable, etc.
o E.g., GPS systems, autopilot, spell checkers
Algorithmic aversion
Tendency to prefer human judgments over algorithmic decisions, even when the human
decisions are clearly inferior
Less tolerance for errors from algorithms than from humans (e.g., when the GPS leads you into a
traffic jam, people over-react)
People are averse because they don’t understand the algorithmic process
Algorithmic anxiety: lack of control and uncertainty over algorithms
Aversion or appreciation depend on many other factors
Type of task
Level of subjectivity in decisions (e.g., an objective domain such as financial decisions vs. a
subjective domain like dating)
Individual characteristics (e.g., age, education)
Algorithmic awareness
People have very low algorithmic awareness
Article 1: Zarouali, B., Boerman, S. C., Voorveld, H. A. M., & Van Noort, G. (2022). The Algorithmic
Persuasion Framework in online communication: Conceptualization and a future research agenda. This
article introduces algorithmic persuasion and the APF.
Algorithmic persuasion: any deliberate attempt by a persuader to influence the beliefs, attitudes and
behaviours of people through online communication that is mediated by algorithms.
Algorithmic persuasion framework (APF)
It addresses how data variables are inputs for different algorithmic techniques and algorithmic objectives,
which influence the manifestations of algorithm-mediated persuasion attempts, informing how such attempts
are processed and their intended and unintended persuasive effects.
The output of a persuasion attempt becomes new input in the circular process that reinforces existing
algorithmic systems.
Input
All data that is used in algorithmic persuasion.
o First-party: collected by the sender (e.g., Sephora)
o Second-party: owned by a collaborating party (e.g., the platform Instagram)
o Third-party: collected by parties not directly involved (e.g., a data collector)
Explicit
o Data wittingly disclosed by users in online environments (data that you provide)
Implicit
o The compilation of and/or inferences from data about users, collected without their
awareness (unwittingly)
The algorithm
Encoded procedures for transforming input data into the desired output, based on specified
calculations.
Rule-based, machine learning
Techniques: prioritization (making an ordered list), classification (picking a category), association
(finding links) and filtering (isolating what is important)
Objective of persuader
o Cognitive: Increasing memory of the persuasion attempt, evoking certain thoughts
o Affective: Encouraging positive attitudes toward persuader
o Behavioural: Ensuring continued use of a media platform
Algorithm bias
o Occurs when developers unconsciously program their biases (e.g., prejudices and stereotypes) into
algorithms and/or when machine learning models have been trained on flawed and biased
datasets
Persuasion attempt
Context: e.g., advertising, corporate, or health communication
Nature: e.g., paid/sponsored content or organic content
Medium: Distribution via diverse online channels
Modality: Modes of delivery, e.g., visual, auditory
Persuasion process
How persuasion is processed by recipients
Relevance: Content that aligns well with the interests and preferences of the recipients
Reduction: Reduce a very large corpus of content into a smaller consideration set to avoid choice
overload
Social norms: Ability to persuade people by showing them the preferences and behaviors of others;
recommendations are supplemented with such social cues.
Automation (bias): People attributing greater trust to machines and their recommendations
compared to other sources of recommendation
Reinforcement: The potential of using algorithms to reinforce people’s pre-existing attitudes and
views.
Persuasive effects
Users’ responses to algorithm-recommended communication also function as (data) input for new
communication and selection processes
Intended effects: those effects desired by the persuader when exposing people to the
algorithmically recommended content (e.g., increased brand awareness, brand attitudes, sales)
Unintended effects: Undesired effects of exposure to algorithm-mediated content (e.g., covert
behaviour manipulation, privacy issues, discrimination in recommendations)
Marketing context
+ Increased brand awareness, brand attitudes, sales
- Covert persuasion (people’s behaviour and attitudes are influenced outside their awareness),
privacy issues
Political context
Political microtargeting:
+ Intended effects of exposing people to more relevant political information, reaching social groups
that are difficult to contact, and increasing voter knowledge about individually relevant issues, thus
influencing voters’ attitudes and behaviour.