Algorithmic Persuasion in the Digital Society
Lecture 1 – 05/04/2022
INTRO TO ALGORITHMS AND THE DIGITAL SOCIETY
Def. of Algorithm: encoded procedures for transforming input data into a desired output, based on
specified calculations (Gillespie, 2014)
Algorithmic Power/Functions
Prioritization-> making an ordered list; emphasizing or bringing certain things/content to
attention at the expense of others.
Classification-> picking a category, according to features of the content/entity.
Association-> finding links, finding relations among the data.
Filtering-> including or excluding content based on certain criteria or rules. Isolating what's
important in the content.
Rule-based algorithms: the simplest type, based on very specific steps or rules specified in advance
(IF->THEN). They are only applicable to the specific conditions and context they were designed in
and for.
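A minimal sketch of the IF->THEN idea, using hypothetical message-moderation rules (the rules and function name are illustrative, not from the lecture):

```python
def rule_based_flag(message):
    """Moderate a message using fixed, hand-written rules (IF -> THEN)."""
    text = message.lower()
    # IF the message contains a banned phrase THEN flag it
    if "free money" in text:
        return "flagged"
    # IF the message is entirely shouted (all caps) THEN flag it
    if message.isupper():
        return "flagged"
    # No rule matched: default decision
    return "allowed"
```

Because every rule is written in advance, the procedure only works for the narrow condition it was designed for; a new kind of unwanted message requires a new hand-written rule.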
Machine learning algorithms: algorithms learn by themselves, based on statistical models rather than
rules. They are trained on a corpus of data to learn to make certain kinds of decisions without
human oversight. Very flexible and adaptable to different contexts, but they need to be trained
(which requires large amounts of data) and are a black box (at some point you don't know why the
algorithm took a certain decision or how it came to it).
-Recommender Systems: algorithms that provide suggestions for content that is most likely
of interest to a particular user. They decide which content to show/display to whom according
to specific criteria.
1) Content-based filtering: algorithms learn to recommend items that are similar to ones
the user liked in the past, based on the similarity of items.
2) Collaborative filtering: recommendations based on items that other people with
similar tastes liked in the past.
3) Hybrid filtering: a combination of the first two.
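Content-based filtering can be sketched as follows: describe each item with a feature vector, average the vectors of the items a user liked into a profile, and recommend the candidate closest to that profile by cosine similarity. The items and their feature values here are made up for illustration:

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical item features: [action, romance, comedy]
items = {
    "film_a": [1.0, 0.0, 0.2],
    "film_b": [0.9, 0.1, 0.0],
    "film_c": [0.0, 1.0, 0.8],
}

def recommend_content_based(liked, candidates):
    """Recommend the candidate most similar to the user's liked items."""
    dims = len(next(iter(items.values())))
    # User profile = average feature vector of liked items
    profile = [sum(items[i][k] for i in liked) / len(liked) for k in range(dims)]
    return max(candidates, key=lambda c: cosine(items[c], profile))
```

A user who liked the action-heavy `film_a` gets the similarly action-heavy `film_b` rather than the romance `film_c`; the recommendation depends only on the user's own history and the item features, not on other users.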
How do people react?
Two different reactions; which one prevails depends on the type of task, the level of subjectivity
in the decisions, individual characteristics, etc.
Algorithmic appreciation: people rely heavily on and trust algorithms, ignoring their faults. Blind
faith in automation.
Algorithmic aversion: people distrust and avoid algorithmic decisions, especially after seeing an
algorithm err, even when it outperforms humans.
Lecture 2 – 12/04/2022
ONLINE ADVERTISING
Online Behavioral Advertising
Effects of synced advertising (reading)
Personalized advertising across media in real time. Based on current media behavior; increases the
chance of brand exposure. Ads on different devices are synchronized.
There's no difference in brand memory regardless of the sequence. There's more attention to
TV commercials than to tablet ads. More important: attention to one message drives attention to the
other. TV commercials receive more attention when the tablet ad was placed before them or shown
simultaneously. The tablet ad receives more attention when it was shown after the TV commercial.
Perception of it: the majority finds it unacceptable. There are benefits such as personal relevance
and added value, but the costs are privacy risks and intrusiveness.
Filtering (reading)
Content-based filtering: based on the user's own past behavior and preferences.
Collaborative filtering: based on items that similar users liked in the past. Leads to a
higher level of bandwagon perception among users ("if other people like it, it must be good",
based on the opinions of people who have information about the product, so they know it),
which is associated with a more positive evaluation of the recommendation system compared to
content-based filtering. Works well for experience products.
NFC: Need for Cognition, the need to engage in effortful thinking. People with high NFC appreciate
content-based filtering more than collaborative filtering because it is based on their own
preferences (more transparency).
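Collaborative filtering, by contrast, looks at other users. A toy sketch under assumed data: find the other user who agrees most with the target user on co-rated items, then recommend something that neighbour liked but the target user has not seen. The users, items, ratings, and the agreement measure are all invented for illustration:

```python
# Hypothetical ratings: user -> {item: rating on a 1-5 scale}
ratings = {
    "ana":  {"book_1": 5, "book_2": 4, "book_3": 1},
    "ben":  {"book_1": 5, "book_2": 5, "book_4": 4},
    "cleo": {"book_3": 5, "book_4": 2},
}

def similarity(u, v):
    """Share of co-rated items on which two users roughly agree (toy measure)."""
    shared = ratings[u].keys() & ratings[v].keys()
    if not shared:
        return 0.0
    return sum(1 for i in shared if abs(ratings[u][i] - ratings[v][i]) <= 1) / len(shared)

def recommend_collaborative(user):
    """Recommend an unseen item that the most similar other user liked."""
    others = [v for v in ratings if v != user]
    neighbour = max(others, key=lambda v: similarity(user, v))
    unseen = [i for i, r in ratings[neighbour].items()
              if i not in ratings[user] and r >= 4]
    return unseen[0] if unseen else None
```

Here `ana` and `ben` agree on the books they both rated, so `ana` is recommended `book_4`, which `ben` liked; this "people like you liked it" logic is exactly what feeds the bandwagon perception described above.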
Lecture 3 – 19/04/2022
POLITICAL COMMUNICATION
Voter persuasion-> not something new; it has existed for as long as politics itself. But technology
creates new dynamics thanks to algorithms and big data.
Computational politics: applying computational methods to large datasets derived from online and
offline sources for conducting outreach, persuasion and mobilization.
Big data: these days there has been a massive increase in the amount and variety of data per
individual. Big data, like a microscope, allows us to zoom in on someone's personality and to
see more detail, but also, like a telescope, allows us to zoom out and see the bigger picture.
Emerging computational methods: there have been developments in storage and database
systems, which is crucial for engaging in computational politics, since there is a lot of data
that needs to be stored. Algorithms also allow the extraction of semantic information from
written texts, as well as social network analysis and correlational data analysis.
Modeling: predicting new information through computational data analysis; predicting individual
attributes accurately from people's online information, without asking the voter anything and even
without their knowledge. This allows for more subtle persuasion, since the voter does not know
about the modeling.
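The modeling idea can be sketched as scoring a voter's passively observed behavior against learned weights. Everything here is hypothetical: the trait, the page names, and the weights stand in for what a real model would learn from training data:

```python
# Hypothetical learned weights: how strongly each observed "like"
# signals the (invented) trait "outdoorsy"
weights = {
    "liked_hiking_page": 0.8,
    "liked_outdoor_gear": 0.6,
    "liked_cooking_page": -0.3,
}

def predict_outdoorsy(observed_likes, threshold=0.5):
    """Infer a trait from passively observed likes, without asking the voter."""
    score = sum(weights.get(like, 0.0) for like in observed_likes)
    return score > threshold
```

The voter never answers a question; the prediction is made entirely from behavior they may not know is being observed, which is what makes the resulting persuasion subtle.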
Behavioral Science: the idea that political communication used to be based on rationality, but now
draws on models and theories of how to persuade and influence people psychologically, aiming at
their gut feelings and emotions.
Experimental science: stepping away from "gut feeling"; because technology has become
cheaper, politics has engaged in large, real-time field experiments.
Power of algorithmic platforms: the rise of social media with really powerful algorithms that
curate organic content (proprietary algorithms are opaque, a black box). This is problematic
because users see different political content and information.
Political Microtargeting: from mass communication to personal communication. Not everyone gets
to see the same content.