TOPIC: ALGORITHMIC PERSUASION IN THE DIGITAL SOCIETY
1: INTRODUCTION TO ALGORITHMS AND THE DIGITAL SOCIETY

VIDEOS


WHAT EXACTLY IS AN ALGORITHM? ALGORITHMS EXPLAINED (BBC IDEAS)

Online definition: a process or set of rules to be followed in calculations or other problem-solving operations, especially by a
computer

“A set of instructions that enable a computer programme to put together different sources of information and generate a result,
like a recipe” (Professor Victoria Nash)

“Algorithms are sets of instructions that use data. Algorithms can learn from other algorithms, and they can create their own instructions. The basis is the same for all algorithms: data goes in, goes through the instructions, and a result comes out.” (Dr Bernie Hogan)
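
To make that “data in, instructions, result out” recipe concrete, here is a minimal illustrative sketch in Python (a hypothetical example, not from the course material):

```python
# A minimal algorithm in the "recipe" sense described above:
# data goes in, a fixed set of instructions runs, a result comes out.
# (Illustrative sketch only.)

def average_grade(grades):
    """Follow the same instructions for any input list of grades."""
    total = 0
    for grade in grades:          # instruction 1: add up every data point
        total += grade
    return total / len(grades)    # instruction 2: divide by the count

print(average_grade([7.5, 8.0, 6.5]))  # data in -> 7.33... comes out
```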

Coding vs algorithms: “Coding is writing algorithms in a language that the computer can understand, so that the computer can run the instructions for you. The big benefit of algorithms compared to humans is that you don’t have human error. The computer goes through the instructions, and that’s all it knows how to do.” (Isabelle Maccabee)

“It’s possible for algorithms to be taking our jobs. They can also deskill humans if we become too dependent upon them and too trusting of them. But on the flip side, they can be hugely beneficial and useful: speeding up decision-making, making whole processes efficient, maybe spotting things that we might not have spotted ourselves. So we mustn’t be frightened of them; we must just use them in the correct manner.” (Dr Allison Gardner)

In some parts of the world, algorithms are now used in the criminal justice system, in social care and in credit checks. They’re prolific: machines making decisions that directly affect our human lives, not just the adverts that we see on the internet or the people we match with on dating apps. The question for society isn’t the algorithm itself, but who controls it and where the data that goes into it comes from.


ALL HAIL TO THE ALGORITHM - A FIVE-PART SERIES EXPLORING THE IMPACT OF ALGORITHMS ON OUR DAILY LIVES (AL JAZEERA)

These invisible codes are playing an increasingly important role in our lives. They are just as much a part of our modern infrastructure
as buildings or roads. People might assume that because it is done by a computer, the result is objective and correct. The reality
is that algorithms are trained on past human decisions and built by fallible humans themselves. We talk about accuracy, we talk
about fairness, but we don’t talk about justice.

1) ‘Trust me’ – I’m an algorithm
Robot-debt: In Australia, the government set up an algorithm to detect inconsistencies among welfare recipients. A human employee used to check these flagged inconsistencies, but this check was removed in 2016. While a human was still checking them, there were 20,000 discrepancy notices a year; once the human employee was removed, it went up to 20,000 a week. As a result, hundreds of thousands of people were issued false debts. The Australian government disagrees and claims that the algorithm is working. However, it is not able to provide people with the reasons and calculations behind their debt, indicating a lack of transparency.
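
The episode does not detail how the system worked, but its logic can be caricatured as an automated discrepancy check with the human review step removed. The sketch below is purely hypothetical: the field names and the tolerance threshold are invented for illustration, and this is not the actual Australian system.

```python
# Hypothetical caricature of an automated discrepancy check like the one
# described above. Field names and threshold are invented for illustration.

THRESHOLD = 100  # assumed tolerance, in dollars

def flag_discrepancies(records):
    """Issue a debt notice whenever two income figures disagree,
    with no human checking whether the mismatch has an innocent cause."""
    notices = []
    for person in records:
        gap = person["estimated_income"] - person["reported_income"]
        if gap > THRESHOLD:
            notices.append((person["name"], gap))  # auto-issued, unreviewed
    return notices

people = [
    {"name": "A", "reported_income": 12000, "estimated_income": 15000},
    {"name": "B", "reported_income": 20000, "estimated_income": 20050},
]
# [('A', 3000)] -> becomes a false debt if the estimate, not the report, is wrong
print(flag_discrepancies(people))
```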




Virginia Eubanks, who studies the automation of public services in the USA, points to developments in the late 60s and 70s with the civil rights movement, when Black people and unmarried women could newly request financial aid from the government. While technology was touted as a way to distribute financial aid efficiently, it quickly became a tool to limit the number of people receiving financial aid. These systems are often thought of as natural and inevitable upgrades, but in fact they are systems that make really important, consequential political decisions for us. They are needlessly complex systems that are incredibly stigmatising and emotionally very difficult, and they further divert people from getting the resources that they need.

Correctional Offender Management Profiling for Alternative Sanctions (COMPAS): an algorithm being used in courtrooms across the USA to assist judges during sentencing and help them assess who should be released and who should be detained pending trial. COMPAS was brought in to offset inconsistencies in human judgement, the assumption being, of course, that a piece of code would always be less biased and less susceptible to prejudice. However, COMPAS has faced several criticisms: primarily accusations of racial bias, inaccuracy and lack of transparency. The algorithm is a black box: impenetrable, unquestionable, and its decisions are impossible to appeal. The Supreme Court deemed it beneficial, an ‘equal playing field of “no one knows”’, since no one has power over it. Some argue that we need transparency built into this system.
So, can we trust algorithms? It’s not so much about whether algorithms are trustworthy; it’s more about the quality of the data that feeds them and the objectives of those designing and controlling them. Human biases, human imperfections: that’s what we see reflected in our algorithms, and without better oversight we risk reinforcing our prejudices and social inequalities.

2) ‘Like me’ – The popularity and power of big tech
Big tech companies need us to like them so that we keep using them and thus generate data, which big tech companies are constantly chasing. Data colonialism: the extraction and appropriation of data. The two centres of power for data colonialism are the US and China. Chinese companies are everywhere in Africa (Huawei, PalmPay…). Internet.org is a project by Facebook meant to give internet access to people in areas with poorer connectivity. However, the website is always linked to Facebook, implying possible data extraction, so it might not be as selfless an initiative as Facebook tries to portray it. Our participation is expected, and our participation, we are told, is for our own good (just as colonialism was supposed to bring ‘progress’). Meanwhile, all of this extraction and capturing of data is happening in the background without us realising the consequences.

A lot of these companies aren’t African. So what is a Kenyan citizen supposed to do when an American company uses their data, sells their data, and markets it as a product without their consent and without any ability to appeal to a court system? Because of the narrative that any and all digital development is positive, asking critical questions is almost seen as being an enemy of progress, and the risk then is that the people in the communities you serve will miss out. Because of that nuanced and problematic notion, very few politicians (government actors) want to step up to the plate and tackle this proactively. Our data (our lives) isn’t really data until you create algorithms that can convert every single human being into a collection of bits that money can be made off of.

3) ‘Click me’ – The algorithm made me do it
Every day, Twitter, Facebook and Instagram are used to try to manipulate public conversation on behalf of politicians (bots). But the age of bots is over: their techniques are easily noticeable, so they got purged. Now people are hired to do the bots’ work (trolls or decks). Bot activity, even when carried out by humans, has recognisable characteristics: bots create tumours, while people create webs, as conversations grow until they form a web; bots cannot simulate these connections. Instead of censoring social media platforms, political parties in the countries where online manipulation happens have realised that it’s much more effective to flood them with bots, junk news and disinformation. These platforms can be used to sway opinions. We are seeing this shift away from automated bots to actual people doing this.
Ricardo and Alberto Flores were killed (set on fire by a mob) in Mexico because of a false rumour, spread on WhatsApp and Facebook, that they were child abductors. Their killing by the mob was live-streamed on Facebook. Websites can earn money quickly with every click on their page (e.g., websites that share fake sensational news, such as claims that child abductors are on the loose), so money is the motivation. What’s the best way to get views on your website? Facebook, which with its 2 billion users is one of the most efficient distribution tools for sharing content. There are still Facebook pages spreading fake news to generate clicks and money. Facebook has started to collaborate with external fact-checkers (‘third-party verifiers’) to prevent the spread of fake news, because algorithms can’t really determine what is fake news and what is not. If a piece of information is identified as fake news, Facebook flags it in users’ feeds (‘You might want to know that external fact-checkers have said this is fake news’) instead of simply removing the content. According to internet activist Alberto Escorcia, this is just a PR move made under pressure from society, media and social organisations; if it were up to them, they wouldn’t be doing much. Digital manipulation is now a global game. Nothing can change until our clicks are seen as more than engagement and money makers.

4) ‘Follow me’ – You can run, but you can’t hide
Biometrics: voice, fingerprint, face, behavioural biometrics (how regularly you post on Facebook, how you use your mouse, where you click on things), gait biometrics (how you walk). The United Nations Sustainable Development Goals call for legal identity for everyone by 2030, focusing on the more than 1 billion people who have no ID (no way to prove their identity): refugees, trafficked children, homeless people… “EyePay system” → in a Syrian refugee camp, refugees don’t need a bank card to pay for groceries; instead their iris is scanned, their balance is checked and the transaction is validated, all within seconds. They consider it better for them because otherwise they’d be afraid of their card getting lost or stolen. However, although the camp is protected by the UN, we could call it a “low rights zone”: people cannot opt out of this, and there is no one to explain to them whether it is legal. Really experimental, invasive technology is being tested on people who have some of the least rights and protections of anyone. According to the UN, the data is encrypted and thus well protected, and they regularly run assessments of the project’s data security to spot new threats and tackle them before they become an issue. Oxfam stated in 2015: “Given the number of unknowns around most effective operation and governance models and risks of this incredibly sensitive data falling into the wrong hands, we felt it was best not to become an early adopter.”
Activists say that facial recognition is one of the most dangerous technologies ever invented: it is not democratically accountable because there is no legal basis for it. The facial recognition camera systems put on the streets by the UK police don’t even work well: 96% of supposed matches are wrong, and women and people of colour account for most of these wrong matches. Basically, it doesn’t work very well on people who are not white men, which is quite a lot of the population. There are currently no laws regulating this technology in the UK. Some people think it’s an invasion of privacy.
Since 1999, about 70 to 80% of children in the UK have interacted with biometric technology at school.
The issue is the normalisation of these technologies. These extensive biometric systems amount to surveillance. A law was enacted in the UK in 2012 requiring consent to be obtained (and allowing it to be withdrawn). Biometrics are increasingly being used by private companies: shopping malls, recruitment agencies, online DNA and ancestry services, and even private security companies. All of them are taking and using our biometrics. Finding out how the technology is being used, what data is being stored and with whom it’s being shared, not just today but also in the future, involves a lot of probing, because these aren’t transparent systems. They’re being developed much faster than any regulation is created. We need to elevate the ethics of this technology right to the top of the agenda.

5) ‘Read me’ – Or just tap ‘I agree’!




Pop-ups and consent forms may give us the illusion that we have control over our data, but they are everywhere, they are annoying, and we don’t have time to read through them. Design is power, and every design decision makes a certain reality more possible. If you design the interface so that all the defaults are set to maximise exposure, that is, all the defaults set to consent instead of the opposite, people will be too ‘lazy’ to uncheck everything.
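
A toy sketch of what “defaults set to consent” means in practice (hypothetical setting names, not any real consent form):

```python
# Hypothetical consent form where every design default favours data
# collection: the user must actively uncheck each box to opt out.

DEFAULT_CONSENT = {
    "personalised_ads": True,     # pre-checked
    "share_with_partners": True,  # pre-checked
    "track_across_sites": True,   # pre-checked
}

def submit(form_overrides=None):
    """Most users tap 'I agree' without changes, so the defaults win."""
    settings = dict(DEFAULT_CONSENT)
    settings.update(form_overrides or {})
    return settings

print(submit())                               # the 'lazy' path: everything stays True
print(submit({"track_across_sites": False}))  # opting out takes deliberate effort
```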

Endless scroll is a form of tech hypnotisation that is key to what’s called the ‘attention economy’. Our attention is a finite currency in the online world, and it’s only when websites and apps have our attention that they have our business and our data. The companies’ biggest competitor isn’t another website; it’s sleep. With our time online, they find our weaknesses and exploit them. Sean Parker, one of the co-founders of Facebook, said: “How do we consume as much of your time and conscious attention as possible? And that means that we need to sort of give you a little dopamine hit every once in a while because someone liked or commented on a photo or post or whatever. And that’s gonna get you to contribute more content. It’s a social validation feedback loop. It’s exactly the kind of thing that a hacker like myself would come up with, because you’re exploiting a vulnerability in human psychology.” Skinner boxes were pivotal in demonstrating how design has the power to modify behaviour, and Skinner’s concept is at the heart of a lot of addictive design, such as casino slot machines. Smartphones are like slot machines: we all swipe down, pause and then wait to see what will appear. We are back to randomly scheduled rewards again; it’s that unpredictability that makes it so addictive. Our online experience is deliberately designed in exactly this way: design and algorithms are made to pull us in and form habits. Regulation has to come from the outside (not from the tech companies), and a public conversation about what is actually going on in these products needs to happen. And once we understand that, as a collective, what do we want to limit and constrain? Experts think there will be a shift from human-centered design to protective design.
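
The variable-reward mechanic described above can be sketched in a few lines of code (an illustrative toy, not any platform’s actual implementation; the reward probability is an arbitrary assumption): each pull-to-refresh delivers a reward only at random, and that unpredictability is what makes the loop habit-forming.

```python
import random

# Toy sketch of a variable-ratio reward schedule, the Skinner-box
# mechanic behind pull-to-refresh. Purely illustrative; the reward
# probability below is an invented value, not a real platform figure.

REWARD_PROBABILITY = 0.3  # hypothetical chance a refresh shows something new

def pull_to_refresh():
    """Simulate one refresh: sometimes a reward, sometimes nothing."""
    if random.random() < REWARD_PROBABILITY:
        return "3 new likes!"   # the unpredictable 'dopamine hit'
    return "nothing new"        # ...which keeps the user pulling again

for attempt in range(10):
    print(f"refresh {attempt + 1}: {pull_to_refresh()}")
```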
READING
ZAROUALI, B., BOERMAN, S.C., VOORVELD, H.A.M., & VAN NOORT, G. (2022). THE ALGORITHMIC
PERSUASION FRAMEWORK IN ONLINE COMMUNICATION: CONCEPTUALIZATION AND A FUTURE
RESEARCH AGENDA. INTERNET RESEARCH. HTTPS://DOI.ORG/10.1108/INTR-01-2021-0049
An algorithm is a set of step-by-step instructions computers are programmed to follow to accomplish certain tasks (Zhong, 2021). They can be used in many contexts and for various purposes; for example, banks may use them to approve or decline mortgage applications; government agencies may use them to allocate funding requests; and criminal justice systems may use them to evaluate who might be eligible for early release (Araujo et al., 2020; Fry, 2019). More precisely, they are used to filter enormous amounts of content and present personalized information, services and advertisements to online users (Ricci et al., 2015). This means that users are more influenced than ever by (partially) personalized and distinct streams of online content, which are largely based on their own (and/or those of similar others) past choices and preferences (Beer, 2017; Bucher, 2018; Ricci, 2015; Zarouali et al., 2021). As a result, algorithms are transforming online platforms into codified environments that expose users to the content that is likely to be most persuasive to them (Yeung, 2017). Concerns have arisen about the persuasive impact of such algorithms. These concerns include a fragmented public sphere, a higher likelihood of (voter or consumer) manipulation, an increase in attitudinal polarization, more privacy infringements, an increase in user surveillance and a loss of user autonomy (e.g., Cho et al., 2020; Susser, 2019; Tufekci, 2015; Zarouali et al., 2020a). The authors define algorithmic persuasion as any deliberate attempt by a persuader to influence the beliefs, attitudes and behaviors of people through online communication that is mediated by algorithms.
Algorithmic Persuasion Framework = a dynamic process
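
To make the filtering step described above concrete, here is a minimal, hypothetical sketch of preference-based content ranking. It is not the paper’s framework or any platform’s actual algorithm; the item structure, topic labels and scoring rule are all invented for illustration.

```python
# Hypothetical sketch: rank content by overlap with a user's past
# interests. Illustrates personalized filtering in general, not the
# actual mechanism of any platform or of the framework in the paper.

def rank_content(items, user_interests):
    """Score each item by how many of its topics the user engaged with before."""
    def score(item):
        return len(set(item["topics"]) & set(user_interests))
    return sorted(items, key=score, reverse=True)

feed = [
    {"title": "Election news", "topics": ["politics", "elections"]},
    {"title": "Sneaker ad",    "topics": ["fashion", "shopping"]},
    {"title": "Campaign ad",   "topics": ["politics", "advertising"]},
]

# A user whose history is political content gets political items first,
# i.e. the content likely to be most persuasive to them (cf. Yeung, 2017).
print(rank_content(feed, user_interests=["politics"]))
```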



