Summary Lectures Global Digital Cultures (Media and Culture)

Class notes
Lecture 1: Algorithms, Data, Music & Video
Readings
Hallinan, B., Striphas, T. (2016). Recommended for you: The Netflix Prize and the
Production of Algorithmic Culture. This reading focuses on the Netflix Prize, which was a
contest that ran between 2006 and 2009. For a $1 million prize, Netflix asked participating teams to improve the accuracy of its recommendation system by 10%. Hallinan and
Striphas explored how the various teams engaged with that challenge.

Karakayali, N., Kostem, B., & Galip, I. (2018). Recommendation Systems as Technologies of
the Self: Algorithmic Control and the Formation of Music Taste. This reading is about
recommendation systems as technologies of the self. It specifically focuses on last.fm, which
recommends music to fans. The website still exists but is now largely defunct.
It constitutes an example of how users develop taste through the platform’s algorithm.

Zhang, Q. & Negus, K. (2020). East Asian Pop Music Idol Production and the Emergence of
Data Fandom in China. This reading is about Chinese data fans, an extreme example of users trying to influence how content is curated on platforms.

Recommendation Systems
In online media, there is almost unlimited space to share content, which is not the case in
offline media (such as newspapers), where space is limited. Platforms tend to carry a lot of content; for a platform to remain user friendly, it must help users navigate all this content. This is where recommendation systems come in, and these are algorithmically determined.

When examining ordering practices of public visibility, a general distinction can be made
between editorial and algorithmic forms of curation.
- Editorial curation: done by humans drawing on cultural norms. Professional producers make judgments about what is relevant and valuable content and what is not. This is done by experts who have knowledge about the content being shared and who can make these value judgments. Although editorial curation is mostly associated with offline media, it still plays an important role on many platforms.
- Algorithmic curation: platforms also rely on data and algorithms when recommending
content. Algorithms can be understood as “encoded procedures for transforming input
data into a desired output, based on specified calculations” (Gillespie 2014, 167).
Typically, content and users with a high ranking, as calculated by the algorithm, appear at the top of a user's feed (a minimal sketch of this sorting logic follows below).
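
To make the idea of ranking by "specified calculations" concrete, the following is a minimal sketch in Python. The items, engagement signals, and the 50/50 weighting are invented for illustration; actual platform scoring functions are proprietary and far more complex.

items = [
    {"title": "Video A", "likes": 120, "watch_time_min": 340},
    {"title": "Video B", "likes": 45, "watch_time_min": 900},
    {"title": "Video C", "likes": 300, "watch_time_min": 150},
]

def score(item):
    # Encoded procedure: transform input data (engagement signals)
    # into a desired output (a single ranking score).
    return 0.5 * item["likes"] + 0.5 * item["watch_time_min"]

# Content with a high ranking appears at the top of the feed.
feed = sorted(items, key=score, reverse=True)
for item in feed:
    print(item["title"], score(item))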

When thinking of algorithmic and editorial curation, it is important to note that algorithmic
curation is based on calculations and thus is quite a technical process. Nonetheless, it is not
necessarily more objective or neutral than editorial curation, even though platforms often
claim to be more neutral and mechanical. In practice, platforms and algorithms are still
designed by humans and work based on human data. Research has illustrated how automated
decision-making frequently produces results that are deeply problematic: they are biased and discriminatory. It should also be observed that platform
companies design their algorithmic sorting practices and thus their recommendation systems
based on particular business models, which also feed into the way in which the
recommendation system is organized. Although editorial and algorithmic curation involve
different procedures, both are affected by problems of bias and discrimination, and both are shaped by business interests.

How do platforms and digital services such as Netflix algorithmically recommend content?
They do so by using two types of filtering: content-based filtering and collaborative filtering.
- Content-based filtering: recommending content based on the characteristics of the
content. Users are recommended content based on their particular interests, for example those stated in their profiles, their favorite songs, favorite movies, etc. These are then matched with similar characteristics in the songs and movies available on the platform. This means that the platform needs to analyze that content.
Spotify does raw audio analysis on the content that is offered through the platform
and Netflix tags all the content that is available on the platform.
- Collaborative filtering: recommending content based on what other users like. The
content is recommended based on similar interests of large numbers of users.

In practice, platforms use a combination of content-based filtering and collaborative filtering.
The platform determines through descriptions or tags what the content is about, thus enabling
content-based filtering. But it also considers what aggregates of users watch, listen to, share,
engage with, etc., thus enabling collaborative filtering.
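
As a toy illustration of both filtering types and how a platform might blend them, here is a short Python sketch. The catalog, tags, user interests, other users' likes, and the equal weighting are all invented assumptions, not how Netflix or Spotify actually compute recommendations.

# Content-based filtering: match item characteristics (tags)
# against a user's stated interests.
catalog = {
    "Movie X": {"thriller", "crime"},
    "Movie Y": {"comedy", "romance"},
    "Movie Z": {"thriller", "sci-fi"},
}
user_interests = {"thriller", "sci-fi"}

def content_score(title):
    # Overlap between the item's tags and the user's interests.
    return len(catalog[title] & user_interests)

# Collaborative filtering: recommend what other users engaged with.
other_users_likes = [
    {"Movie X", "Movie Z"},
    {"Movie Z"},
    {"Movie Y"},
]

def collaborative_score(title):
    # How many other users liked this item.
    return sum(title in likes for likes in other_users_likes)

# In practice, platforms combine both signals; the equal weighting
# here is an arbitrary assumption.
def hybrid_score(title):
    return 0.5 * content_score(title) + 0.5 * collaborative_score(title)

recommendations = sorted(catalog, key=hybrid_score, reverse=True)
print(recommendations)  # ['Movie Z', 'Movie X', 'Movie Y']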

These algorithmic recommendations, which connect cultural producers, content advertisers,
and users, build on platform data infrastructures. This has led John Cheney-Lippold to argue that "when we are made of data, we are not ourselves in terms of atoms. Rather, we are who we are in terms of data" (Zhang and Negus 2020, 495). Cheney-Lippold emphasizes that it is our data that is being watched and not ourselves. Rather than tracking who we are as biological beings, for example, platforms make inferences about who we are in terms of the data that is collected about us. Cheney-Lippold's research reveals that, given that these data and algorithms constantly change, what the platform infers about us constantly changes as well.

More fundamentally, platforms do not just connect content, advertisers, and end-users based on traditional demographic categories, but on completely new categories that emerge from the data. Hallinan and Striphas remark on this. When they examined
the participating teams of the Netflix Prize, they noticed that the teams stopped looking at personal demographic information as a useful way to recommend content. It was too crude and not precise enough, the teams argued. This is contrary to the conventional wisdom of marketing, in
which the starting point is always the demographic characteristics (gender, race, age, and so
on). The participating teams were much more focused on a wide variety of other signals,
which they tried to compute through new algorithms.

A 13-minute video produced by the Wall Street Journal, "How TikTok's Algorithm Figures You Out," is about TikTok creating personalized categories based on an individual's
particular interests and moods and how these are derived from tracking the viewing habits of
users. This then leads to personalized genres and fragmentation of audiences. The video also
points to the danger of users disappearing into rabbit holes of ever more extreme content.

User Practices
This week's readings argue that users are not just data points but also have agency; they can influence what happens and do so with purpose, with a particular objective in mind.

Data fans reveal how users can play a role in data systems. Zhang and Negus (2020)
describe a data fan as follows: “A data fan understands how their online activities are
monitored and tracked to produce metrics that quantify variables measuring the popularity of
performers; semantic information cataloguing the meanings associated with performers; and sonic data that register the most frequently accessed musical characteristics of tracks. Data
fans adopt individual and collective strategies to deliberately intervene and to influence the
statistical, sonic, and semantic data collected by and reported on digital platforms and social
media. Fans recognize their importance as data and use this to benefit the musicians or idols
they are following, and to enhance their sense of achievement and agency” (494). Central to
these efforts are music charts, where data fans concentrate their activity on particular artists.

Zhang and Negus claim that much of this can be considered as data teamwork, which can be
understood as follows: “The data team are a group of dedicated, skilled fans with extensive
knowledge of digital platforms, and who understand the technical processes driving
algorithms and enabling loopholes. The team collect data from various platforms and prepare
strategies for intervening, guiding other data fans who may not have such technical
knowledge" (505). Again, this involves very thorough knowledge of how a platform curates content. This suggests that the social practices of fandom have changed quite a bit historically. Users are not just datafied and profiled, but also have agency in shaping the
process of cultural curation.

Cultural Producers
Cultural producers increasingly depend for their livelihood on the production, distribution, and monetization of content through platforms. Many cultural producers have been forced to adapt to the increasingly central role of data and algorithms. Cultural producers are attuned to platform algorithms; they are conscious of how their content is datafied and curated based on user engagement. This consciousness and response by cultural producers has been called the visibility game.

Research by Kelley Cotter looks at creators who produce, distribute, and monetize content on social media platforms. Cotter suggests that these producers play the visibility game. The visibility game is in a way similar to the practices of Chinese data fans: creators use a range of strategies to affect the visibility of their content, trying to shape the data streams around it. Cotter examined the online discussions among Instagram influencers, which is where she arrived at the notion of the visibility game. Cotter (2018) argues that
“Influencers might be reframed as ‘playing the visibility game,’ which shifts focus from a
narrative of a lone manipulator to one of an assemblage of actors. Within the visibility game,
there is a limit to the extent that algorithms control behavior. Influencers’ interpretations of
Instagram’s algorithmic architecture—and the visibility game more broadly—influence their
interactions with the platform beyond the rules instantiated by its algorithms. Re-directing
inquiries toward the visibility game, rather than narratives of individuals ‘gaming the
system,’ makes present the interdependency between users, algorithms, and platform owners
and demonstrates how algorithms structure, but do not unilaterally determine user behavior”
(896). Again, agency is ascribed to cultural producers.

In practice, the visibility game entails influencers developing insight into how these platforms organize content. Similar to data fans, creators develop quite sophisticated knowledge of how this curation works, which mostly revolves around engagement. Creators then try to influence how their content becomes visible and try to maximize engagement and followers. Historically, they have done so through a wide variety of methods, one of which is bots, a practice now forbidden by Instagram. In recent years, influencers have shifted to another strategy called "pods": groups of influencers who agree to engage with each other's posts. From the perspective of the platform this can be seen as a form of manipulation and, as Cotter calls it, a form of gaming the system. Cotter
