Lecture 1
The Society of Algorithms: Refers to the combination of massive datasets and algorithms
used to sort, organize, extract, or mine information across major social institutions. It
explores the promises of technical and economic efficiency, fairness, and the impact of
algorithms on social structures and subjectivities.
Critical AI Studies: A sociological and cultural approach to algorithms, focusing on their
impact on society, social structures, and subjectivities. It highlights the non-neutrality of
algorithms and cautions against exaggerated promises of AI while acknowledging the
coupling between algorithmic processes and societal structures.
Artificial Intelligence: Challenges the notion of AI as both artificial and intelligent,
emphasizing the myth of technology's autonomy. It explores the concept of "enchanted
determinism" and the implications of AI's role in shaping social conduct and arranging social
worlds.
Algorithms: Defined as sets of instructions for solving problems or completing tasks,
guiding computers on what to do. It distinguishes between rule-based algorithms and
machine-learning algorithms, which rewrite themselves based on data and patterns.
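The distinction between rule-based and machine-learning algorithms can be sketched in a few lines of Python. The spam-filter framing below is an illustrative assumption, not an example from the lecture:

```python
# Rule-based: a human writes the decision rule explicitly.
def rule_based_is_spam(message):
    banned = {"free money", "click now"}
    return any(phrase in message.lower() for phrase in banned)

# Learning-based: the rule is derived from labeled examples instead.
def learn_spam_words(examples):
    # examples: list of (message, is_spam) pairs
    spam_words, ham_words = set(), set()
    for message, is_spam in examples:
        (spam_words if is_spam else ham_words).update(message.lower().split())
    # Words seen only in spam messages become the learned "rule".
    return spam_words - ham_words

learned = learn_spam_words([
    ("free money now", True),
    ("meeting at noon", False),
])

def learned_is_spam(message, learned_words=learned):
    return any(word in learned_words for word in message.lower().split())
```

In the first function the rule is fixed by its author; in the second, feeding in different training data produces a different rule, which is the sense in which such algorithms "rewrite themselves."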
Machine Learning: An approach to AI where algorithms learn patterns from data to
recognize and predict outcomes. It involves the automation of pattern recognition and logical
reasoning. Supervised machine learning requires labeled data, and machine-learning
algorithms are described as brute force approximations rather than exact mathematical
analysis.
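The idea of machine learning as brute-force approximation rather than exact analysis can be illustrated with a toy supervised example (an assumed illustration, not from the lecture): fitting y ≈ w·x by searching over candidate parameters instead of solving for w analytically.

```python
# Labeled training data: inputs paired with known outputs (true slope is about 2).
data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 8.0)]

def squared_error(w):
    return sum((y - w * x) ** 2 for x, y in data)

# "Learning": try many candidate parameters and keep the one with the lowest error.
candidates = [i / 100 for i in range(0, 501)]   # 0.00 .. 5.00
best_w = min(candidates, key=squared_error)
print(round(best_w, 2))  # close to 2 -- an approximation, not an exact solution
```

Real training algorithms search far larger parameter spaces far more cleverly, but the logic is the same: minimize error over labeled data rather than derive the answer mathematically.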
Algorithmic Bias: Refers to unfair or discriminatory outcomes (for example, gender, race, ability, or class discrimination) produced by algorithmic decision-making processes. It can be caused by biased training data, flawed algorithm design, or unequal representation in data sources.
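How unequal representation in training data produces unequal outcomes can be shown with a deliberately crude toy model (an assumed example): a trivial "classifier" that always predicts the most common label in its training set.

```python
from collections import Counter

# Skewed training data: group A is heavily overrepresented.
# Each record: (group, true_label)
training = [("A", "approve")] * 90 + [("B", "deny")] * 10

majority_label = Counter(label for _, label in training).most_common(1)[0][0]

def predict(_record):
    return majority_label  # ignores the individual entirely

# Error rate per group on the same data:
def error_rate(group):
    records = [r for r in training if r[0] == group]
    wrong = sum(predict(r) != r[1] for r in records)
    return wrong / len(records)

print(error_rate("A"), error_rate("B"))  # perfect for A, always wrong for B
```

The model is "accurate" on 90% of cases overall while failing the underrepresented group entirely, which is the basic mechanism behind many real cases of algorithmic bias.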
Algorithmic Reason: Refers to the use of algorithms to govern the behavior of individuals and populations and to draw social and political boundaries. It involves arranging and managing things so that people are guided in specific ways, and it emphasizes the analytics of government and the impact of algorithms on social norms, power dynamics, and decision-making processes. Algorithmic reason raises concerns about agency, responsibility, accountability, and the potential for epistemic injustice in algorithmic decision-making.
Black-Box Effect: Refers to the lack of transparency into how AI models arrive at their outputs and why certain models work while others do not. It raises concerns about agency, responsibility, accountability, and the right to an explanation, leading to a context of epistemic injustice.
Correlation is not Causation: Emphasizes the need to distinguish between data
(information about something happening) and knowledge (information about why something
happened). It questions the reliance on correlations in machine learning without an
understanding of causal relationships.
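A classic toy illustration of this point (an assumed example, not from the lecture): ice-cream sales and drowning incidents are both driven by temperature, so they correlate strongly even though neither causes the other.

```python
import random, math

random.seed(0)
temperature = [random.uniform(0, 35) for _ in range(200)]
ice_cream = [2 * t + random.gauss(0, 3) for t in temperature]    # caused by temperature
drownings = [0.5 * t + random.gauss(0, 3) for t in temperature]  # also caused by temperature

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

print(round(pearson(ice_cream, drownings), 2))  # high correlation, no causal link
```

A machine-learning model trained on this data would happily use drownings to "predict" ice-cream sales; the correlation is real, but the causal story lives in the hidden third variable.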
Lecture 2
Critical AI Studies: This concept emphasizes the sociological and cultural aspects of
algorithms and highlights that algorithms are not neutral in their organization and ordering of
the world.
Automation and Labour: This concept discusses the impact of automation on employment
and labor. It questions the notion of large-scale technological unemployment and focuses on
the unequal distribution of work, time, and money. It also explores how profit-driven
development of digital technologies may lead to "worse" jobs rather than less work.
Algorithmic Violence: This concept delves into the negative consequences of algorithms
and their potential for perpetuating violence or harm. It highlights the need to consider the
implications of algorithms on individuals and society.
AI Activism: This concept explores the role of activism in shaping the development and
deployment of artificial intelligence. It focuses on advocating for ethical and responsible AI
practices, as well as addressing the social and political implications of AI technologies.
Hidden Labour: This concept sheds light on the underpaid workers involved in building,
maintaining, and testing AI systems, often referred to as "ghost work" or "human-fueled
automation." It highlights the exploitative aspects of AI development and the reliance on
unpaid or low-paid labor.
Data Tagging: Data tagging is a process in which human workers annotate or label data to
provide meaningful information for training machine learning algorithms. It involves assigning
tags or labels to specific data points to help algorithms understand and categorize the data
accurately.
Content Moderation: Content moderation involves reviewing and monitoring
user-generated content (such as text, images, or videos) on online platforms to ensure
compliance with community guidelines, policies, or legal regulations. Human moderators
assess and filter content, removing or flagging inappropriate, harmful, or violating content.
Digital Taylorism: Digital Taylorism refers to the application of Taylorism principles (scientific
management principles developed by Frederick Winslow Taylor) in the context of digital
labor. It involves the use of algorithmic technologies to implement elements of rationalization,
standardization, decomposition, surveillance, and measurement of labor. Digital Taylorism
aims to increase efficiency and control over labor processes, often resulting in deskilling and
the precise monitoring of workers' activities.
Algorithmic Management of Labour: This concept examines how algorithms are used to
manage and control labor processes. It discusses topics such as time management,
decomposition of tasks, surveillance, standardization, flexibility, and constant assessment. It