Reading Materials/Articles: Language Technology & Society
Literature per Subject/Lecture Week

Week 1: Introduction, 'smart' technology, ethics and AI
- Article 1: Birhane (2020)
- Article 2: Friedman, Kahn & Borning (2008)
- Article 3: Video by Launchbury (2017)

Week 2: How does 'smart' technology work?
- Article 4: Barocas et al. (2020)
- Article 5: Mitchell (2021)
- Article 6: Video by Mitchell (2020)

Week 3: Language technology: NLP, NLG, chatbots, speech technology and virtual assistants
- Article 7: Video by CrashCourse (2021)
- Article 8: Tatman (2017)
- Article 9: Hovy, D., & Spruit, S. L. (2016)

Week 4: Hype and technology adoption
- Article 10: Shaikh & Cruz (2022)
- Article 11: Van den Bogaert, Geert, & Rutten (2019)

Week 5: Panic and resistance
- Article 12: Orben (2021)
- Article 13: Slechten, Courtois, Coenen & Zaman (2021)
- Article 14: Schmuck & von Sikorski (2020)

Week 6: A theoretical vocabulary for understanding the impact and role of (language) technology
- Article 15: Greenhalgh & Stones (2010)
- Article 16: Bandura (2001)
- Article 17: Claypool, O'Mally & DeCoster (2012)
Article 1: Birhane, A. (2020). Fair Warning. Real Life.
The tech industry currently holds unprecedented power and influence. Tech companies continue to
invest handsomely in creating an attractive image of the tech industry, the hackers behind the code,
and the technologization of society in general. What emerges from this is a portrait of technology as
inevitable progress that must, despite its inevitability, be fully embraced without hesitation. In this
discourse, AI allows humans to surpass their own limitations, biases and prejudices. This view prefers
to imagine worst-case scenarios in science-fiction terms: Will AI take over humanity?
The truth, however, is that the tech industry hardly concerns itself with human welfare and justice.
Its practices have been starkly opposed to protecting the welfare of society’s most vulnerable (e.g.,
prohibiting employees from protesting against exploitation of the LGBT community, treating low-paid
workers poorly, etc.). The tech industry has always tended to serve existing power relations. In place
of fundamental social changes, the computer allows technical solutions to be proposed that would
allow existing power hierarchies to remain intact.
Weizenbaum recognized this pattern decades earlier. His turn toward critique started with the
reception of ELIZA, which he built to imitate Rogerian therapy (an approach that often relies on
mirroring patients’ statements back to them). Although he was explicit that ELIZA had nothing to do
with psychotherapy, others hailed it as a first step toward finding a potential substitute for
psychiatrists. Weizenbaum’s colleagues enormously exaggerated ELIZA’s capabilities, with some
arguing that it understood language. And people interacting with ELIZA, he discovered, would open
their hearts to it. He would ultimately criticize the artificial intelligence project as “a fraud that played
on the trusting instincts of people.” Computer scientists then (and now) shared the fantasy that
human thought could be treated as entirely computable, but in one of his books Weizenbaum
insisted on crucial differences between humans and machines, arguing that there are certain
domains that involve interpersonal connection, respect, affection, and understanding into which
computers ought not to intrude, regardless of whether it appears they can.
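The mirroring that made ELIZA convincing is technically very simple. As a concrete illustration, here is a minimal Python sketch of an ELIZA-style responder, assuming a handful of invented patterns and canned replies rather than Weizenbaum's original script:

```python
import re

# A toy ELIZA-style exchange: match a statement against a few patterns,
# swap first- and second-person words, and mirror the rest back as a
# question. The rules below are invented for illustration; they are not
# Weizenbaum's original script.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

def reflect(phrase: str) -> str:
    """Swap perspective so 'my mother' becomes 'your mother', etc."""
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in phrase.lower().split())

RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r".*\bmother\b.*", re.IGNORECASE), "Tell me more about your family."),
]

def respond(statement: str) -> str:
    statement = statement.rstrip(".!?")
    for pattern, template in RULES:
        match = pattern.match(statement)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # No rule matched: fall back to a content-free prompt.
    return "Please go on."

print(respond("I feel ignored by my friends."))  # Why do you feel ignored by your friends?
print(respond("I am unhappy."))                  # How long have you been unhappy?
print(respond("It rained today."))               # Please go on.
```

Even this toy version shows why users could project understanding onto the program: the “empathy” is purely syntactic pattern matching, with no model of the conversation at all.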
Upon its advent at the 1956 Dartmouth Workshop, “artificial intelligence” was conceived as a project
tasked with developing a model of the human mind. Key figures attended the conference and played
a central role in developing AI as an academic field. Inspired by the idea of the Turing Machine and
enabled by computer programming, a machine to simulate human intelligence seemed a natural next
step. But as the AI project has progressed, it has gradually become less about simulating the human
mind and more about creating financial empires. Although the boundaries between AI as a model of
the mind and AI as surveillance tools are blurry in the current state of the field, there is no question
that AI is a tool for profit maximization.
In his book, Weizenbaum insists that “humans and computers are not species of the same genus,”
since humans “face problems no machine could possibly be made to face. Although we process
information, we do not do it the way that computers do.” Beyond being skeptical about the
prospects for an “intelligent machine,” Weizenbaum also recognized how computers were beginning
to be invoked as an easy way out of complex, contingent, and multifaceted challenges. Today, this
flawed approach of turning to computational tools such as software, algorithms, and apps has
become the default mode of thinking across Western society.
In the education field alone, computers and other surveillance tools are put forward as a solution
to the student dropout crisis and to the supposed lack of student attentiveness, driven by a
pervasive attitude that aggressively pushes the computer as an inevitable part of learning.
Implementing these technologies bypasses confronting ugly social realities. These factors could be
better understood not through more surveillance and pervasive tech but by actually talking to the
receivers (e.g., the students in the education example above) directly. But when one is deeply steeped in tech-
solutionism discourse, the first step of consulting those at the receiving end of some technology is
not so obvious. It might even seem irrelevant. Many of the “problems” in the social sphere are
moving targets, challenges that require continual negotiations, revisions, and iterations – not static
and neat problems that we can “solve” once and for all. This attitude is so well ingrained within the
computational enterprise, a field built on “solving problems,” that every messy social situation is
packaged into a neat “problem → solution” approach.
In 1972, in an article for Science, Weizenbaum called attention to how the AI field masked its
fundamental conservatism with a blend of optimistic cheerleading and pragmatic fatalism. This same
pattern persists in many articles about emerging technologies: The potential achievements are
applauded, while the dangers are regarded as further proof that the technology is desperately
needed, along with more generous societal support. Today, the rhetorical pattern that Weizenbaum
decried looks like this among AI’s current boosters: A machine learning model is put forward as doing
something better than humans can or offering a computational shortcut to a complex challenge. It
receives unprecedented praise and coverage from technologists and journalists alike. Then critics will
begin to call attention to flaws, gross simplification, inaccuracies, methodological problems, or
limitations of the data sets. But the outrage and calls for caution and critical assessment will be
drowned out by the promotion of the next great state-of-the-art tech “invention,” by cries to
emphasize the potential such tech holds, and by the championing of further technological solutions
to problems brought about by the previous tech solutions in the first place.
Among the standard justifications for developing and deploying harmful technology is the claim of
its inevitability: it’s going to be developed by someone, so it might as well be me. Another
justification is to dismiss the limitations, problems, and harms as minor issues compared with the
advantages and potential. Adopting an AI or machine-learning “solution” rather than a more
comprehensive approach to social issues remains widespread. It can be seen in “technology for
social good” initiatives; it is evident in automated decision-making in welfare systems; it appears in
algorithmic approaches to mental health issues; and it is becoming an integral part of criminal
justice systems and policing. Technology has become the almighty hammer with which to bash
every conceivable nail.
Summarized (bullet points)
- The tech industry is on the rise and very influential;
- AI boosterism: hype or (excessive) promotion of AI as a solution to societal problems;
- It feels like the growing influence of technology is unstoppable
o … even though when you look at individual cases, they’re really not there yet.
o … even though there are clear ethical issues with many proposals.
- “It’s going to be developed by someone, so it might as well be me” is a bad argument.
- History repeats itself, and the essay shows how.
In addition, this is a typical essay on the impact of computers, built up like this: On the one hand…
(computers have done many good things for society); on the other hand… (they have also brought
us problems, e.g., privacy, unemployment, etc.). But look at what computers have brought us!
Surely we can fix this (suggestions for technological fixes follow). Give us more money and we can
do more!
Article 2: Friedman, B., Kahn, P. H., & Borning, A. (2008). Value
sensitive design and information systems. The handbook of
information and computer ethics, 69-101.
Value Sensitive Design is a theoretically grounded approach to the design of technology that
accounts for human values (e.g., privacy, ownership/property, physical welfare, freedom from bias,
autonomy, etc.) in a principled and comprehensive manner throughout the design process.
Aim and structure of the study:
The goal of this paper is to provide an account of Value Sensitive Design, with enough detail for other
researchers and designers to critically examine and systematically build on this approach. We begin
by sketching the key features of Value Sensitive Design, and then describe its integrative tripartite
methodology, which involves conceptual, empirical, and technical investigations, employed
iteratively. Then we explicate Value Sensitive Design by drawing on three case studies. One involves
web browser cookies, implicating the value of informed consent. The second involves high-definition
television (HDTV) plasma display technology in an office environment to provide a ‘window’ to the outside
world, implicating the values of physical and psychological well-being and privacy in public spaces.
The third involves user interactions and interface for an integrated land use, transportation, and
environmental simulation, to support public deliberation and debate on major land use and
transportation decisions, implicating the values of fairness, accountability, and support for the
democratic process, as well as a highly diverse range of values that might be held by different
stakeholders (e.g., environmental sustainability, opportunities for expansion, etc.). We conclude with
direct and practical suggestions for how to engage in Value Sensitive Design.
Key features of Value Sensitive Design
In the current work ‘value’ refers to what a person or group of people consider important in life. In
this sense, people find many things of value, both lofty and mundane: their children, friendship,
morning tea, education, art, a walk in the woods, nice manners, good science, a wise leader, clean
air.
In the 1950s, during the early period of computerization, cyberneticist Norbert Wiener argued that
technology could help make us better human beings and create a more just society. But for it to do
so, he argued, we have to take control of the technology. More recently, supporting human values
through system design has emerged within at least four important approaches. (I) Computer Ethics
advances our understanding of key values that lie at the intersection of computer technology and
human lives. (II) Social Informatics has been successful in providing socio-technical analyses of
deployed technologies. (III) Computer Supported Cooperative Work (CSCW) has been successful in
the design of new technologies to help people collaborate effectively in the workplace. (IV) Finally,
Participatory Design substantively embeds democratic values into its practice.
Integrative Tripartite Methodology of Value Sensitive Design
With Value Sensitive Design, an artifact (e.g., system design) emerges through iterations upon a
process that is more than the sum of its parts. Nonetheless, the parts provide us with a good place to
start. Value Sensitive Design builds on an iterative methodology that integrates conceptual,
empirical, and technical investigations.
Conceptual investigations
Value Sensitive Design takes up questions like ‘who are the direct and indirect stakeholders?’, ‘how
are both classes of stakeholders affected?’, and ‘what values are implicated?’ under the rubric of conceptual
investigations. In addition, careful working conceptualizations of specific values clarify fundamental