Summary of articles – AI & Society: Fixing Algorithmic Decision Making
Elliott Chapter 1 – The complex systems of AI
A globalizing world of AI
• Approximately 40-50% of existing jobs are at risk from AI technology and automation in the next 15 to 20 years
• Definition of AI according to the ‘Industrial Strategy White Paper’ → “Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”
- Another key condition of AI is the capacity to learn from, and adapt to, new
information or stimuli
• AI refers to → “Any computational system which can sense its environment, think, learn and react in response (and cope with surprises) to such data-sensing”
Machine learning
• Machine learning is the process whereby computers execute tasks by learning from data and gathered information, in ways that draw on human intelligence and human decision making
- Toby Walsh → “Machine learning is an important part of computers that think. Programming all this knowledge ourselves, fact by fact, would be slow and painful, but we don’t need to do this, as computers can simply learn it for themselves”
• Through analysis of massive volumes of data, machine learning algorithms can
autonomously improve their learning over time
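A minimal Python sketch of this point (assuming the scikit-learn library, which the chapter does not mention): the same learning algorithm is trained on growing amounts of synthetic data, and its accuracy on unseen examples tends to improve, without any rules being programmed by hand.

    # Sketch: a model improves as it sees more data (scikit-learn assumed)
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a large labelled dataset
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for n in (50, 500, len(X_train)):  # growing amounts of training data
        model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"trained on {n:>5} examples -> test accuracy {acc:.2f}")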
Natural language processing
• Natural language processing (NLP) is a fundamental aspect of AI and encompasses all AI technologies related to the analysis, interpretation and generation of (text- and speech-based) natural language
- Examples are machine translation (Google Translate), dialogue systems (Google’s Assistant, Siri, Alexa) and automatic question answering
• More than 60% of internet traffic is now generated by machine-to-machine and person-to-machine communication
• Brian Christian’s argument → “Machine language is a kind of conversational puree, a recorded echo of billions of human conversations”
• Critique and limitations of NLP:
- NLP is only as good as the dataset underpinning it → if not appropriately trained and ethically assessed, NLP models can accentuate bias in the underlying datasets, resulting in systems that work to the advantage of some users over others
- NLP is currently unable to recognize which data or language is irrelevant, or socially or culturally damaging
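To illustrate the first limitation, here is a hypothetical toy example (scikit-learn assumed; the sentences and the group word “outsider” are invented for illustration): a text classifier trained on a skewed corpus learns to treat the group word as a negative signal, so the bias in the dataset is reproduced in the system’s output.

    # Hypothetical toy example: an NLP model is only as good as its dataset
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Skewed training corpus: "outsider" only ever appears with negative labels
    texts = [
        "the local candidate gave a great speech",     # pos
        "the local team did wonderful work",           # pos
        "the outsider candidate caused trouble",       # neg
        "the outsider group was blamed for problems",  # neg
    ]
    labels = ["pos", "pos", "neg", "neg"]

    model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)

    # Two otherwise identical, neutral sentences: only the group word differs,
    # yet the learned bias flips the prediction.
    print(model.predict([
        "the local resident spoke at the meeting",
        "the outsider resident spoke at the meeting",
    ]))  # -> ['pos' 'neg']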
Robotics
• Robotics has been characterized as the intelligent connection of perception to action in
engineered systems
• Robotics covers not only human-like machines, but also any kind of technological system that uses sensors such as cameras, thermal imagers or tactile and sound sensors to collect data
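A minimal sketch of that “connection of perception to action” in Python (the sensor and commands are invented placeholders, not from the chapter): perception is read, a decision is made, and an action is executed, in a loop.

    # Sense-think-act loop: the core control cycle of a robot, simulated here
    import random

    def read_distance_sensor():
        """Hypothetical sensor reading: distance to the nearest obstacle (m)."""
        return random.uniform(0.1, 3.0)

    def decide(distance_m):
        """'Think': map the perception to an action."""
        return "stop_and_turn" if distance_m < 0.5 else "drive_forward"

    def act(command):
        """'Act': on real hardware this would drive motors; here we just log."""
        print(f"executing: {command}")

    for _ in range(5):  # on a real robot this loop runs continuously
        act(decide(read_distance_sensor()))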
Complexity, complex digital systems and AI
• There are several aspects of technological systems and digital life that can be analyzed and critiqued from a sociological perspective:
1. Sheer scale of systems of digitization, technological automation and of social relations
threaded through artificial intelligence
- Complex, interdependent systems of digitization are flourishing today as the ‘flow architectures’ that increasingly order and reorder social relations, production, consumption, communications, travel and transport, and surveillance around the world
2. AI is not a new technology which simply transcends, or renders redundant, previous
technologies
- The complex systems of AI should not be viewed as simply products of the
contemporary, but depend upon technological systems which have developed at
earlier historical periods
- John Urry → “Many old technologies do not simply disappear but survive through path-dependent relationships, combining with the ‘new’ in a reconfigured and unpredicted cluster”
3. We need to recognize the global reach of AI as embedded in complex adaptive systems
4. There is the sheer ubiquity of AI → various complex, interdependent digital systems which are today everywhere transferring, coding, sorting and resorting digital information instantaneously across global networks
- With systems of digitization and technological automation, information processing
becomes the pervasive architecture of our densely networked environments
- The rise of systems of digital technology has created a new form of invisibility, which is linked to the characteristics of software code, computer algorithms and AI protocols and to their modes of information processing
5. The systems which are ordering and reordering digital life are becoming more complex
and increasingly complicated
- Moore’s law → refers to the doubling of computing power every two years (compounded over twenty years, that implies roughly a 2^10 ≈ 1,000-fold increase)
- The ubiquity of digital technology, and especially complexity in AI and robotics,
involves multimodal informational traffic flows, which in turn substantially depends on
technical specialization and complex expert systems
6. AI technologies go all the way down into the very fabric of lived experience and the
textures of human subjectivity, personal life and cultural identities
- Complex adaptive digital systems and technological infrastructures aren’t ‘just there’
processes or happenings, but are condensed in social relationships and the fabric of
people’s lives
- Complex digital systems generate new forms of social systems
- Systems of digital technology increasingly wrap the self in experiences of
instantaneous time, and the individualized work of constituting and reinventing digital
identities is built out of instantaneous computer clicks
7. The technological changes stimulated by the advent of complex digital systems involve
processes of transformation of surveillance and power quite distinct from anything
occurring previously
- The expansion of surveillance capabilities is a central medium of the control of social activities → especially the control over the spacing and timing of human activities (watch, observe, record, track and trace human subjects)
- Critics of digital surveillance tend to be heavily influenced by Michel Foucault’s notion of panoptic surveillance → Foucault saw the panopticon as the prototype of disciplinary power in modernity, and argued that prisons, asylums, schools and factories were designed so that those in positions of power could watch and monitor individuals from a central point of observation
- The author thinks it is mistaken to see digital surveillance as maximizing disciplinary power of the kind described by Foucault → some digital systems of surveillance depend upon authoritative forms of monitoring and control, and can be likened to many instances of direct supervision. But this is not the only aspect of surveillance which comes to the fore in conditions of digital life
- Today, surveillance is often indirect and based upon the collection, ordering and control of information
- Sousveillance → refers to people watching each other at a distance through digital technologies. People become part of environments which are sentient and smart, and such digital systems increasingly promote swarming behavior
- Real dangers include disturbing effects on free speech and freedom of expression, loss of liberty and the erosion of democracy
Possible questions, based on the text
→ What are some of the potential consequences of the increased use of digital surveillance technologies on individual privacy, civil liberties, and human rights, and how can we ensure that these technologies are used in a responsible and ethical manner?
→ What are the major threats to human freedom and privacy resulting from corporate surveillance over the private and public lives of citizens?
- Example: facial recognition in public spaces, or even on your smartphone. What consequences do these capabilities have for human freedom and privacy?
Elliott Chapter 8 – Ethics of artificial intelligence
Historical and intellectual background
• Artificial intelligence → any artificial computational system that shows intelligent behavior, i.e., complex behavior that is conducive to reaching goals
• AI gets under our skin further than other technologies → because the goal of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings
• The main purposes of an AI agent involve: sensing, modelling, planning, action,
perception, text analysis, natural language processing (NLP), logical reasoning, game-
playing, decision support systems, data analytics, predictive analytics
• The latest EU policy documents suggest that ‘trustworthy AI’ should be lawful, ethical and technically robust, and they give the following seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being and accountability
Main debates
Privacy
• This is about the access to private data and data that is personally identifiable
• Privacy has several well-recognized aspects:
- Information privacy
- Privacy as an aspect of personhood
- Control over information about oneself
- The right to secrecy
• The main data collection by the five biggest companies appears to be based on:
- Deception
- Exploiting human weakness
- Furthering procrastination
- Generating addiction
- Manipulation
• The primary focus of social media, gaming, and most of the internet is to gain, maintain and direct attention → data supply
• Device fingerprinting → a technique for identifying a device (and thereby its user) from the combination of its observable characteristics; see the sketch after this list
• Surveillance capitalism → a concept which denotes the widespread collection and commodification of personal data by corporations (the business model of the internet)
• There is ever-growing data collection about users and populations, and those who hold the data know more about us than we know about ourselves
• Users are manipulated into providing data, unable to escape this data collection and
without knowledge of data access and use
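A sketch of how device fingerprinting can work (the attribute names and values below are invented for illustration): individually harmless characteristics are combined into a hash that is stable enough to re-identify the same device across visits, without any cookie.

    # Illustrative device fingerprint: combine observable attributes into a hash
    import hashlib

    device_attributes = {
        "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
        "screen": "1920x1080x24",
        "timezone": "Europe/Amsterdam",
        "language": "nl-NL",
        "installed_fonts": "Arial;Calibri;Helvetica",
    }

    fingerprint = hashlib.sha256(
        "|".join(f"{k}={v}" for k, v in sorted(device_attributes.items())).encode()
    ).hexdigest()

    # The same device tends to produce the same value on every visit
    print(fingerprint[:16])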
Manipulation
• Manipulation mostly aims at the user’s money
• With sufficient prior data, algorithms can be used to target individuals or small groups with
just the kind of input that is likely to influence these individuals
• Dark patterns → interface designs that steer users into choices they would not otherwise make; such manipulation is the business model in much of the gambling and gaming industries, and of low-cost airlines
Opacity
• Opacity and bias are central issues in ‘data ethics’ or ‘big data ethics’
• Data analysis is often used in predictive analytics in business, healthcare and other fields,
to foresee future developments
• If a system uses machine learning, it will be opaque even to the expert, who will not know
how a particular pattern was identified, or even what the pattern is
• Bias in decision systems and datasets is exacerbated by this opacity
• There is a fundamental problem for democratic decision-making if we rely on a system
that is supposedly superior to humans but cannot explain its decisions
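A minimal sketch of this opacity problem (scikit-learn assumed; the “applicant” is a synthetic example, not real data): the trained ensemble produces a verdict and a score, but its reasoning is spread over hundreds of trees and thousands of split thresholds, so even an expert cannot point to the pattern behind one particular decision.

    # Sketch of model opacity: a prediction without a human-readable explanation
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
    model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

    applicant = X[:1]                      # one hypothetical case
    print(model.predict(applicant))        # a verdict...
    print(model.predict_proba(applicant))  # ...and a score, but no explanation
    # The "reasoning" is spread across hundreds of trees and thousands of nodes
    print(sum(tree.tree_.node_count for tree in model.estimators_))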