CRIME & TECHNOLOGY SOC2066
To What Extent Could Judicial Decision-Making be Automated?
To What Extent Should It Be?
For many years, questions have been raised about the credibility of the contemporary judicial decision-making process. These have evolved into studies analysing the impact that determining variables in court, such as state-level public opinion and judicial ideology, have upon the jury in comparison to automated technological processes. More specifically, this paper explores the extent to which ‘technomia’ already plays an instrumental role in influencing social control in court, and whether this is morally defensible. Furthermore, it covers the diverse potential that technology has to replace numerous tools within the juridical system, both at an informal level and, perhaps more seriously, within formal mechanisms. Finally, from analysis of the evidence, it draws a conclusion and recommends to what extent judicial decision-making should, and would, benefit from incorporating elements of automated technology and why.
A growing concern has emerged regarding the historically narrow social background of the judiciary, as statistics show a bias towards white, well-educated males. Despite several initiatives since the mid-1990s to draw the judiciary from a more diverse social background, movements such as the BLM petition persist, and the House of Lords Constitution Committee (2012) concluded that this bias is still ever-present: “the proportion of women judges, black, Asian and BAME from underrepresented groups has increased too slowly.” Consequently, the issue of jury prejudice, often fostered through pre-trial publicity, has become a prominent concern in criminal trials. Whether prejudice is predetermined by jurors’ backgrounds or arises from their interpretation of ‘extra-evidential’ variables, it has a profound impact upon each defendant’s fate and upon the functioning of society as a whole. Research has examined the impact that ‘extra-evidential’ variables, such as eyewitness testimony, have upon jurors’ attitudes and perceptions in order to establish whether juries are a legitimate and appropriate means of decision-making. McAllister (1986) found that juries too often fall victim to feelings of empathy, and concluded that they are insufficiently attuned to judge different types of evidence correctly without forming emotional connections. Heller (2004) supports these findings, noting that juries tend to be positively influenced when factual evidence is presented either by an expert or in forensic form. A lack of diversity among juries calls the credibility of the current decision-making process into question and paves the way for greater technological input to deliver a more consistent, factual and legitimate system.
Arguably, technology already plays an integral role in many parts of society, including the education system, government, the state, the medical system and, most relevantly, the criminal justice system. Although risk assessments have traditionally been determined by human discretion and intuition, technology is having an ever-growing statistical, data-driven effect upon such evaluations. At present, AI complements areas of the justice system by providing factual, reliable information, despite its otherwise complex relationship with the organs of justice, and is therefore becoming a key control mechanism. Research in Canada demonstrates a growing use of, and reliance upon, automated judicial decision-making to analyse data, whether to determine eligibility, predict risks or recommend sentences, and finds that it maintains a widespread presence across a broader range of legal issues, from child welfare to immigration. The AI Now Institute (2019) comments that “automated decision-systems can exist in any context where the government bodies or agencies evaluate people, cases, allocate resources, focus scrutiny or make nearly any sort of decision.” However, having the opportunity to incorporate automated systems into the modern criminal justice system does not in itself make doing so a valid or morally correct decision.
Automated decision-making in the context of juries operates using algorithms and artificial intelligence (AI). An algorithm performs tasks and makes decisions according to defined rules, and is therefore the more relevant concept when discussing decision-making environments; AI, by contrast, has the potential to improve access to justice and provide benefits for historically marginalised populations, and is already incorporated into crime-prevention techniques such as facial recognition, crime mapping and natural language processing. Contini (2020) argues that courts value a sense of independence, fairness and integrity which simply cannot be replicated by technological innovation. The Fundamental Rights Principle states that “the design and implementation of AI must be compatible with fundamental rights”; given this, and the lack of accountability for any misjudgement, using technology as more than a tool to support decisions would leave the judicial decision-making process, ironically, liable to error.
In the judicial narrative, technologies have the potential to transform our lives and to mediate much of society’s behaviour. The term ‘technomia’ denotes a socio-legal order determined largely by technology, and signifies an unprecedented stage of society in which AI and algorithms may overtake human intelligence and become the dominant source of knowledge and decision-making. Moreover, some argue that automated decision-making is steadily driving towards more powerful regulatory law, ultimately raising the threat of technology regulating society rather than society regulating technology.
Since the industrial revolution, automated technology has grown exponentially, massively transforming our environment and rendering many occupations redundant. Despite theories suggesting that automated systems pose less of a threat to occupations centred on human judgement and interaction, a study by Frey & Osborne (2013), which evaluated the automation potential of 702 occupations, found an unexpectedly high potential among lawyers and judges. This could be the result of a rising number of disputes challenging the capacity of the criminal justice system to provide qualitative and effective access to justice. Realistically, algorithms have become more popular because they can exercise ‘machine learning’ to analyse information in bulk and detect patterns, ultimately offering a more consistent, cost-effective and efficient service. Završnik (2020) demonstrates how algorithms already exercise the ability to perform ‘predictive inferencing’ by means of profiling in the judicial decision-making process, analysing personal information to produce correlations and predict behaviours.
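To make the idea of ‘predictive inferencing’ concrete, the following is a minimal, hypothetical sketch of how such a risk-scoring instrument operates: weighted personal attributes are combined into a numeric score, which is then translated into the kind of risk band a court report might cite. The feature names, weights and thresholds here are invented purely for illustration and do not reflect any real instrument used by a justice system.

```python
from math import exp

def risk_score(prior_offences: int, age: int, employed: bool) -> float:
    """Return a risk score in [0, 1] from hypothetical weighted features."""
    # Linear combination of features (the weights are arbitrary assumptions).
    z = 0.4 * prior_offences - 0.05 * (age - 18) - 0.8 * (1 if employed else 0)
    # Squash the raw score into [0, 1] with a logistic function.
    return 1 / (1 + exp(-z))

def risk_band(score: float) -> str:
    """Translate a numeric score into a court-style risk band."""
    if score < 0.33:
        return "low"
    if score < 0.66:
        return "medium"
    return "high"

# Example: a hypothetical young defendant with several prior offences
# and no employment is classified as higher risk than an older,
# employed defendant with no priors.
print(risk_band(risk_score(prior_offences=4, age=22, employed=False)))
print(risk_band(risk_score(prior_offences=0, age=40, employed=True)))
```

Even this toy example shows why such systems attract scrutiny: the output is only as fair as the chosen features and weights, which encode human judgements about which personal attributes should count against a defendant.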
Byrne (2011) suggests that, in theory, the rapid development of technology threatens the necessity for criminal justice figures such as police officers on patrol, when CCTV, speed cameras, electronic monitoring, supermax prisons and similar tools already exist and are constantly improving. In the context of the criminal justice system, technological innovations have adapted and reshaped sentencing software, electronic filing and monitoring, which could, potentially, mean the displacement of traditional legal concerns by automated predictions. Fortunately, we are not yet at this stage, and many recognise the unparalleled value of human interaction in these areas; as Marx (1991) put it, “there is no soul in the new machine”.
Consequently, the rapid deployment of automation is attracting conflicting narratives, and concerns have been voiced that, without restrictions or safeguards upon its usage, it could compromise data protection, privacy law and social order. The most favoured form of regulation to overcome this issue is the ‘regulation of technology’, which highlights the duty to protect public health, human rights, the environment, safety and security. It does not aim to prevent the development and innovation of technology in the legal sector, but rather to regulate the pace, scope and extent of that innovation (Hook, 2019).
However, the task of regulating mass algorithms, AI and automated processes is extensive and would require a dedicated board of regulatory bodies to ensure non-discriminatory and fair processing.