Robot law: what happens if intelligent
machines commit crimes?
By Jeffrey Wale and David Yuratich, Lecturers in Law
The fear of powerful artificial intelligence and technology is a popular theme, as seen in films such as Ex Machina, Chappie, and the Terminator series.
And we may soon find ourselves
addressing
fully
autonomous
technology with the capacity to
cause damage. While this may be
some form of military wardroid or law
enforcement robot, it could equally be
something not created to cause harm,
but which could nevertheless do so
by accident or error. What then? Who
is culpable and liable when a robot or
artificial intelligence goes haywire?
Clearly, our existing notions of guilt and justice do not map neatly onto such a scenario.
While some may choose to dismiss
this as too far into the future to concern
us, remember that a robot has already
been arrested for buying drugs. This
also ignores how quickly technology
can evolve. Look at the lessons from
the past – many of us still remember
the world before the Internet, social
media, mobile technology, GPS –
even phones or widely available
computers. These once-dramatic
innovations developed into everyday
technologies which have created
difficult legal challenges.
A guilty robot mind?
How quickly we take technology for
granted. But we should give some
thought to the legal implications. One
of the functions of our legal system
is to regulate the behaviour of legal
persons and to punish and deter
offenders. It also provides remedies
for those who have suffered or are at
risk of suffering harm.
Legal persons – humans, but also
companies and other organisations
for the purposes of the law – are
subject to rights and responsibilities.
Those who design, operate, build or
sell intelligent machines have legal
duties – what about the machines
themselves? Our mobile phones, even with Cortana or Siri attached, do not fit the conventions for a legal person. But what if the autonomous decisions of their more advanced descendants cause harm or damage in the future?
Criminal law has two important
concepts. First, that liability arises
when harm has been or is likely to
be caused by any act or omission.
Physical devices such as Google’s driverless car clearly have the potential to harm, kill or damage property. Software also has
the potential to cause physical harm,
but the risks may extend to less
immediate forms of damage such as
financial loss.
Second, criminal law often requires
culpability in the offender, what is
known as the ‘guilty mind’ or mens rea – the principle being that the offence,
and subsequent punishment, reflects
the offender’s state of mind and role
in proceedings. This generally means
that deliberate actions are punished
more severely than careless ones.
This poses a problem for treating autonomous intelligent machines under the law: how do we demonstrate the intentions of a non-human, and how can we do so within existing criminal law principles?
Robocrime?
This isn’t a new problem – similar
considerations arise in trials of
corporate criminality. Some thought
needs to go into when, and in what
circumstances, we make the designer
or manufacturer liable rather than the
user. Much of our current law assumes
that human operators are involved.
For example, in the context of highways, the regulatory framework assumes that a human driver is in control to at least some degree.
Once fully autonomous vehicles
arrive, that framework will require
substantial changes to address the
new interactions between human and
machine on the road.
As intelligent technology that bypasses direct human control becomes more advanced and more
widespread, these questions of risk,
fault and punishment will become
more pertinent. Film and television
may dwell on the most extreme
examples, but the legal realities are
best not left to fiction.
2 July 2015
The above information is reprinted with kind permission from Bournemouth University. Please visit for further information.
Originally published on The Conversation.
© Bournemouth University 2016