Léonard Van Rompaey: How does the Law deal with machines making decisions?
When machines start making decisions, they create problems for our legal systems, because those systems are used to dealing with human beings making decisions, not with objects. Lawyers and policy-makers have researched and debated these questions extensively over the past few years, but the discussion has so far focused mostly on the specific, narrow issues that robots and artificial intelligence (AI) create, not on this pervasive, low-key conceptual problem of machines making the decision. While the question is stimulating at a theoretical level, it is also meaningful to the daily operations of businesses that want to deploy or use AI-based systems, because it ultimately shapes day-to-day legal questions such as the transparency of algorithmic decision-making, the patentability of AI-made designs, or the more down-to-earth question of how responsibility for an accident caused by an advanced robot containing AI modules is shared between members of the value chain.
Machines that make discretionary decisions
With contemporary AI, a machine learns by itself how to answer a question or perform a task, based on the examples it is shown. This means that once it has learned and is out in the real world, it can be given a mission by a human user, which it will then try to accomplish with little help from that user, in a way determined by what it has learned, also on its own. In a sense, this ability resembles the discretion, the ‘freedom’ or ‘judgement’, that a person exercises in fulfilling their professional missions. This ability of machines to make decisions within the limits of a mission is what is so foreign to, and so difficult for, our legal systems. It is also what has led some to wonder whether we should give legal personhood to robots: they possess a quality that is very human by nature.
Neither objects nor persons: the ambivalence of AI
On the other hand, robots and AIs also lack qualities that human persons possess when it comes to interacting with, understanding, and respecting the law: a wider understanding of why the law needs to be obeyed, of why a rule has to be followed and respected, and of how other persons around us do the same. So while they possess some human qualities with regard to the law, they miss other important ones: they fit neatly neither as ‘objects’ nor as ‘persons’. This ambivalence is at the heart of the difficulties that AI and robotics create for our legal systems, and it is observable in every field of legal practice: traffic regulation, medical law, administrative law, responsibility for accidents, the laws of war, and so on. In the end, there is no easy answer to this conceptual problem. Perhaps the wisest solution is to force ourselves to categorize robots and AIs as objects, and to deal with the remaining problems through this newfound understanding of what the technology is and what it means for our legal systems. Once we know this, it becomes easier to devise business-friendly solutions that help companies deploy and acquire advanced robots and AI-based systems.