Radboud Universiteit

Computer output always lags behind; law is forward-looking

02 September 2024 | Research news item

The computerisation of law has grown significantly in recent years, with serious consequences. Take, for instance, the Dutch Tax Administration's childcare benefits scandal and similar suffering in other countries, such as the Post Office scandal in the UK, in which subpostmasters were wrongly convicted of theft, fraud and false accounting. Or consider so-called 'legal tech' that scours legal text corpora of legislation and court rulings using ChatGPT variants, for instance to produce written arguments for a lawyer's plea. Or the Dutch judge who recently used ChatGPT to determine the damages in a lawsuit. This is a risky business, which is why it is important, to the extent that investing in these technologies is at all useful, that these legal technologies are developed by lawyers and computer scientists together, making sure they align with the rule of law, Mireille Hildebrandt tells us.

Together with a team of lawyers and computer scientists, Hildebrandt, Professor Emeritus of ICT and the Rule of Law at Radboud University and the Free University of Brussels, has been researching the development of what some call 'computational law'. She has been doing so since 2019, long before most people had even heard of OpenAI or large language models (LLMs), as principal investigator (PI) of a prestigious project funded by the European Research Council: the ERC Advanced Grant project 'Counting as a Human Being in the Era of Computational Law' (COHUBICOL). She recently presented the findings. 'It was clear as far back as 2019 that people were starting to believe that computerising law would be the solution to all sorts of problems. The volume of work is mounting, for example: the European Court of Human Rights in Strasbourg at one point had a backlog of 16,000 cases. The idea is that, because there is neither enough budget nor enough good lawyers to clear this backlog, we will have to resort to, for example, computer systems that can predict the outcome of court cases.'

To find out whether computational law really could be a solution to these kinds of problems, Hildebrandt and her team developed an online web tool, the 'Typology of legal technologies'. The Typology identifies and compares the most relevant types of applications, datasets and other systems, and evaluates them from a technical as well as a legal perspective, all presented in an easy-to-navigate interface. The Typology is therefore not just a database: it offers a method and a mindset that can be used to put to the test the functionality claims made by vendors of legal technologies.

Under the bonnet

The fact is that many of the commercial providers of those systems have no interest in substantiating their promises. 'Law firms, corporate law departments and public administrations often procure their software without the knowledge necessary to assess functionality claims, falling victim to fear of missing out (FOMO) when it comes to AI. Not enough budget or time is spent on verifying whether these technologies actually do what their PR claims,' Hildebrandt explains. 'The providers of those systems often hide behind "trade secrets" or "intellectual property rights", and no one gets to look under the bonnet. This makes it very difficult to scrutinise the reliability of those systems. Added to this, the Terms of Service often include a disclaimer stating that the risk of deployment lies with the buyer. Based on the Typology and its underlying methodology, lawyers can now learn to assess the reliability of new tools, while also evaluating their potential impact on law and the rule of law. A key methodological aspect is that lawyers joined forces with computer scientists when designing and configuring the Typology.'

Hildebrandt believes it is essential that lawyers and computer scientists collaborate more closely to develop future tools. 'Computer scientists often think that law is a problem waiting to be solved. But law is precisely about calling out problems, and solutions must be contestable. On the other hand, many lawyers find all those figures, data and programming languages too difficult or dull, and fall for what software makers promise. "It's expensive, so it must be good," is too often the thinking behind this. By training, however, we lawyers are adversarial; we know how to raise the right questions and to foresee counterarguments. We need to learn to raise the right questions when it comes to investing in these systems, so that a proper dialogue between lawyers and developers of legal technologies becomes part of the legal curriculum.'

To that end, Hildebrandt and Diver (one of her postdoctoral researchers) have also set up an international academic journal, together with leading international lawyers and computer scientists: the Journal of Cross-Disciplinary Research in Computational Law (CRCL). The journal publishes papers by lawyers and computer scientists that are accepted after the usual double-blind peer review (by reviewers from the same discipline as the author). The added value of the journal is that it not only publishes excellent work by lawyers, computer scientists, philosophers and social scientists, but also publishes, after each article, a short reply by a scientist from the 'other' discipline, followed by a response from the author. This prompts a serious, academically sound discussion on reliability issues, unwarranted bias and other implications of using legal technology.

Computer output always lags behind

Hildebrandt warns that AI systems necessarily base their answers on historical data (or computer code that was written in the past). 'You can't train an AI system on future data, and you can't be constantly rewriting the code. If you rely on these kinds of systems, you actually run the risk of fossilising the law. The fact is that these systems lag behind by definition: they are trained to repeat the past. The law, instead, is designed to evolve: judgments are adaptive and forward-looking without breaking the line with legitimate expectations raised in past decisions - that's no mean feat. The relevant circumstances and societal insights are constantly changing, and it is up to the judiciary to determine what those circumstances mean for the interpretation of the law.'

Hildebrandt, meanwhile, has high hopes for the future. 'I believe that the advent of ChatGPT has given lawyers a wake-up call. Before that, the conversations were often non-committal and abstract, or the opposite: extremely technical. The discussion on legal technologies is now much closer to home. It concerns very tangible issues: if a person has been affected by incorrect or biased automated decision-making, who is liable? Are we OK with the notion that big commercial companies process and sell legal text corpora and thus make money by providing access to the law? And do the algorithms have to be as unfathomable as some believers in AI like to profess? It's imperative that lawyers and computer scientists seriously explore the underlying methodology of each other's discipline, so that the implications of novel legal technology for law and society are recognised for what they are, and in good time. The CRCL journal and the Typology will certainly carry on contributing to this.'

Contact information

For further information, please contact Mireille Hildebrandt or the Science Communication team via +31 24 361 6000 or media[at]ru.nl.

Theme: Artificial intelligence (AI), Privacy, Law