Ms Rostalski, where is AI already providing support in the legal sector? And where do the technological limits (still) lie?
Frauke Rostalski: In addition to the generally growing importance of AI in almost all areas of society, AI systems are increasingly being used in the legal sector. For private individuals, AI systems can primarily play a supporting role in legal self-help, helping them to categorise and assess their specific legal situation without the need for costly consultation with a lawyer. However, technological limits arise from the (still) inadequate generative capabilities and error-proneness of such systems, which is why completely outsourcing legal advice to them does not appear possible for this reason alone.

For law firms and lawyers, the potential of AI systems lies primarily in supporting time-consuming tasks such as document review, summarising or searching through documents, and legal research in general. Here, too, the susceptibility to errors proves problematic and thus marks a technological limit: these systems often deliver fabricated results (‘hallucinations') that can have far-reaching (legal) consequences if they are not checked properly. Owing both to this susceptibility to errors and to potential data protection problems, AI systems do not (yet) offer genuine relief for lawyers and law firms in this area of application.

In the judiciary, AI systems can be used to support the courts, for example in analysing evidence, taking minutes or sentencing. Appropriate applications can increase the efficiency and speed of the justice system and reduce the workload of judges. The most significant challenge here is the (poor) quality of the training data of such AI systems, which harbours the risk of unjustified discrimination. In addition, the lack of digitisation - of judgments, for example - means that sufficient training data is often unavailable.
How can AI systems relieve the burden on courts and contribute to fair judgements?
Frauke Rostalski: The use of AI systems in court is conceivable in two ways. The first is decision support systems: AI can be used in document analysis or creation, enabling large amounts of data to be searched and analysed in a short space of time. In future, AI systems can help judges find relevant judgements or other data more quickly and reliably. AI systems can also support decision-making itself, for example in criminal law when determining the appropriate sentence. So-called sentencing databases, which enable judges to view as many judgements as possible and to compare and categorise the sentencing considerations they list, can relieve the burden on courts and, above all, contribute to fairer judgements - see the Smart Sentencing research project that I supervise.
The highest conceivable level of AI use in the legal system is so-called decision replacement systems: AI systems intended to replace the work of judges as such, the so-called ‘Robo-Richter'. From a technical point of view, such systems are particularly demanding: a ‘Robo-Richter' or ‘iudex ex machina' would have to master the traditional activity of finding justice. There are also significant constitutional objections to the admissibility of decision-replacement software. Under Art. 101 para. 1 sentence 2 of the Basic Law for the Federal Republic of Germany, no one may be removed from the jurisdiction of his lawful judge - traditionally understood to mean a natural person. Not least, the risk of error inherent in every judgement speaks against delegating human decision-making to technical systems. Doubts can never be completely ruled out, no matter how comprehensively the facts are established. This risk is generally accepted under certain conditions, namely when legally sufficient proof supporting a conviction has been provided. Yet the risk appears socially acceptable only if the decision is made by an equal member of society - a human being. Delegating it to technology, by contrast, proves unacceptable, at least for important decisions, especially those in the area of criminal law.
Trust in an independent judiciary is a basic prerequisite for a functioning democracy. What does this mean for the use of AI in the legal system?
Frauke Rostalski: In principle, transparency is a factor that has a decisive impact on citizens' trust in the state. AI systems harbour the risk of being so-called black boxes. When AI is used to support judgements, it is therefore all the more important to ensure that any proposals it makes are adequately explained and justified. This is the only way to create social acceptance of the decision and thus legal peace. Trust in new technologies is also created through regulation - through the establishment of laws, norms and standards, and through certification. The latter empowers consumers to assess AI systems on the basis of clear criteria and to base their own usage decisions on that assessment.
Art. 97 para. 1 of the Basic Law for the Federal Republic of Germany guarantees the independence of the judiciary. This already has implications for the development of AI systems intended for use in the judiciary. Purchasing such systems from private companies would make the judiciary dependent on them; independent proceedings would then hardly seem possible. Developing a dedicated state-run system therefore appears preferable. Even then, the principle of the separation of powers must be observed, since funding regularly lies in the hands of the executive. The independence of individual judges must also be guaranteed: the possibility of using AI to support decision-making must under no circumstances turn into an obligation to use it, let alone a compulsion to adopt automatically generated decision proposals.
Detailed expertise on the potential and challenges of using AI in the legal sector can be found in the white paper ‘Artificial intelligence and law - on the way to the robo-judge?' (in German), published by Plattform Lernende Systeme.
The interview is released for editorial use (provided the source is cited © Plattform Lernende Systeme).