In a European context marked by extremely slow civil and criminal proceedings, technology appears as a pragmatic response. Automating repetitive tasks, sorting through case files, flagging inconsistencies, and retrieving precedents can free judges to focus on the more complex aspects of a case. However, the entry of AI into the courtroom raises questions that go well beyond shortening trials. The central one is transparency.
In European systems, justice is not limited to the final outcome: it is essential that the process leading to it be comprehensible and, above all, verifiable. Many artificial intelligence systems, however, function as “black boxes,” producing results without making explicit the criteria on which they rest.
A concrete example of this problem is the “SyRI” (Systeem Risico Indicatie) system in the Netherlands. This algorithm, developed by the Dutch government to detect potential social benefit fraud, combined numerous government databases, including tax data, land records, vehicle registrations, and employment and income information. The data was then analyzed automatically to assign a risk level to citizens: a person deemed “at risk” was reported to the authorities for further investigation.
The main problem with this mechanism was, precisely, transparency. The algorithm was not publicly available, and the authorities never fully explained which data was combined, what the risk criteria were, or how suspicious profiles were constructed. This made it impossible for citizens to challenge the system’s decisions. In 2020, the District Court of The Hague declared the use of the system unlawful: according to the court, SyRI violated the right to private life and failed to adequately guarantee transparency in the decision-making process.
If the parties to a proceeding cannot understand how an algorithm contributed to the decision, the possibility of contesting it effectively is lost. This opens the door, among other things, to the risk of indirect discrimination. Algorithms learn from historical data: if the data from which they learn reflects social inequalities or discriminatory practices accumulated over time, the system risks reproducing them.
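The mechanism by which learned bias perpetuates itself can be illustrated with a minimal, entirely hypothetical sketch: a naive model that “learns” the historical flag rate of each neighbourhood will assign new residents exactly the disparity embedded in past decisions. All names and figures below are invented for illustration.

```python
# Minimal sketch (hypothetical data): a naive risk model trained on
# historical flagging decisions reproduces the bias those decisions contain.
from collections import defaultdict

# Invented historical records: (neighbourhood, was_flagged_for_investigation)
history = [
    ("district_a", False), ("district_a", False), ("district_a", True),
    ("district_b", True),  ("district_b", True),  ("district_b", False),
]

# "Training": compute the historical flag rate per neighbourhood.
by_area = defaultdict(list)
for area, flagged in history:
    by_area[area].append(flagged)
risk = {area: sum(flags) / len(flags) for area, flags in by_area.items()}

# "Prediction": a new resident simply inherits their area's historical rate,
# regardless of any individual characteristics.
print(round(risk["district_a"], 2))  # 0.33
print(round(risk["district_b"], 2))  # 0.67
```

The model never sees a variable called “group” or “origin”; the disparity enters solely through past outcomes, which is precisely why such discrimination is indirect and hard to detect without access to the system’s internals.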
In criminal matters, tools that estimate the likelihood of recidivism or suggest precautionary measures could disproportionately impact certain groups of defendants, violating the European principles of equality and non-discrimination. A further issue is liability: the European legal tradition places the decision in the hands of a human being, who assumes responsibility for it and must state the reasons behind it.
Aware of these risks, the Council of Europe adopted in 2018, through the CEPEJ, an Ethical Charter on the use of artificial intelligence in judicial systems, emphasizing the centrality of human oversight, non-discrimination, and, above all, transparency.
At the same time, the European Union approved the AI Act, which classifies AI systems used in the administration of justice as high-risk, imposing obligations regarding human supervision, evaluation, and the protection of fundamental rights. Technology, in the European view, can support the judicial function but cannot replace it.
Finally, there remains a more cultural dimension. Judging is not simply an exercise in probabilistic calculation. In European systems, the trial is also a space for listening, for adversarial debate, and for the public statement of reasons. Entrusting this task entirely to an algorithm would reduce justice to a technical operation, depriving it of the human component that constitutes its essence.
