The landscape of law is perpetually evolving, shaped by social change and technological advancement. One such advancement is artificial intelligence (AI), which is steadily gaining ground in the field of justice. However, we, as lawyers, often have reservations, mainly about whether the interests of our clients will be fully safeguarded. After all, the purpose of justice is not only speed and flexibility but also accessibility for everyone, a sense of security, and the proper administration of justice. Reasonable questions therefore arise about whether evidence will be adequately evaluated, whether human judgment is irreplaceable, and whether AI can preserve the flexibility of the hearing process, given that a trial is a living organism. And what can AI do for the administration of justice, and what does that require?
AI can be used as a tool or as evidence in the hands of lawyers and judges, as a means of alternative dispute resolution, and even as an instrument for issuing judicial decisions. AI can process vast amounts of data quickly, allowing for faster case analysis and supporting judges and lawyers in decision-making by providing insights, identifying relevant precedents, and even suggesting potential rulings based on past decisions.
A characteristic example is eDiscovery: the process of identifying, collecting, and producing electronically stored information (ESI) in response to a request for production in a lawsuit or investigation. eDiscovery relies on machine learning: a model is trained to recognize which documents, within a large volume of information, are relevant. The parties agree on the search terms and coding to be used, and the judge assesses and confirms the agreement. This method of document review is recognized by courts in the United States, some European countries, and the United Kingdom. It is faster and more accurate than manual review and certainly contributes to accelerating the process, easing the judge's task.
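By way of illustration only, the sketch below shows the general shape of such a relevance classifier, assuming a Python/scikit-learn workflow. The documents and labels are hypothetical stand-ins for attorney-coded training data, not any court-approved protocol.

```python
# Minimal sketch of machine-learning document review ("predictive coding")
# as used in eDiscovery. Assumes scikit-learn is installed; the documents
# and relevance labels are hypothetical stand-ins for attorney-coded data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# A handful of attorney-coded documents: 1 = relevant, 0 = not relevant.
train_docs = [
    "Email discussing the merger negotiations and share price",
    "Lunch menu for the office cafeteria",
    "Board minutes approving the merger term sheet",
    "Holiday party invitation for all staff",
]
train_labels = [1, 0, 1, 0]

# TF-IDF turns each document into a weighted word-frequency vector;
# logistic regression then learns which terms signal relevance.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_docs, train_labels)

# Score the remaining corpus so likely-relevant documents can be
# surfaced for human review instead of reviewing every file manually.
corpus = [
    "Draft press release announcing the merger",
    "Parking garage access instructions",
]
for doc, p in zip(corpus, model.predict_proba(corpus)[:, 1]):
    print(f"{p:.2f}  {doc}")
```

In practice, the trained model only ranks documents for human reviewers; the final relevance call, and the protocol itself, remain subject to the parties' agreement and the court's approval.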
On the other hand, although AI is designed to be objective, it can inadvertently perpetuate biases present in the data on which it was trained. If the training data contains biases, the AI system may replicate them in its decision-making, leading to unfair outcomes. For example, in the United States, before judges decide on the severity of a sentence or whether to grant probation, the relevant authority is required to present the court with a report on the specific offender's risk of recidivism.
Some states use AI programs to compile this report. Wisconsin, for instance, used the COMPAS program in the sentencing of a defendant named Loomis, based on a questionnaire he had completed. Loomis argued that both his right to a fair trial (due process) and his right to an individualized sentence were violated, claiming that the proprietary nature of the software prevented him from challenging its scientific validity and that the sentencing gave undue weight to his gender. According to the Supreme Court of the State of Wisconsin, “if the COMPAS risk assessment report was the decisive factor in determining the sentence, it would raise issues of fairness regarding whether the defendant actually received a personalized assessment of his sentence.”
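The mechanism behind this concern can be made concrete with a small, deliberately simplified experiment: if historical data encodes unequal treatment of two groups, a model trained on it reproduces the inequality even when the protected attribute itself is never used as a feature. The sketch below uses synthetic data and does not represent COMPAS or any real risk-assessment tool.

```python
# Illustrative sketch: a model trained on biased historical data reproduces
# that bias. The data is synthetic and deliberately skewed; it does not
# represent COMPAS or any real risk-assessment instrument.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)              # protected attribute: 0 or 1
prior_arrests = rng.poisson(1 + group, n)  # group 1 is over-policed, so it
                                           # accumulates more recorded arrests

# Historical "recidivism" labels driven partly by arrest counts,
# i.e. partly by the policing bias itself.
label = (prior_arrests + rng.normal(0, 1, n) > 2).astype(int)

# Train only on the seemingly neutral feature (arrest count)...
model = LogisticRegression().fit(prior_arrests.reshape(-1, 1), label)
pred = model.predict(prior_arrests.reshape(-1, 1))

# ...yet the predicted "high risk" rate still differs sharply by group.
for g in (0, 1):
    print(f"group {g}: predicted high-risk rate = {pred[group == g].mean():.2f}")
```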
So, the use of AI, with its particular characteristics (e.g., opacity, complexity, data dependency, autonomous behavior), may adversely affect certain fundamental rights enshrined in the EU Charter of Fundamental Rights. These risks of artificial intelligence have not left the European legal world unaffected. In December 2018, the European Commission for the Efficiency of Justice (CEPEJ) of the Council of Europe adopted ethical guidelines for the use of AI in the administration of justice. The ethical principles are oriented towards respect for fundamental human rights, the right to a fair trial, and equal treatment. Additionally, all data provided for processing must come from certified sources, transparency must be maintained, and the results must naturally remain subject to human oversight.
A few months ago, something even more important was instituted: a new EU regulation on artificial intelligence.
In April 2021, discussions began on the European Commission's proposal that artificial intelligence systems used in different applications be analyzed and classified according to the risk they pose to users. Depending on the risk level assigned, a different degree of regulation applies. The Act's regulatory framework defines four levels of risk for AI systems: unacceptable, high, limited, and minimal or no risk.
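As a plain illustration of this four-tier structure, one might represent it as a simple lookup; the example systems and their assignments below are simplified illustrations drawn from commonly cited cases, not legal determinations under the Act.

```python
# Plain illustration of the AI Act's four-tier risk structure. The example
# systems and their tier assignments are simplified illustrations, not
# legal classifications.
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict obligations (conformity assessment, oversight, logging)"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

examples = {
    "social scoring by public authorities": Risk.UNACCEPTABLE,
    "AI assisting judges in researching and applying the law": Risk.HIGH,
    "customer-service chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

for system, risk in examples.items():
    print(f"{system}: {risk.name} -> {risk.value}")
```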
The choice of a regulation as a legal act is justified by the need for the uniform application of new rules, such as the definition of AI, the prohibition of certain harmful AI-based practices, and the classification of certain AI systems.
The European Parliament approved the AI Act in March 2024 and the Council followed suit, giving its approval in May 2024. On 12 July 2024, the European Union’s Artificial Intelligence Act, Regulation (EU) 2024/1689 (“EU AI Act”) was published in the EU Official Journal, making it the first comprehensive horizontal legal framework for the regulation of AI systems across the EU.
Regarding justice, the regulation provides that:
“Certain AI systems intended for the administration of justice and democratic processes should be classified as high risk, taking into account their potentially significant impact on democracy, the rule of law, and individual liberties, as well as the right to an effective remedy and an impartial tribunal. However, this characterization should not be extended to AI systems intended for purely auxiliary administrative activities which do not in practice affect the administration of justice in individual cases, such as the anonymization of judicial decisions, documents or data, communication between personnel, the performance of administrative tasks, or the allocation of resources.”
The direct application of a regulation, in accordance with Article 288 of the TFEU, will reduce legal fragmentation and facilitate the development of a single market for lawful, safe, and reliable AI systems.
All things considered, the intersection of AI and the legal field has already occurred, and the role of technology in daily practice is ever-expanding. It is important for the legal world to address the ethical issues surrounding AI and to ensure that the technology is used responsibly and in accordance with the AI Act. And if the past has taught us anything, it is that the profession has successfully adapted to technological change and can continue to do so. After all, let us not forget that justice is administered by humans, for humans, and that is irreplaceable.
Read also the article here, as published in Mondaq.