Jusletter IT

Ethical Artificial Intelligence in Judiciary

  • Author: Maria Dymitruk
  • Category: Articles
  • Region: Poland
  • Field of law: E-Government, E-Justice, Artificial Intelligence & Law
  • Collection: Conference proceedings IRIS 2019
  • Citation: Maria Dymitruk, Ethical Artificial Intelligence in Judiciary, in: Jusletter IT 21. February 2019
This paper analyses the admissibility of using artificial intelligence (AI) tools in the judiciary and considers the ethical aspects of applying AI in judicial proceedings, in particular whether an AI system is capable of taking over the role of a decision-maker in judicial proceedings, thereby replacing or supporting the judge. The paper presents five principles for the use of AI in judicial proceedings (adopted by the European Commission for the Efficiency of Justice) with a view to legal compliance, non-discrimination, transparency and the efficiency of legal proceedings.

Table of Contents

  • 1. Introduction
  • 2. Two sides of automation
  • 3. Development of AI in the judiciary – legal challenges
  • 3.1. Initiative of CEPEJ
  • 3.1.1. Principle of respect for fundamental rights
  • 3.1.2. Principle of non-discrimination
  • 3.1.3. Principle of quality and security
  • 3.1.4. Principle of transparency, impartiality and fairness
  • 3.1.5. Principle «under user control»
  • 4. Conclusion
  • 5. References

1. Introduction

[1]

For many years, scientists dealing with legal informatics and the computerization of the judiciary have been working to create legal solutions that meet the needs of a society increasingly shaped by rapid technological development. The tools that help achieve this include the application of artificial intelligence (AI) in the legal sphere, currently one of the most debated subjects among legal practitioners and computer scientists alike. AI and law researchers worldwide focus on creating and modelling AI systems in law or on introducing such systems to support the work of legal practitioners. My paper complements these efforts by analysing the ethical aspects of automating judicial proceedings using AI.

[2]

There is no widely accepted definition of artificial intelligence. It encompasses various automated problem-solving techniques for problems that cannot be resolved with simple algorithms. This paper deals only with «specialized AI», i.e. artificial intelligence methods optimized for one specific task (as opposed to «general AI», which is still considered science fiction). Therefore, «the artificial intelligence» referred to in the title of this paper shall be understood as any existing AI methods able to conduct the legal reasoning required to make a judgment in judicial proceedings. This includes, in particular, knowledge-based AI systems, machine learning systems or a combination of these methods.
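A minimal, hypothetical sketch may illustrate the distinction between these two families of methods. The same toy legal question (whether a claim is time-barred) is answered once by an explicitly programmed rule and once by a model that induces the boundary from example decisions; the limitation period, the data and the model choice below are all invented for illustration.

```python
# Toy contrast between a knowledge-based system and a machine learning system.
# All figures are invented; this is not a model of any real legal order.
from sklearn.tree import DecisionTreeClassifier

# 1) Knowledge-based approach: the legal rule is programmed explicitly.
LIMITATION_PERIOD_YEARS = 6  # assumed statutory limitation period (toy value)

def time_barred_rule(claim_age_years: float) -> bool:
    """Hand-coded rule reproducing the logic of legal reasoning."""
    return claim_age_years > LIMITATION_PERIOD_YEARS

# 2) Machine learning approach: the same boundary is learned from past outcomes.
past_claim_ages = [[1.0], [2.5], [4.0], [5.5], [6.5], [8.0], [10.0], [12.0]]
past_outcomes = [0, 0, 0, 0, 1, 1, 1, 1]  # 1 = claim was held time-barred

model = DecisionTreeClassifier(max_depth=1).fit(past_claim_ages, past_outcomes)

print(time_barred_rule(7.0))             # True, by the explicit rule
print(bool(model.predict([[7.0]])[0]))   # True, by a pattern induced from data
```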

[3]

Creating a well-functioning AI system capable of carrying out numerous adjudicating activities and providing a reasoning process is only one prerequisite for the successful application of AI in judicial proceedings. A detailed analysis of the admissibility of using AI in a particular justice system is needed before judges can be replaced or supported in their functions. It is difficult to imagine an AI system being introduced into the legal sphere without verifying its compliance with the requirements set by the national, European or international legal order. The automation of legal proceedings supported by an AI system must follow the legal provisions shaping the content and the form of judicial procedure in a particular justice system. Another aspect affecting the admissibility of AI in judicial proceedings pertains to citizens’ trust in the automated administration of justice – an understudied topic deserving additional in-depth analysis. Before any properly functioning «e-judge» system is created and introduced into a justice system, all potential advantages and risks of such automated legal proceedings need to be thoroughly examined.

2. Two sides of automation

[4]

There are two possible models of AI application that can be considered automation of judicial proceedings:

  • use of AI tools to create a system able to adjudicate legal cases unassisted (in this model the system would adjudicate instead of a human judge),
  • use of AI tools to create a judge-supporting system (in this model the system would only support the human judge by finding relevant provisions, analysing the judicature and reviewing the doctrine, and finally suggesting a decision to the judge).
[5]

Systems based on the second model are called «judicial decision support systems» (JDSS) in the literature. Both models are equally important (and potentially risky) from the point of view of my research. Even where a legal dispute is resolved by a human judge (supported by a computer program using artificial intelligence tools), there is a risk concerning the compliance of such solutions with the legal framework of judicial procedure. The JDSS model might seem neutral, as the decision-making process remains in human hands; from this perspective, it would appear to make no difference whether the judge uses such a tool or not. Despite appearances, it turns out that using AI only as a JDSS may have the same results as the full automation of judicial proceedings. This is connected with certain psychological effects of human work and the «persuasiveness» of judge-supporting systems. Jaap Dijkstra carried out a psychological experiment examining how lawyers respond to advice automatically generated by legal knowledge-based systems while resolving a legal case1. It turned out that participants:

  • had difficulty assessing the accuracy of the automatically generated advice, as they focused on the argumentation presented by the system and ignored alternative solutions,
  • placed too much trust in the system’s work and, as a result, carelessly accepted the system’s advice (including incorrect advice introduced into the experiment on purpose),
  • when advised by both the system and a human, considered the system’s advice «to be more objective and rational than the human advice» (even when the human’s advice was identical to the system’s).
[6]

As a result, the participants performing legal reasoning without the support of the system achieved better results than the participants using the system. The participants’ conduct results from a certain psychological reaction – a desire to avoid excessive effort when processing information. The research shows that people tend to use computer systems to reduce the effort of the decision-making process rather than to increase the quality of their own decisions2. It is therefore probable that the use of decision support systems in the judiciary would not improve adjudication, but would rather make it worse. Excessive reliance on decisions automatically generated by a JDSS may result in decisions about citizens’ legal issues actually being made by the computer program – despite the impression that all principles of the human adjudicating process are obeyed. Ignoring this fact in the legal analysis of using AI in the judiciary would reduce my research and any potential application of AI in the judiciary to the level of methodological and scientific carelessness.

[7]

The research presented above indicates that although there are two models of judicial proceedings automation (the model of replacing the human judge with a machine and the model of an AI system supporting the human judge), the analysis of their legal admissibility is convergent in some respects. In both cases, the effect of their work is alike: it is the system, not the human, that is the author of the judgment in a given legal case. This circumstance was also noted in the study «Algorithms and Human Rights – Study on the human rights dimensions of automated data processing techniques and possible regulatory implications» prepared in March 2018 by the Committee of Experts on Internet Intermediaries (MSI-NET) of the Council of Europe: «(...) while it may seem logical to draw a distinction between fully automated decision-making and semi-automated decision-making, in practice the boundaries between the two are blurred»; «(g)iven the pressure of high caseloads and insufficient resources from which most judiciaries suffer, there is a danger that support systems based on artificial intelligence are inappropriately used by judges to «delegate» decisions to technological systems that were not developed for that purpose and are perceived as being more «objective» even when this is not the case. Great care should therefore be taken to assess what such systems can deliver and under what conditions they may be used in order not to jeopardise the right to a fair trial»3.

3. Development of AI in the judiciary – legal challenges

[8]

In the legal sphere, AI systems are most frequently applied in advanced case-law search engines, online dispute resolution, assistance in drafting legal acts, predictive analytics, automated verification of legal compliance and legal aid chatbots4. The use of AI systems to support the work of legal practitioners was initially observed in the private sector (e.g., Ross [IBM] in the USA, Prédictice in France or Luminance in the UK). Recently, though, the efficient data processing offered by AI systems has been attracting increasing attention from governments and public authorities. As an example, the Brazilian project-in-progress VICTOR5 aims to support the Brazilian Supreme Court by analysing the lawsuits that reach the Court, using document analysis and natural language processing tools6. In Europe, Latvia is exploring the possibilities of using machine learning systems in the administration of justice7.
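To give a flavour of the document classification task that projects such as VICTOR address, the sketch below classifies toy court documents by type. It is a deliberately simplified stand-in: the cited project uses convolutional neural networks, whereas this example substitutes a TF-IDF bag-of-words representation with a linear classifier, and all documents and labels are invented.

```python
# Hypothetical sketch of document type classification; not VICTOR's actual model.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_docs = [
    "the appellant requests review of the lower court judgment",
    "notice of appeal filed within the statutory deadline",
    "power of attorney granted to counsel for the defendant",
    "counsel is hereby authorised to represent the plaintiff",
]
train_labels = ["appeal", "appeal", "power_of_attorney", "power_of_attorney"]

# TF-IDF turns each document into a weighted word-count vector;
# the linear classifier then learns which words indicate which type.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_docs, train_labels)

print(classifier.predict(["the appellant filed a notice of appeal"]))
# -> ['appeal']
```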

[9]

The public use of AI systems has had varying degrees of success; some of the best-known – and fairly controversial – examples include COMPAS, the US Correctional Offender Management Profiling for Alternative Sanctions. This risk-assessment algorithm, created and used to predict potential hot spots of violent crime and assess the risk of recidivism, even though highly efficient, ran a high risk of racial profiling and raised questions about non-discrimination. Similarly, HART (the Harm Assessment Risk Tool) – an AI-based technology created to help the UK police make custodial decisions based on recidivism risk assessment – has been described as reinforcing bias. In both instances, consideration of efficiency criteria in the use of AI seems to have overruled the ethical and human rights aspects8. In the age of information technology and the rapidly growing popularity of AI systems, a thorough analysis of the legal compliance and social consequences of their use remains a challenge for both scholars and policy-makers.

3.1. Initiative of CEPEJ

[10]

On 3 – 4 December 2018, during its 31st plenary meeting in Strasbourg, the European Commission for the Efficiency of Justice (CEPEJ) adopted the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment. CEPEJ, a Council of Europe expert body, acknowledged the increasing importance of artificial intelligence in modern societies and the benefits expected when it is fully used at the service of the efficiency and quality of justice, and formally adopted five fundamental principles for the use of AI in the judicial system and its environment9:

  • principle of respect for fundamental rights,
  • principle of non-discrimination,
  • principle of quality and security,
  • principle of transparency, impartiality and fairness,
  • principle «under user control».
[11]

The addressees of the Charter are public and private stakeholders responsible for the design and deployment of AI tools and services that involve the processing of judicial decisions and data. The authors of the Charter emphasize that the five principles are also intended for public decision-makers responsible for creating the legislative or regulatory frameworks for the use of AI in the legal sphere. Obviously, the guidelines of the Charter cover not only cases of AI application in the administration of justice (discussed in this paper) but – for the most part – private initiatives of business entities in the field of legal data processing. However, with respect to the automation of judicial proceedings, it must be noted that systems used by public authorities should be required to meet even higher standards of security and respect for human rights than those created by private entities.

3.1.1. Principle of respect for fundamental rights

[12]

According to the first principle of the Charter, all AI systems used in the legal sphere must be designed and implemented with respect for fundamental rights guaranteed, inter alia, by the European Convention on Human Rights (ECHR)10. In this respect, the most important issue is to ensure the right to a court in automated judicial proceedings. In accordance with Article 6(1) of the ECHR: «In the determination of his civil rights and obligations or of any criminal charge against him, everyone is entitled to a fair and public hearing within a reasonable time by an independent and impartial tribunal established by law (…)». Similar guarantees of the right of access to justice have been established in Article 47 of the Charter of Fundamental Rights of the European Union11, Article 14 of the United Nations (UN) International Covenant on Civil and Political Rights12 and Article 10 of the UN Universal Declaration of Human Rights13.

[13]

Focusing on civil proceedings, access to justice enables individuals to protect themselves against infringements of their civil rights and to remedy civil wrongs. Its core elements include effective access to a dispute resolution body, the right to fair proceedings, the timely resolution of disputes and the general application of the principles of efficiency and effectiveness to the delivery of justice14. Hence, access to justice in civil procedure encompasses a number of rights, such as the requirement that proceedings end within a reasonable time. Excessive delays can undermine respect for the rule of law and prevent access to justice. The automation of civil proceedings may become a very effective tool for disburdening judges of excessive, repetitive activities, improving the analysis of the judicature, accelerating judicial proceedings and, as a result, increasing the efficiency of the judiciary. Since justice delayed is justice denied, one should not ignore the efficiency potential that the creators of AI systems offer the judiciary.

[14]

However, the analysis of the use of AI in the judiciary should not be limited to considerations of efficiency. The key to the proper automation of judicial proceedings (including civil proceedings) is a comprehensive approach that also includes an analysis of potential limitations and risks. AI systems in the judiciary raise doubts about respect for human rights. One of the elementary requirements is to ensure proper procedural guarantees for the parties to automated proceedings. Research on procedural justice15, one of the basic requirements of a human rights-oriented perspective, has been conducted since the 1970s, when John Thibaut and Laurens Walker published the results of their research. They showed that, from the point of view of the person whom a decision concerns, the manner in which the decision is taken is sometimes more important than its content16. They pointed out that despite an extremely strong desire to win a court trial, people are ready to accept defeat if they feel that the decision was taken as a result of a fair decision-making process. The elements of a fair decision-making process include, inter alia:

  • the right to a hearing for the person whom the judgment will concern (presentation of one’s position, the right to be heard),
  • respect (the person whom the judgment will concern should feel that she is treated by the adjudicating entity with dignity),
  • impartiality (a sense that the decision-making process is reliable, including confidence that the adjudicator is neutral and trustworthy),
  • clarity (a feeling that the language, the content of the decision, and the rights and obligations are clear and understandable)17.
[15]

If the use of AI in the judiciary is to be considered, all of these principles must be observed in the course of automated proceedings.

3.1.2. Principle of non-discrimination

[16]

The second principle of the Charter states that the development or intensification of any discrimination between individuals or groups of individuals by AI systems must be specifically prevented. The previously mentioned examples of experimental use of AI tools in the COMPAS and HART systems perfectly illustrate the problem of bias in algorithmic profiling. In a Big Data world, using algorithms to improve decision-making processes seems obvious (given the possibility of accelerating these processes and increasing their effectiveness). However, previous experience with the practical use of these systems has shown that this approach may have discriminatory and deterministic results. Using apparently neutral statistical data showing that some African-American individuals are more often involved in criminal acts, the American COMPAS system produced a higher risk factor for the entire African-American population. Even though such systems are not designed to discriminate against anyone, an approach based only on statistics and machine learning has led to the denial of the idea of legal individualization.

[17]

Goodman and Flaxman reasonably sum the problem up: «machine learning can reify existing patterns of discrimination – if they are found in the training dataset, then by design an accurate classifier will reproduce them. In this way, biased decisions are presented as the outcome of an «objective» algorithm»18. According to Article 21(1) of the Charter of Fundamental Rights of the European Union, any discrimination based on any ground such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, membership of a national minority, property, birth, disability, age or sexual orientation shall be prohibited. A similar regulation is included in Article 14 of the ECHR.
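The mechanism Goodman and Flaxman describe can be reproduced in a few lines. In the invented dataset below, membership of a protected group correlates with the historical «high risk» label; a classifier fitted to this data then assigns different risk estimates to two otherwise identical individuals. This is a hypothetical sketch, not a reconstruction of COMPAS or HART.

```python
# Hypothetical sketch: a classifier reifying bias present in its training data.
from sklearn.linear_model import LogisticRegression

# Invented history: at the same number of prior offences, group 1 was
# labelled "high risk" (1) more often than group 0.
# Each row is [protected_group, prior_offences].
X = [[0, 0], [0, 1], [0, 2], [0, 3],
     [1, 0], [1, 1], [1, 2], [1, 3]]
y = [0, 0, 0, 1,
     0, 1, 1, 1]

model = LogisticRegression().fit(X, y)

# Two individuals identical except for group membership:
print(model.predict_proba([[0, 2]])[0][1])  # lower estimated risk
print(model.predict_proba([[1, 2]])[0][1])  # higher risk: the bias is reproduced
```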

[18]

In implementing the principle of non-discrimination, the European Ethical Charter on the Use of AI in Judicial Systems indicates three fundamental countermeasures. The first is preventive in character: the creation of multidisciplinary research teams whose task is to create systems without discriminatory tendencies. The second concerns situations where discrimination is identified in the course of the system’s operation. The authors of the Charter state: «(w)hen such discrimination has been identified, consideration must be given to corrective measures to limit or, if possible, neutralise these risks»19. The third remedy is general in character and relates more to the users of the system than to the system itself: raising awareness among stakeholders of the system’s limitations and the risks related to its practical use.

3.1.3. Principle of quality and security

[19]

The third principle of the Charter sets out guidance that the processing of judicial decisions and data should be:

  • based on certified sources and data,
  • created with models conceived in a multi-disciplinary manner,
  • used in a secure technological environment (ensuring integrity and intangibility of the system).
[20]

The availability of open judicial decision data is one of the fundamental requirements for the proper development of systems based on machine learning. From the Polish perspective, according to the 2018 CEPEJ Studies No. 26 «European judicial systems. Efficiency and quality of justice», Poland is one of the few countries in Europe that does not provide full access to case-law20. Admittedly, in 2012 the «Judgements Portal» was established – a public database of the judgments of Polish common courts. It allows non-registered users free online access to judgments accompanied by statements of reasons. Originally, the «Judgements Portal» was meant to include all judgments passed by Polish courts after the portal was established, excluding explicitly indicated exceptions (family matters, matters involving the protection of personal interests, matters related to mental health, etc.). However, it turned out that only a small number of court judgments are published. Judges (who decide on the publication of their judgments in the «Judgements Portal») do not publish judgments that are irrelevant from a legal or informational point of view or that are repetitive21. As a result, as of 21 December 2018, the «Judgements Portal» contained 284 688 judgments22, while Polish common courts pass around 7.5 million judgments annually23. Due to this lack of open data on the judicial decisions of Polish courts, any attempt to create properly working case-law analysis systems (also for the needs of automating judicial proceedings) may face serious difficulties.

[21]

It should be remembered that, in the case of access to open judicial decision data, the data entered into software implementing a machine learning algorithm should come from certified sources and should not be modified until it has actually been used by the learning mechanism. The whole process must therefore be traceable to ensure that no modification has occurred to alter the content or meaning of the decision being processed24. The authors of the Charter also deserve credit for their view that any AI system in the legal sphere should be created by multidisciplinary expert teams, on the basis of close cooperation between programmers, lawyers and representatives of the economic, social and philosophical sciences. Only such a broad approach can provide a proper analysis of the risks related to the creation and functioning of AI systems in the judiciary.
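As a sketch of what such traceability might look like in practice (the workflow, file name and case identifier below are assumptions for illustration, not requirements stated in the Charter), a cryptographic digest of each decision can be recorded at ingestion and re-checked immediately before the learning mechanism consumes the text:

```python
# Minimal integrity/traceability sketch using only the standard library.
import hashlib
import json

def fingerprint(text: str) -> str:
    """SHA-256 digest of a decision's text, used as an integrity check."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

# At ingestion: store a digest for each decision in an audit log.
ingested = {"case_001": fingerprint("Judgment text as obtained from the certified source")}
with open("audit_log.json", "w", encoding="utf-8") as f:
    json.dump(ingested, f)

# Immediately before training: refuse any decision whose text has changed.
def verify(case_id: str, current_text: str, log: dict) -> None:
    if fingerprint(current_text) != log[case_id]:
        raise ValueError(f"{case_id}: content modified since ingestion")

verify("case_001", "Judgment text as obtained from the certified source", ingested)
```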

3.1.4. Principle of transparency, impartiality and fairness

[22]

In the fourth principle of the Charter, the authors draw attention to the necessity of certification and auditing mechanisms for automated data processing techniques, and to the balance between the intellectual property of certain processing methods and the need for transparency (for example, open source code). However, the issue raised in relation to the fourth principle is much broader and relates to the requirement of AI systems’ transparency, widely discussed among theoreticians and practitioners. Artificial intelligence is often viewed as a black box. This view is particularly relevant to machine learning methods: knowledge-based AI systems are programmed with rules reproducing the logic of legal reasoning, while machine learning systems identify existing statistical patterns in the data and match them to specific results. For these reasons, machine learning techniques are blamed for opacity. It has frequently been argued that much of the use of algorithms in machine learning takes place without «understanding» causal relationships (correlation instead of causation), which may lead to bias and errors and raises concerns about data quality25.

[23]

Increased efforts to promote greater transparency and accountability are now needed in the area of AI systems in the judiciary, in order to anticipate future difficulties and draw attention to human rights concerns. There is still a need for further work on «Explainable Artificial Intelligence» and for initiatives such as FAT/ML (Fairness, Accountability, and Transparency in Machine Learning)26. Attempts to change machine learning algorithms are as important as critical analysis of the results of their use, raising public awareness in this respect and, as a result, starting the relevant public debate.
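One elementary transparency measure, applicable only to inherently interpretable models, is to report the weight that each input feature received, so that a user can see which inputs drive a prediction. The sketch below does this for a linear model fitted to invented data (the feature names and labels are assumptions); opaque models such as deep networks require the dedicated «Explainable Artificial Intelligence» techniques mentioned above.

```python
# Hypothetical sketch: per-feature weights as a simple form of explanation.
from sklearn.linear_model import LogisticRegression

feature_names = ["prior_offences", "months_since_last_offence", "age"]
X = [[0, 60, 45], [1, 24, 30], [3, 6, 22], [4, 3, 25],
     [0, 48, 50], [2, 12, 28], [5, 2, 21], [1, 36, 40]]
y = [0, 0, 1, 1, 0, 1, 1, 0]  # invented "recidivism" labels

model = LogisticRegression().fit(X, y)

# Sign and magnitude of each coefficient indicate the feature's influence.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name:28s} {coef:+.3f}")
```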

3.1.5. Principle «under user control»

[24]

The principle «under user control» concerns, among other things, the use of AI systems as tools supporting lawyers’ work. It requires that the system user (in the case of the automation of judicial proceedings, usually a judge) have unlimited control over the system. One of the fundamental requirements of this control is to give the user self-reliant and unlimited access to the data (including judicial decisions) used by the system to produce a result. Where the citizen is the system user, he must be informed in clear and understandable language whether or not the solutions offered by the artificial intelligence tools are binding, of the different options available, and of his right to legal advice and his right of access to a court.

[25]

The authors of the Charter emphasize the obligation to reliably inform the party to the proceedings about the character of an AI-generated decision. This information should be understood as broadly as possible. The conclusions regarding the «persuasiveness» of JDSS presented earlier in this article (section 2) show that judges, too, should be informed about the consequences of using AI systems in justice (including the psychological ones). Awareness of the persuasiveness of JDSS could decrease its negative effects by fostering a critical approach to automatically generated results, at the same time increasing judges’ sensitivity to the quality of the decision.

4. Conclusion

[26]

The rapid development of AI techniques today allows the creation of systems that may be able to resolve legal disputes (or at least some of them). The application of AI in the field of justice has the potential to revolutionize it by, inter alia, accelerating judicial proceedings, unifying the jurisprudence and increasing the cost-efficiency of the judiciary.

[27]

The question whether judicial proceedings should be automated can be answered as follows: yes, but on the condition that the AI system performs all assigned duties at least as well as a human judge. Automated judicial proceedings should be considered acceptable only if the functions performed by the AI tools are carried out at least as well as they are currently (including from an ethical perspective), and preferably much better. It is the intention of any judicial proceedings to regulate social relations by protecting those whose rights are violated and by refusing to defend the interests of entities not deserving protection. Due to the social mission of the judiciary, AI shall not be used to spread low-quality judgments that do not respect the fundamental rights of the individuals they concern. The purpose is to increase the quality of justice and its prosocial and pro-civic attitude.

5. References

[28]

Burdziej, Stanisław, Sprawiedliwość i prawomocność. O społecznej legitymizacji władzy sądowniczej, Toruń 2017, p. 22.

[29]

CEPEJ Studies No. 26 «European judicial systems. Efficiency and quality of justice», https://rm.coe.int/rapport-avec-couv-18-09-2018-en/16808def9c, access: 2018-12-23.

[30]

Correia da Silva, Nilton et al., Document type classification for Brazil’s supreme court using a Convolutional Neural Network, Proceedings of the Tenth International Conference on Forensic Computer Science and Cyber Law – ICOFCS 2018, São Paulo, Brazil, 2018, p. 7.

[31]

Dijkstra, Jaap, Legal Knowledge-based Systems: The Blind leading the Sheep?, International Review of Law, Computers & Technology (2001), Vol. 15, No. 2, pp. 119 – 128.

[32]

Goodman, Bryce/Flaxman, Seth, European Union regulations on algorithmic decision-making and a «right to explanation», In: Kim, Been/Malioutov, Dmitry/Varshney, Kush (Eds.), Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning – WHI 2016, New York, USA, 2016, p. 28.

[33]

Handbook on European law relating to access to justice, Publications Office of the European Union, Luxembourg 2016, https://www.echr.coe.int/Documents/Handbook_access_justice_ENG.pdf, access: 2018-12-23.

[34]

The Council of Europe Study DGI(2017)12 «Algorithms and Human Rights – Study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications» prepared in March 2018 by the Committee of Experts on Internet Intermediaries (MSI-NET), March 2018, https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5, access: 2018-12-23.

[35]

The European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment adopted by the CEPEJ during its 31st Plenary meeting, Strasbourg, 3-4 December 2018.

[36]

Thibaut, John/Walker, Laurens, Procedural Justice: A Psychological Analysis, Lawrence Erlbaum Associates, Inc., Hillsdale 1975.

[37]

Todd, Peter/Benbasat, Izak, The influence of Decision Aids on Choice Strategies: An Experimental Analysis of the role of cognitive Effort, Organizational Behavior and Human Decision Processes (1994), Vol. 60, Issue 1, pp. 36 – 74.

  1. Dijkstra, Legal Knowledge-based Systems: The Blind leading the Sheep?, International Review of Law, Computers & Technology (2001), Vol. 15, No. 2, pp. 119 – 128.
  2. Todd/Benbasat, The influence of Decision Aids on Choice Strategies: An Experimental Analysis of the role of cognitive Effort, Organizational Behavior and Human Decision Processes (1994), Vol. 60, Issue 1, pp. 36 – 74.
  3. The Council of Europe Study DGI(2017)12 «Algorithms and Human Rights – Study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications» prepared in March 2018 by the Committee of Experts on Internet Intermediaries (MSI-NET), March 2018 (https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5, access: 2018-12-23), hereinafter referred to as «the Council of Europe Study on the Human Rights Dimensions of Automated Data Processing», pp. 8, 12.
  4. Appendix I to the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their environment adopted by the CEPEJ during its 31st Plenary meeting (Strasbourg, 3-4 December 2018), hereinafter referred to as «the European Charter on the Use of AI in Judicial Systems» or «the Charter», p. 14.
  5. VICTOR is a project at the Brazilian Supreme Court, developed in partnership with the University of Brasília.
  6. Correia da Silva et al., Document type classification for Brazil’s supreme court using a Convolutional Neural Network, Proceedings of the Tenth International Conference on Forensic Computer Science and Cyber Law – ICOFCS 2018, São Paulo, Brazil, 2018, p. 7.
  7. Appendix I to the European Ethical Charter on the Use of AI in Judicial Systems, p. 14.
  8. The NGO ProPublica analysed COMPAS assessments and published an investigation claiming that the algorithm was biased (https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, access: 2018-12-23); the NGO Big Brother Watch in the UK criticised the HART system for «unfair and inaccurate decisions, and a «postcode lottery» of justice, reinforcing existing biases and inequality» (https://bigbrotherwatch.org.uk/wp-content/uploads/2018/07/Big-Brother-Watch-evidence-Policing-for-the-future-inquiry.pdf, access: 2018-12-23).
  9. The European Ethical Charter on the Use of AI in Judicial Systems, p. 5.
  10. Convention for the Protection of Human Rights and Fundamental Freedoms as amended by Protocols Nos. 11 and 14, supplemented by Protocols Nos. 1, 4, 6, 7, 12, 13 and 16, Rome, 4 XI 1950, hereinafter referred to as «ECHR».
  11. Charter of Fundamental Rights of the European Union (OJ C 326, 26.10.2012, pp. 391 – 407).
  12. International Covenant on Civil and Political Rights, adopted and opened for signature, ratification and accession by General Assembly resolution 2200A (XXI) of 16 December 1966, entry into force 23 March 1976, in accordance with Article 49.
  13. The Universal Declaration of Human Rights proclaimed by the United Nations General Assembly in Paris on 10 December 1948 (General Assembly resolution 217 A).
  14. Handbook on European law relating to access to justice, Publications Office of the European Union, Luxembourg 2016 (https://www.echr.coe.int/Documents/Handbook_access_justice_ENG.pdf, access: 2018-12-23), p. 17.
  15. In the philosophical and legal literature, the notions of distributive justice and procedural justice are distinguished. Distributive justice concerns the content of the decision, whereas procedural justice concerns the manner of making a decision and how the party to whom the decision pertains is treated by the decision-maker.
  16. Thibaut/Walker, Procedural Justice: A Psychological Analysis, Lawrence Erlbaum Associates, Inc., Hillsdale 1975.
  17. Burdziej, Sprawiedliwość i prawomocność. O społecznej legitymizacji władzy sądowniczej, Toruń 2017, p. 22.
  18. Goodman/Flaxman, European Union regulations on algorithmic decision-making and a «right to explanation», In: Kim/Malioutov/Varshney (Eds.), Proceedings of the 2016 ICML Workshop on Human Interpretability in Machine Learning – WHI 2016, New York, USA, 2016, p. 28.
  19. The European Ethical Charter on the Use of AI in Judicial Systems, p. 7.
  20. CEPEJ Studies No. 26 «European judicial systems. Efficiency and quality of justice», p. 218 (https://rm.coe.int/rapport-avec-couv-18-09-2018-en/16808def9c, access: 2018-12-23).
  21. https://www.ms.gov.pl/pl/sady-w-internecie/portal-orzeczen/, access: 2018-12-23.
  22. https://orzeczenia.ms.gov.pl/, access: 2018-12-28.
  23. https://isws.ms.gov.pl/pl/baza-statystyczna/publikacje/download,2779,0.html, access: 2018-12-28.
  24. The European Ethical Charter on the Use of AI in Judicial Systems, p. 8.
  25. The Council of Europe Study on the Human Rights Dimensions of Automated Data Processing, p. 37.
  26. http://www.fatml.org, access: 2018-12-23.