Jusletter IT

Transparency Requirements for Algorithms and AI, Wishful Thinking?

  • Author: Robert van den Hoven van Genderen
  • Category of articles: Juristische Informatik-Systeme & Legal Tech
  • Category: Articles
  • Region: EU
  • Field of law: LegalTech
  • Collection: Conference proceedings IRIS 2021
  • DOI: 10.38023/8cef2a86-69a1-4d93-9717-306295b60db3
  • Citation: Robert van den Hoven van Genderen, Transparency Requirements for Algorithms and AI, Wishful Thinking?, in: Jusletter IT 27 May 2021
Algorithms are the basic engine of AI and increasingly support and channel the outcome of decision-making in many different areas. Algorithms will drive many different applications, from industrial products to administrative and governmental decision-making. This includes relevant legal decisions that will require information for data subjects. The GDPR requires transparency and explainability of decision-making as well as information to the data subjects. In what respect and to what extent is this demand feasible? Must algorithms always be explainable, or do we accept that the black box remains black for certain functionalities? Is a separate legal framework for algorithms necessary? It is not always necessary to know what is under the hood of a car to drive...

Table of contents

  • 1. Algorithms and explainability
  • 2. Explainability
  • 3. Interpretation of Algorithms
  • 4. High level ethical requirements as part of legal requirements?
  • 5. The use of AI and algorithms by government and industries
  • 6. The actual use of algorithms for profiling and algorithm-based decision-making (ADS): use for risk assessment in criminal behaviour and fraud investigations
  • 7. Transparency and Compliance with EU and GDPR Requirements
  • 8. Technical transparency and explainability
  • 9. The GDPR
  • 10. Conclusion
The Algorithm’s coming-of-age as the new language of science promises to be the most disruptive scientific development since quantum mechanics.1
[1]

The use of artificial intelligence (AI) is a development that is considered ambivalent. On the one hand, it provides great opportunities to achieve results more efficiently and effectively by analyzing large amounts of different data (“big data”) using AI. On the other hand, there is a fear that the use of self-learning algorithms and AI will result in further damage to the already shrinking sphere of privacy. As stated in the recitals of the GDPR, the processing of personal data should be designed to serve mankind.2

“Currently, many algorithms function like ‘black boxes’.3 The algorithm gives answers but no explanations. This is bad news if you are refused a mortgage (‘algorithm says no’) or if the police arrests you (‘algorithm says yes’).”4
[2]

Transparency of algorithms is required as a fundamental principle for processing personal data, but there are countervailing interests: intellectual property rights, trade secrets and, interestingly enough, the privacy of others are to be balanced against the transparency requirements. And of course there is the general exception of Article 8(2) ECHR on national security. In the introduction to a study for the Ministry of Justice and Security in the Netherlands, the researchers state that:

“transparency will need to be optimized; transparency will need to be balanced with security. The Ministry needs to be open and transparent ‘where possible’ and to provide security and safety ‘where needed’.”5
[3]

So the big question is to what extent transparency of algorithms is feasible, necessary and even possible. To what level do we need and want transparency? And is the European (GDPR) perspective on the requirement for transparency of algorithms wishful thinking?

1.

Algorithms and explainability ^

[4]

Although there are many definitions of what an algorithm really is, I would like to use the following description: a set of mathematical instructions or rules that a computer follows to calculate or process data in order to solve a problem or produce a certain result.

[5]

The research report to the European Parliament by Claude Castelluccia and Daniel Le Métayer gives a broader description, including the working of the algorithm. The authors describe an algorithm as an unambiguous procedure to solve a problem or a class of problems.6 It is typically composed of a set of instructions or rules that take some input data and return outputs. As an example, a sorting algorithm can take a list of numbers and proceed iteratively, first extracting the largest element of the list, then the largest element of the rest of the list, and so on, until the list is empty.7 Algorithms can be combined to develop more complex systems, such as web services or autonomous cars. An algorithm can be hand-coded, by a programmer, or generated automatically from data, as in machine learning.8
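Such a sorting procedure can be sketched as follows (a hypothetical Python rendering of the iterative description above, not code from the cited report):

```python
def sort_descending(numbers):
    """Sort a list by repeatedly extracting its largest element."""
    remaining = list(numbers)   # work on a copy of the input data
    result = []
    while remaining:            # repeat until the list is empty
        largest = max(remaining)
        remaining.remove(largest)
        result.append(largest)
    return result

print(sort_descending([3, 1, 4, 1, 5]))  # [5, 4, 3, 1, 1]
```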

[6]

“machine learning refers to an automated process of discovering correlations (sometimes alternatively referred to as relationships or patterns) between variables in a dataset, often to make predictions or estimates of some outcome.”9
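As a minimal sketch of this definition, assuming invented data points: an ordinary least-squares fit “discovers” a linear correlation between two variables and uses it to predict an outcome.

```python
# Fit a line y = a*x + b to observed data, then predict (hypothetical data).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]   # input variable
ys = [2.1, 3.9, 6.2, 8.1, 9.8]   # outcome variable

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope and intercept via ordinary least squares
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
    sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

print(f"discovered correlation: y ~= {a:.2f}*x + {b:.2f}")
print("prediction for x=6:", a * 6 + b)
```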

2.

Explainability ^

[7]

In the Guidelines of WP 29 and the EDPB, reference is made to recital 39 of the GDPR concerning data processing: “It should be transparent to natural persons that personal data concerning them are collected, used, consulted or otherwise processed and to what extent the personal data are or will be processed. The principle of transparency requires that any information and communication relating to the processing of those personal data be easily accessible and easy to understand, and that clear and plain language be used” (...) to ensure fair and transparent processing in respect of the natural persons concerned and their right to obtain confirmation and communication of personal data concerning them.10 This explanation is accentuated in the comments on recital 42 GDPR, albeit compared to the explanation of transparency in consumer contracts: “The requirement for clear and plain language means that information should be provided in as simple a manner as possible, avoiding complex sentence and language structures. The information should be concrete and definitive; it should not be phrased in abstract or ambivalent terms or leave room for different interpretations. In particular the purposes of, and legal basis for, processing the personal data should be clear.”11

[8]

The problem with creating transparency and explainability of algorithms is that there are different addressees for this requirement. If the transparency of the technical functionality and decision tree has to be explained, this is a task for the experts, the programmers and developers. The experts should be capable of making the technical structure of the algorithm logically comprehensible to the user/controller. The user or data subject should be informed about the function, the consequences and the effects, but it is in my view not always required to know the in-depth processing aspects of the algorithm. The key question is how deep the explainability must go according to the legal requirements.

[9]

As stated in the study for the European Parliament: explanations (of the working of algorithms, RHG) can be of different types (operational, logical or causal); they can be either global (about the whole algorithm) or local (about specific results); and they can take different forms (decision trees, histograms, picture or text highlights, examples, counterexamples, etc.).12

[10]

But even if the algorithm is transparent, the outcome can differ as a result of the data input. A black box can be turned into a crystal box, but one has to understand what one sees. The working of an advanced algorithm can be compared with the data processing of a human child: education, genetic information and external influences make it impossible to predict the output. A deep learning, or certainly a self-learning, algorithmic system will produce a comparably unpredictable output in reaching its goal.
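A minimal, hypothetical sketch of this data dependence: the same fully transparent learning rule yields different decision boundaries from different training data, so transparency of the rule alone does not make the outcome predictable.

```python
# The same transparent learning rule, two different "life experiences"
# (training sets). All numbers are invented for illustration.
def train_threshold(examples):
    """Learn a cut-off as the midpoint between the class averages."""
    pos = [x for x, label in examples if label]
    neg = [x for x, label in examples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

data_a = [(1, False), (2, False), (8, True), (9, True)]
data_b = [(1, False), (6, False), (8, True), (9, True)]

# Identical algorithm, different decision boundaries:
print(train_threshold(data_a))  # 5.0
print(train_threshold(data_b))  # 6.0
```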

[11]

The other problem is that algorithms are no guarantee against bias. They process the given or available data, which is based on human-generated statistics and data files. One could simplify the problem statement as “bias in, bias out”.13 Awareness of these risks of the use of algorithms within decision-making became a point of attention for the European Commission, by means of an in-depth study, following a proposal of the European Parliament considering that:

“Algorithmic transparency is an important safeguard for accountability and fairness in decision-making. From various AI applications to ranking results in search engines, algorithms govern the way we access information online. This has large implications for consumers and businesses in areas such as online platforms. Understanding algorithmic transparency in an in-depth manner is key for informed policy-making”. 14
[12]

With reference to Lawrence Lessig, it is not just technology and law but also the market, architecture (nature) and social norms that ultimately determine how to cope with implementation in society.15 I would add to, or specify, Lessig’s constraints that the interpretation of laws on technology also strongly depends on the cultural and political situation. As a result of these constraints, the interpretation of other rights and obligations covering fundamental rights, privacy, trade secrets, (national) security, the public interest, etc. will also limit the possibility of transparency and explainability of algorithms. This will not only be the case concerning the use of algorithms in the processing of personal data but for any other application of algorithms in our algorithm-dependent society.

3.

Interpretation of Algorithms ^

[13]

In the case of interpretability, the point is that the outcome of an algorithm should be explained in comprehensible language, but what is comprehensible, and for which parties?16 Technical transparency can contribute to a certain extent, but it also has its limits. The problem is one of choice: which variables will be taken into account? Some algorithms can include thousands of features in their reasoning process. The outcome of the algorithm is then based on so many factors that it is difficult to see which factors were decisive for the outcome. This also makes it difficult to understand or influence the conclusion. Secondly, some models used with self-learning algorithms are inherently opaque. This is particularly the case with deep learning neural networks. It will be easier to analyse afterwards which information an algorithm based its outcome on.
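A hypothetical sketch of this problem of choice: even a fully transparent linear model over many variables does not by itself reveal which factors were decisive for a particular outcome; the per-decision contributions still have to be computed explicitly (all names and numbers below are illustrative assumptions).

```python
import random

random.seed(0)
NUM_FEATURES = 1000

# A fully transparent model: a linear score over many features.
weights = [random.uniform(-1, 1) for _ in range(NUM_FEATURES)]
applicant = [random.uniform(0, 1) for _ in range(NUM_FEATURES)]

score = sum(w * x for w, x in zip(weights, applicant))
decision = "approve" if score > 0 else "reject"

# Per-feature contributions: even with full access to the model,
# no single factor explains the outcome on its own.
contributions = sorted(
    ((w * x, i) for i, (w, x) in enumerate(zip(weights, applicant))),
    reverse=True,
)
print(f"decision: {decision} (score {score:.2f})")
print("top 3 contributing features:",
      [(i, round(c, 3)) for c, i in contributions[:3]])
```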

[14]

The bottleneck in the development of AI now is crossing from correlation to causality. Take a classic example: when ice cream sales rise significantly, the number of people drowning in pools and accidents at the beach escalate as well. But ice cream probably does not cause drowning. With AI technology in operation, we can only know the result, but do not know how and why this result is reached. This is the difficulty of explaining relevance.
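A minimal sketch of the ice cream example, assuming hypothetical data in which temperature drives both variables: the two series correlate strongly although neither causes the other.

```python
import random

random.seed(1)

# Hypothetical data: temperature drives both ice cream sales and
# swimming (and hence drownings); the two correlate without one
# causing the other.
temps = [random.uniform(10, 35) for _ in range(365)]
ice_cream = [5 * t + random.gauss(0, 10) for t in temps]
drownings = [0.2 * t + random.gauss(0, 1) for t in temps]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    norm_x = sum((x - mx) ** 2 for x in xs) ** 0.5
    norm_y = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (norm_x * norm_y)

# Strong correlation, yet no causal link between the two variables.
print(f"correlation(ice cream, drownings) = {pearson(ice_cream, drownings):.2f}")
```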

[15]

Wachter et al. argue that “explainability” need not rely upon general public understanding of how algorithms function, but should be oriented towards explaining the purpose of the use of the algorithms, more in the sense of instructing the subjects to understand the goal of the process.17 In so far as explanation is necessary, they suggest three aims for explanations: to help understand the decision; to inform the data subject of the grounds and the possibility to contest the decision; and to create a possibility to change the decision-making model in the future.
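A toy sketch in the spirit of such counterfactual explanations (the loan model, threshold and step size are invented for illustration): instead of explaining the model’s internals, it reports the smallest change to an input that would flip the decision.

```python
# Hypothetical loan model: True means approved.
def loan_model(income, debt):
    return income - 2 * debt > 30_000

def counterfactual_income(income, debt, step=500):
    """Raise income until the decision flips; report the difference."""
    needed = income
    while not loan_model(needed, debt):
        needed += step
    return needed - income

income, debt = 40_000, 10_000
if not loan_model(income, debt):
    delta = counterfactual_income(income, debt)
    print(f"Denied. Had your income been {delta} higher, "
          f"the loan would have been approved.")
```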

[16]

At a conference on the regulation of algorithms in Kyushu, Japan, Professor Burkhard Schafer suggested the possibility of “algorithmic apologies” as the mirror image of Wachter’s counterfactual explanations.18 Putting the analysis in the context of recent Scottish legislative initiatives that created a legal framework for “apologies”, he argues that “apologies” as a form of explanation are technologically feasible and fulfil a distinct legal purpose.19

[17]

Also, the Dutch Minister for Legal Protection admits that, certainly with deep learning algorithms, the input data can be variable rather than fixed, so that it is difficult to determine to what extent this causes a change in the output. He states that such analysis will be a very time-consuming process. Referring to the fact that all parties in society (companies, scientists, governments) still need to be credible in their decision-making and that processes need to demonstrate the importance of traceable results of algorithms, he also leans on ex-post explanation. Against this background, he states, when algorithms are explained the focus is on describing the goal that the algorithm aims to achieve, which variables have been decisive for the outcome, the type of data used, and any decision rules. Naturally, this does not alter the fact that, for instance in justice and criminal procedures as well as in relation to national security, there may be grounds in the light of existing laws and regulations to limit the degree of interpretability.20 He does not, however, explain how far this limitation stretches, for instance whether it should also apply when algorithms are used to make predictions or to estimate risks.

4.

High level ethical requirements as part of legal requirements? ^

[18]

Inherent to the character of ethical principles, the ethical requirements for the programming and use of algorithms can be expected to offer another escape from more concrete specifications. Every self-respecting international organisation (the Council of Europe, the OECD and the EU) has issued guidelines for the ethical use of AI and/or algorithms. For the sake of clarity, I will limit myself to the ethical guidelines of the European Group on Ethics in Science and New Technologies (Ethical Principles and Democratic Prerequisites) and of the Council of Europe, be it that they are directed at the use of AI in general, not specifically at the use of algorithms.21 The literature and the perspectives of organisations, though, generally mix the use of AI and the underlying algorithms.

[19]

As is to be expected, the European Ethical Guidelines on AI state at a very high level that AI should be robust, fair and trustworthy. According to the Guidelines, trustworthy AI should be:

(1) Lawful – respecting all applicable laws and regulations;

(2) Ethical – respecting ethical principles and values;

(3) Robust – from a technical perspective while taking into account its social environment.

[20]

This would result in the following sub requirements for ethics:

  1. Human dignity: respect for the human values in the use of algorithmic processing
  2. Autonomy: of humans (not AI entities!)
  3. Responsibility: AI research and application to align with a plurality of fundamental human values and rights
  4. Justice, equity, and solidarity: non-discriminatory equal access to “autonomous” technologies and fair distribution of benefits and equal opportunities across and within societies
  5. Democracy: publicly informed key decisions on the regulation of AI development
  6. Rule of law and accountability: unbiased access to justice and the right to redress and a fair trial
  7. Security, safety, bodily and mental integrity
[21]

The Council of Europe, to be precise the Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, proposes in its Guidelines a comparable set, logically oriented more specifically towards the aspect of privacy, as it concerns Convention 108:

[22]

The key elements of this approach are:

  • lawfulness, fairness, purpose specification, proportionality of data processing, privacy by design and by default, responsibility and demonstration of compliance (accountability);
  • transparency, data security and risk management;
  • AI applications should allow meaningful control by data subjects over the data processing and related effects on individuals and on society.22
[23]

Is this enough to instruct governments and industry to use AI in a human-friendly and respectful way, and could it also be considered to contain elements for regulating the development and use of algorithms? There is a broad range of ethical aspects that could be taken into consideration but need not be engraved in legal norms. As stated before, it all depends on the use and functionality of the algorithmic processing of personal data. Use in governmental decision-making or as part of court decisions requires a different explanation than the use of data for surgery or autonomous vehicles. Therefore, it is necessary to have knowledge of the technical working of algorithms as well as of the influence the use of algorithms has on our daily life. This influence always has two sides: algorithms can enhance knowledge and improve efficient and well-informed decision-making, creating better results and fewer faults. On the other side, the use of algorithms can have a negative influence on our privacy, create dependency on those applications, create bias, and disturb social contacts and the transparency of decision-making and legal procedures.

[24]

Therefore, the European Commission strives for algorithm awareness building, considering algorithmic transparency, as quoted above, an important safeguard for accountability and fairness in decision-making, and an in-depth understanding of it key for informed policymaking.23

[25]

In the following paragraphs I will scrutinize the necessity of legal requirements for transparency of algorithms and the means of actually controlling the technical working of algorithms and the transparency of the process, in the sense of being understandable for the data subject.

5.

The use of AI and algorithms by government and industries ^

[26]

There is a natural tendency to use any new technology to improve control of the public, be it for governing or for selling products. Airbnb, Google, LinkedIn, Tesla, Microsoft and hundreds of other companies use algorithms to profile their customers in order to “create better services”. Will it be possible to create a legal framework that takes into account the ethical aspects as well as practical measures to scrutinize the development and application of algorithms in a broad sense? Or is the variety of applications and uses of algorithms inherently so great that only high-level principles can provide a guideline for how to deal with an “algorithmized” society? Too many rules could be detrimental, working as a “chilling factor” on the further development and use of algorithms. To regulate this multi-differential, multi-technical and multi-functional engine for AI systems, robots and human behaviour is a very difficult task. Fear of the unknown should in any case not be a guideline for restrictive regulations on the development and application of algorithms. However, the prevention of negative aspects of the application of algorithmically supported decision-making must be taken into account.

6.

The actual use of algorithms for profiling and algorithm-based decision-making (ADS): use for risk assessment in criminal behaviour and fraud investigations ^

[27]

Government and business are happy to resort to the use of AI for profiling citizens and customers. For this purpose, (deep/self-)learning algorithms are fed with (big) data, which form the basis of the system. Analyzing those data leads to results, which lead to further analysis, which in turn leads to results that form the basis for decision-making.

[28]

Well-known examples of algorithm-based decision-making (ADS) support are face recognition, license plate registration and population screening based on background and behavioural data of citizens. Systems in the Netherlands such as SyRI (System Risk Indication), in which the government combines the personal data of citizens to detect various forms of fraud, abuse and violations, seem to be overshooting their goal.

On 5 February 2020, the District Court of The Hague ruled that the SyRI (System Risk Indication) legislation is contrary to the European Convention on Human Rights.24 This case was brought by a large number of civil society organizations against the State of the Netherlands’ use of data linking and analysis with the help of algorithms to detect and combat fraud in a number of “risk areas”. The court ruled that there was an insufficient balance between the use of new technologies such as AI, data analysis, algorithms, and deep learning or self-learning systems on the one hand, and respect for private life as set out in Article 8 ECHR on the other. According to the court, there is also a risk of discrimination. The law is insufficiently transparent and verifiable and therefore unlawful. This could be a (European) landmark case.

[29]

In the United States and China, the use of ADS in support of governmental decision-making is even more widespread. The well-known case State v. Loomis, concerning the use of the “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), is a good example of the (negative) effect of algorithm-based decision-making. In 2013, the Circuit Court for La Crosse County, Wisconsin charged Eric Loomis with attempting to flee a traffic officer and operating a motor vehicle without the owner’s consent.25 The trial court based its decision in part on a COMPAS risk assessment, which indicated that Loomis might have an extremely high risk of reoffending, and sentenced Loomis to six years of imprisonment and five years of extended supervision. Loomis challenged the court’s use of COMPAS in deciding his sentence on two grounds: his due process rights and the consideration of gender at sentencing. In 2016, the Supreme Court of Wisconsin dismissed Loomis’ appeal, stressing the positive functionality of the use of COMPAS: “COMPAS has the potential to provide sentencing courts with more complete information to address enhanced need.” Concerning the question of due process, the court suggested that only when the risk assessment score was the determinative or sole factor the judge considered should such a claim succeed. Moreover, the court concluded that Loomis could challenge the data that was put into the algorithmic system: because the data was publicly available, such as administrative data, public records, self-reporting and interviews with the defendant, Loomis could correct it and determine the accuracy of his risk assessment score.26 As to the question of gender, the court found that the use of gender as a factor in the assessment served to promote accuracy, with no discriminatory intention, so there was no violation of Loomis’ constitutional rights. Though the court did not accept Loomis’ claims, it expressed concern about the future use of COMPAS and outlined permissible situations for its use. It is interesting that the case raised attention throughout society and triggered a series of debates over whether it is appropriate to use such algorithms at sentencing in the criminal justice system. Although risk assessment will not be banished, the relativity of its outcome was recognized.

[30]

In the study for the European Parliament on the effects of the deployment of ADS it is concluded that this process threatens privacy and data protection in many different ways. The first is related to the massive collection of personal data required to train the algorithms. Even when no external attack has been carried out, the mere suspicion that one’s personal data is being collected and possibly analysed can have a detrimental impact on people. For example, several studies have provided evidence of the chilling effect resulting from fear of online surveillance. Altogether, large-scale surveillance and scoring could narrow the range of possibilities and choices available to individuals, affecting their capacity for self-development and fulfilment. Scoring also raises the fear that humans are increasingly treated as numbers and reduced to their digital profile. Reducing the complexity of human personality to a number can be seen as a form of alienation and an offence to human dignity.27

7.

Transparency and Compliance with EU and GDPR Requirements ^

Transparency

[31]

How is transparency considered in European law in general, and how important is transparency from a European perspective? The WP 29 refers to the principle as one of the fundamental principles of the European legal system. Article 1 of the TEU refers to decisions being taken “as openly as possible and as close to the citizen as possible”. Article 11(2) states that “The institutions shall maintain an open, transparent and regular dialogue with representative associations and civil society”; and Article 15 of the TFEU refers amongst other things to citizens of the Union having a right of access to documents of Union institutions, bodies, offices and agencies, and to the requirement that those Union institutions, bodies, offices and agencies ensure that their proceedings are transparent. Transparency is thus considered to create trust towards the citizens of the EU. It is further considered an element of the principle of fairness for the processing of personal data as laid down in Article 8 of the Charter of Fundamental Rights of the European Union.

8.

Technical transparency and explainability ^

[32]

If we look at the extent to which algorithms can be made transparent in a more general sense, it is important to realize that transparency is not an end in itself, but a means: transparency can contribute to people’s confidence in the use of algorithms by the government, and to the possibility of resisting an outcome of their use.

[33]

The General Data Protection Regulation sets rules for the processing of personal data. Although European governments are supposed to handle the personal data of their citizens with care, the GDPR requirements for processing personal data, whereby processing must be lawful, fair and transparent, are not always met. The data subject (that is, the person whose personal data is being processed) has the right to know the logic behind the processing. In addition, the person has the right not to be subject to profiling that leads to a decision with legal consequences.

[34]

The advisory group on data protection and privacy, WP 29 (now the European Data Protection Board, EDPB), issued guidelines concerning the requirements of transparency, probably because these requirements were not so transparent.28

[35]

According to the EDPB, these (revised) guidelines should provide practical guidance and interpretation from the Article 29 Working Party (WP29) on the new obligation of transparency concerning the processing of personal data under the General Data Protection Regulation (the “GDPR”).

[36]

According to the Guidelines:

Transparency is an overarching obligation under the GDPR applying to three central areas: (1) the provision of information to data subjects related to fair processing; (2) how data controllers communicate with data subjects in relation to their rights under the GDPR; and (3) how data controllers facilitate the exercise by data subjects of their rights.29

[37]

The technical working of the processing as such is not mentioned, but could be considered included under an ample interpretation of the first-mentioned area. Even more strongly, a note to this introduction explains that the Guidelines set out general principles in relation to the exercise of data subjects’ rights rather than considering specific modalities for each of the individual data subject rights under the GDPR. This is recognized by WP 29 itself, as the Working Party already notes that:

these guidelines cannot address the nuances and many variables which may arise in the context of the transparency obligations of a specific sector, industry or regulated area. However, these guidelines are intended to enable controllers to understand, at a high level, WP29’s interpretation of what the transparency obligations entail in practice and to indicate the approach which WP29 considers controllers should take to being transparent while embedding fairness and accountability into their transparency measures.

[38]

Thus these guidelines, too, can be considered rather high-level, but we will see whether they can be of relevance to the specific position of algorithms, given the general opinion of many lawyers and scholars that the GDPR provides a clear requirement for the transparency of algorithms.

9.

The GDPR ^

[39]

Although transparency is not defined in the GDPR, recital 39 provides: “It should be transparent to natural persons that personal data concerning them are collected, used, consulted or otherwise processed and to what extent the personal data are or will be processed. The principle of transparency requires that any information and communication relating to the processing of those personal data be easily accessible and easy to understand, and that clear and plain language be used (…) to ensure fair and transparent processing in respect of the natural persons concerned and their right to obtain confirmation and communication of personal data concerning them.”30

[40]

The leading principle of the GDPR is that personal data must be processed in a manner that is transparent with regard to data subjects (Article 5, first paragraph, under a). This principle is elaborated in various information obligations. A general framework for these can be found in Article 12 of the GDPR. If a government department processes personal data using algorithms, it follows from that article that it must provide information about this processing that is concise, transparent, comprehensible and easily accessible. This information must be provided in clear and simple language, especially if the information is intended for children. If the processing takes the form of automated decision-making, the GDPR also requires the data processor to inform the data subject. At least if this decision-making is based on profiling as referred to in Article 22, it must also provide meaningful information about the underlying logic and the significance and the expected consequences of that processing for the data subject (Article 13, second paragraph, under f, and Article 14, second paragraph, under g, GDPR). The clearest requirement for transparency and explainability of the processing mechanism is given in Article 22 of the GDPR as described in recital 71:

“The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention. (…)

[41]

In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision.”

[42]

The problem is that data subjects probably do not know that they are subjected to algorithms with legal consequences, because the data processing activities take place beneath the noticeable surface.

10.

Conclusion ^

[43]

The impression is created that the GDPR controls the application of algorithms. However, the GDPR does not specifically regulate the operation and application of algorithms. The words algorithm and artificial intelligence (AI) appear neither in the regulation nor in its recitals. This “high level” (meaning “vague”) regulation is the result of a lengthy process, started at a time when there was hardly any thought about AI. The GDPR sets requirements for the processing of personal data in a general sense, with the aim of technology neutrality. The GDPR stipulates, among other things, that the processing must be lawful, fair and transparent. The explanation of Article 22 refers to the wish to make the processing (by use of algorithms) transparent and explainable so that a party (the processor) can be held accountable for the result if this result has (legal) consequences for the data subject.

[44]

It is argued that the GDPR includes the requirement of fairness and social justice for the processors of personal data. This is “wishful thinking”. It has to be considered what requirements are possible and realistic in using algorithms and AI for the processing of personal data, and for which purposes. Only then can a credible regulation for the use of algorithms and AI form the basis for personal data processing in an international environment. The question is whether it is necessary to explain the working of the algorithm for this purpose. It is more important to know which data is selected and what the result of the processing is. The weighing of the data, as in the COMPAS case, is important, as is the result, in the sense that the subject wants to know what its influence on the decision-making is, to be explained by the decision-makers in an ex-post explanation. This is necessary to know when one has recourse to the possibility of objecting to the decision. For that democratic right, in my view, transparency of the algorithms themselves is not required.

[45]

As stated by Arthur Conan Doyle: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”

  1. 1 Bernard Chazelle, The Algorithm: Idiom of Modern Science, [https://www.cs.princeton.edu/~chazelle/pubs/algorithm.html].
  2. 2 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) Consideration 4.
  3. 3 Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (1st ed. 2015).
  4. 4 [https://responsibledatainnovation.wordpress.com/2018/05/23/transparency-of-algorithms/].
  5. 5 [https://responsibledatainnovation.wordpress.com/2018/05/23/transparency-of-algorithms/] (Informatiestrategie 2017–2022, pp 17, 23–24; and Informatieplan 2017, pp 15–19).
  6. 6 Institut national de recherche en informatique et en automatique – Inria, at the request of the Panel for the Future of Science and Technology (STOA).
  7. 7 See also Frederik Zuiderveen Borgesius, Discrimination, Artificial Intelligence, and Algorithmic Decisionmaking, Directorate General of Democracy, Council of Europe, at 9 (2018).
  8. 8 European Parliament, Understanding Algorithmic Decision-making: Opportunities and Challenges, p17 [http://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf].
  9. 9 See David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn about Machine Learning, 51 U.C. DAVIS L. REV. 653, at 671 (2017).
  10. 10 17/EN WP260 rev.01 Article 29 Working Party, Guidelines on transparency under Regulation 2016/679.
  11. 11 Idem, p. 8; Article 5 of Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts. Recital 42 states that a declaration of consent pre-formulated by a data controller should be provided in an intelligible and easily accessible form, using clear and plain language, and that it should not contain unfair terms.
  12. 12 Idem, [http://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf].
  13. 13 Sandra G. Mayson, Bias In, Bias Out, 128 YAL. L. JOUR. 2218, 2224 (2019).
  14. 14 https://ec.europa.eu/digital-single-market/en/algorithmic-awareness-building (study not yet available 1-1-2020).
  15. 15 L. Lessig: The Laws of Cyberspace, https://cyber.harvard.edu/works/lessig/ laws_cyberspace.pdf.
  16. 16 See generally Judea Pearl & Dana Mackenzie, The Book of Why: The New Science of Cause and Effect (1st ed. 2018); Michael Blastland, Just because?, BBC News.
  17. 17 Sandra Wachter, Brent Mittelstadt & Chris Russell, Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR, Harvard Journal of Law & Technology, Volume 31, Number 2, Spring 2018.
  18. 18 Wachter et al. proposed a model of “explanations through counterfactuals”, which would allow the affected parties to improve their submissions next time round.
  19. 19 Conference on Multidisciplinary Perspectives on Algorithms, Regulation, Governance, Markets, Kyushu University, 21–23 November [https://www.kyushu-university-law.com/].
  20. 20 Based on the explanation of the minister of Legal protection in: Tweede Kamer der Staten-Generaal 2018–2019, 26643 nr. 570 [https://zoek.officielebekendmakingen.nl/kst-26643-570.html#ID-857880-d36e228].
  21. 21 [https://ec.europa.eu/research/ege/pdf/ege_ai_statement_2018.pdf].
  22. 22 [https://rm.coe.int/guidelines-on-artificial-intelligence-and-data-protection/168091f9d8].
  23. 23 [https://ec.europa.eu/digital-single-market/en/algorithmic-awareness-building].
  24. 24 See http://bit.ly/2S090Yo and http://deeplink.rechtspraak.nl/uitspraak?id=ECLI:NL:RBDHA:2020:865.
  25. 25 See State v. Loomis. (2017, March 10). Retrieved November 1, 2019, from https://harvardlawreview.org/2017/03/state-v-loomis/.
  26. 26 Washington, A. L. (2018). How to argue with an algorithm: Lessons from the compas- propublica debate. Colorado Technology Law Journal, 17(1), 139-141.
  27. 27 Study for the European Parliament, Understanding algorithmic decision-making: Opportunities and challenges, 2019 [http://www.europarl.europa.eu/RegData/etudes/STUD/2019/624261/EPRS_STU(2019)624261_EN.pdf].
  28. 28 Guidelines on transparency under Regulation 2016/679, 17/EN WP260 rev.01.
  29. 29 Guidelines, p.4.
  30. 30 Recital 39 GDPR.