Jusletter IT

About «Good Law»

  • Author: Daniela Tiscornia
  • Category: Articles
  • Region: Italy
  • Field of law: Legal Informatics, Artificial Intelligence & Law
  • Citation: Daniela Tiscornia, About «Good Law», in: Jusletter IT 11 September 2014
The goal of improving the quality of normative knowledge is not new in the development of legal informatics, but the concept has changed its meaning since the '80s, influenced by changes in perspectives and expectations towards ICT potentialities, and conditioned by the evolution of social reality, mainly by citizens' increasingly pervasive interest in participating in and controlling policy making. The methodological change has shifted the focus from formal representation models of norms to the reconstruction of the complex structure of the law.

Table of contents

  • 1. Introduction
  • 2. The past: Logic, Informatics & Law, a fruitful encounter
  • 3. The present. From formal modelling to conceptual integration
  • 3.1. The semantic layer
  • 4. Complexity as a value
  • 5. Conclusions
  • 6. References

1.

Introduction ^

[1]
I am in debt to John Sheridan of the UK National Archives for the expression «good law» that gives the title to this article. John recently used this expression to start a Twitter conversation1 with Richard Heaton of the UK Cabinet Office and with Robert Richards, leader of the Community Legal Informatics blog (I am a member of it myself), a discussion in which other members of the community also took part. The concept of «good law» is presented as a new idea, still under development, that intends to address, quoting Robert Richards, at least three aspects of legal knowledge:
  • «The quality of substantive legislation, in terms of appropriateness, effectiveness, etc.;
  • Intellectual accessibility or readability, making the law more comprehensible to the public, through means such as public legal education and the promotion of plain language principles;
  • Technological accessibility: in terms of retrievability, usability, etc. improving technological accessibility and usability of electronic legal information.»
[2]
The three aspects are strongly inter-dependent, since technologies, in making legal sources more accessible, could also make them more understandable, thereby facilitating the awareness of citizens and enabling closer attention to and control over regulative decision-making.
[3]
The concept of «good law», even if under different names («plain legislation»2 or «better regulation»3), is not new. The novelty lies in proposing an integrated vision of the three aspects (substantive quality, comprehensibility and technological accessibility), in which each step ahead made from one perspective produces positive effects on the others. Legal informatics has dedicated a great deal of effort, since its very beginnings, to the field of legislative drafting. Making legislative language «plain» and legislation formally consistent was considered one of the relevant outcomes that the conjunction of ICT and law would be able to achieve.
[4]
As I have been working in legal informatics for many years, the concept of «good law» (namely, its evolution across more than 40 years of legal informatics) seems a good topic for my contribution to a «festschrift» in honour of Friedrich Lachmayer, who devoted much of his scientific activity to investigating how formal logic can provide a clarified view of legal knowledge. This paper argues that, although the goal of improving the quality of normative knowledge dates from the '80s, the concept has changed its meaning, influenced not only by changes in perspectives and expectations towards ICT potentialities, but also by the evolution of social reality, and especially by citizens' more pervasive interest in participating in and controlling policy making.

2.

The past: Logic, Informatics & Law, a fruitful encounter ^

[5]
The two issues of the journal «Informatica e Diritto» devoted to «Logica, Informatica e Diritto», edited in 1978 and 1979 by ITTIG4, as well as the three conferences organized on the same topic in the following years5, marked the birth of a new discipline, Artificial Intelligence & Law, as well as the beginning of the idea of a new legislative science grounded on logic and on ICT. The objective was to ask for contributions from the community of legal theorists in order to build «a computerised model of a system of legal norms»6 and, on its basis, to simulate through algorithms the reasoning processes used by lawyers in applying the law. The system generated by the software would be able to infer all the possible combinations of consequences that can be deduced from a finite number of normative premises (the universe of cases7), widening the mechanisms of legal syllogism. The idea was to obtain a complete, rational and consistent reconstruction of a subsystem of norms for the purpose of identifying antinomies, redundancies, lacunae and logical inconsistencies. The result could be used for eliminating «pollution» in real legal systems and for giving support to the application of the law.
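To make the «universe of cases» concrete: with n binary conditions there are 2^n elementary cases, and a normative (sub)system is complete if every case receives a solution and consistent if no case receives contradictory solutions. The following Python sketch enumerates such a universe and flags gaps and antinomies; the conditions echo Alchourrón and Bulygin's well-known good-faith example, but the rules and labels are invented here for illustration.

    # A minimal sketch of the «universe of cases»: with n binary
    # conditions there are 2^n elementary cases. Conditions and rules
    # are illustrative, not taken from any real statute.
    from itertools import product

    CONDITIONS = ["good_faith", "onerous_title", "registered"]

    # Each rule maps a partial case description to a normative solution.
    RULES = [
        ({"good_faith": True, "onerous_title": True}, "protected"),
        ({"good_faith": False}, "not_protected"),
    ]

    def solutions(case):
        """Return every solution whose conditions match the case."""
        return {sol for conds, sol in RULES
                if all(case[c] == v for c, v in conds.items())}

    # Enumerate the universe of cases and check the system.
    for values in product([True, False], repeat=len(CONDITIONS)):
        case = dict(zip(CONDITIONS, values))
        sols = solutions(case)
        if not sols:
            print("lacuna (gap):", case)    # incompleteness
        elif len(sols) > 1:
            print("antinomy:", case, sols)  # inconsistency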
[6]
The basic assumption for building computational models is that there is a logic suitable for reproducing the law and lawyers' reasoning. In this sense, the birth of Artificial Intelligence & Law revitalised the theoretical debate on the relationship between logic and law, marked, above all in the '60s and '70s, by heated polemics between opposing camps that ranged from the formalism of legal neopositivism to the American and Scandinavian schools of jusnaturalism and realism.
[7]
One of the first attempts to apply formal methods to law was Layman Allen's (1957) normalized version of legislative texts, which used classical propositional logic to reformulate the syntactic connectives of statutory texts as logical connectives, thus allowing the disambiguation of the logical structure of normative content.
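As a schematic illustration of Allen-style normalization (the provision below is invented for the purpose), a statutory sentence such as «a person who is resident and is either employed or self-employed shall register» can be rewritten so that its logical structure becomes explicit:

    % Allen-style normalization of an invented provision
    % P1: x is resident; P2: x is employed; P3: x is self-employed;
    % Q: x shall register
    $(P_1 \land (P_2 \lor P_3)) \rightarrow Q$

Once the connectives are explicit, ambiguities of scope (does «either … or» bind before «and»?) must be resolved one way or the other, which is precisely the disambiguation the method aims at.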
[8]
With the advent of logic programming, first-order logic was introduced to represent «legislation as a logic program»8, allowing a law to be represented as a logical theory. Many pieces of legislation have, in fact, a typically definitional structure, or, in any case, can be reformulated in similar models: the norms for the acquisition of citizenship, for example, can be seen as a set of conditions that define the status of the citizen, and can therefore be logically structured as an axiomatic theory. «There is an analogy here with axiomatic systems in mathematics. The formalization of legislation results in an axiomatic theory that represents legislation; the derivation of programs from specifications correspond to the proof of theorems from axioms. Although we rightly demand that theorems be rigorously derived from axioms and similarly that programs be rigorously derived from specifications, there is no correspondingly rigorous way to justify the axioms themselves»9. It is interesting to note how the formalisation method follows a path opposite to that of the mathematical sciences: whilst in the construction of a logical/mathematical theory the primitives and the syntax are defined first, in norm representation the theory is built from the top, breaking conditions down into sub-conditions and deciding at what level of detail a condition can be broken down no further, and is therefore treated as a primitive. The typical case is represented by open-textured legal concepts (for example, good faith or reasonable compensation) that can be defined only through interpretative exemplifications.
[9]
A legislative text translated into a logical sequence of deductive rules, i.e. a logical theory in which the consequent of one rule provides the main premise of another, is well suited to being processed by Prolog, the computational language of logic programming. The same logic thus provides both the knowledge representation language (axioms and facts) and the inference rules to process the formal representation.
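The following Python sketch mimics this rule-chaining on the citizenship example; the conditions and rules are invented here and far simpler than the actual British Nationality Act formalisation by Sergot et al., but they show how the consequent of one rule feeds the premise of another.

    # A minimal sketch of «legislation as a logic program», with
    # invented citizenship conditions. Forward chaining applies rules
    # until no new fact is derivable, as a Prolog theory would.
    RULES = [
        (["born_in_territory", "parent_is_citizen"], "citizen_by_birth"),
        (["resident_5_years", "passed_language_test"], "naturalisable"),
        (["citizen_by_birth"], "citizen"),
        (["naturalisable", "oath_taken"], "citizen"),
    ]

    def derive(facts):
        """Derive all consequences of the given facts under RULES."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in RULES:
                if conclusion not in facts and all(p in facts for p in premises):
                    facts.add(conclusion)
                    changed = True
        return facts

    print("citizen" in derive({"resident_5_years",
                               "passed_language_test",
                               "oath_taken"}))  # True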
[10]

The theoretical debate was, in those years, centred around the computational tractability of classical logic and its limits in expressing the characteristics of legal knowledge, first of all the deontic aspects of normative inferences. Alternative models were proposed, formalising normative systems by syntactic theories10 or denying the necessity of logical tools altogether11. In the simplest approaches, legal activities are represented as mapping mechanisms comparing the real world with the ideal regulated world, in order to verify where the two diverge. Infringement of the legal order is nothing other than an unsuccessful matching between the ideal and the real worlds. «Norms address specific situations in the world by means of a reference to patterns of behaviour. Behaviour is thus interpreted as situations, i.e., a collections of behaviour descriptions that can be seen as a single entity.»12 This has the advantage of applying very simple problem-solving processes, insomuch as the computational interpretation of legal activities translates into a matching between collections of elements.
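A toy rendering of this matching view, with an invented situation vocabulary, reduces norm application to a comparison between two collections of behaviour descriptions:

    # The ideal (regulated) world and the real world as collections of
    # behaviour descriptions; infringement is an unsuccessful match.
    ideal = {"contract_signed", "price_paid", "goods_delivered"}
    real = {"contract_signed", "goods_delivered"}

    violated = ideal - real           # what the real world fails to realise
    print("infringement:", violated)  # {'price_paid'}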


3.

The present. From formal modelling to conceptual integration ^

[11]
After several years of experimentation in formal modelling of legislation, the relative failure of legal expert systems brings us back to the question of the mechanical, and thus computational, nature of law. «Though not often thought of this way, law is inherently computational. It is a set of algorithms that prescribe how various computations are to be carried out. What is my standard (tax) deduction? Am I eligible for family and medical leave? On what day did I become liable for unemployment taxes? Determinations such as these are like mathematical functions: given various inputs, they produce corresponding outputs. [...] This is not to say that the law is entirely mechanical. It’s not. It’s rife with vagueness, ambiguity, and inconsistency. However, a large part of what makes the law inaccessible is its logical complexity: the way different facts and concepts relate to each other. And this aspect of law can (and has) been represented computationally»13.
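Casellas' examples translate directly into code. The following toy functions (all figures and thresholds are hypothetical, not taken from any real statute) show what «determinations as functions» means: given various inputs, they produce corresponding outputs.

    # Toy determinations as functions; all figures are hypothetical.
    def standard_deduction(filing_status: str) -> int:
        table = {"single": 6200, "married_joint": 12400}
        return table[filing_status]

    def eligible_for_leave(months_employed: int, hours_worked: int) -> bool:
        # a leave-eligibility condition in the style of service/hours tests
        return months_employed >= 12 and hours_worked >= 1250

    print(standard_deduction("single"))  # 6200
    print(eligible_for_leave(14, 1300))  # True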
[12]
This last aspect – the failure to represent the complex interconnections among elements of law – was one of the main limits of AI-based practical applications, and one reason for the failure of the formal modelling approach. Logical representations of norms were focused on limited, isolated fragments, the attention being mainly oriented to the inferential process and the internal logical coherence of sets of rules rather than to their external relations to the legislative system and to the impact of normative changes on it. Even though the study of the dynamics of normative systems and its formal accounts reached a level of excellence in scientific research, the formalisation of large pieces of legislation turned out to be too burdensome and costly for practical applications. In addition, the alleged isomorphism between logical representation and legislative texts encountered a large amount of scepticism14. Knowledge representation still remains a subjective process, the product of the interpretation of legal experts mediated by knowledge engineers; the role of logic is mainly to make this process explicit, detecting syntactic/semantic ambiguities and thus enabling a formal reconstruction of all possible interpretations.
[13]
There is, therefore, a change in perspective and a resulting methodological renewal as far as the idea of «good law» is concerned. No longer models of norms on which to simulate legal inferences in order to check coherence and consistency, but a deep analysis of the structural and conceptual links within and among legal sources; no longer a work of representation, but a work of structural and conceptual reconstruction. The impressive progress in ICT, especially in computational linguistics, will support applications enabling citizens to «know the law», by providing them with a complete, coherent picture of normative domains. The goals of «plain legislation» envisaged by Layman Allen and of «better regulation» promoted by the EU Commission mainly require, in this updated meaning, tools able to manage the systemic complexity of the law.
[14]

We return, with a completely renewed vision, to the themes of information retrieval15, with a clear distinction between services directed towards legal professionals, still bound to documents16, and tools aimed at improving communication between citizens and policy-makers by means of a simplified view of the law. Programmes for eGovernment set their sights on the «openness» of public data. Legislative data, given their social role, were the first to be made available within the context of open government data. The expression is semantically ambiguous: it can be understood as making the process of governing and decision-making transparent, or as making available (part of) the data produced and distributed by government. Many governments adopted this second meaning, assuming that transparency coincides with access to documents and information, and leaving themselves merely the passive role of information producers and distributors. But can we say that legal knowledge coincides with access to primary sources? In other words, can public providers of normative data be considered to have respected the right of citizens to obtain complete knowledge of the norms that regulate them by merely allowing free access to legislation?

[15]
«The better models of e-Gov posit high levels of informational communication between citizen and state. Unfortunately, in one area, communication has traditionally been poor: that is, access to sources of law. There have been a number of reasons for this, but a primary one has been that law was historically mediated for the citizen by the legal profession. This situation is changing with ever increasing numbers of unrepresented litigants being involved at all levels of national court systems in each and every country as well as a generally higher level of intrusion of legislation into everyday home and business life. There have been attempts to improve access through internet based services, but these have improved communication (‹understanding of law›) to only a limited extent.» 17
[16]

The concept of «good law» seems, therefore, to depend on such a process of re-engineering of legal sources. We could say that what is expected is to give solutions to two main open questions. The first is about the structural interconnection of data, the short-term goal that addresses the interoperability of data sources and the elicitation of information: the enormous amount of legal documents – legislation, case law, commentaries, literature and interpretive decisions – is spread out over public and private sites. There is a problem of reliability, of quality, of technical accessibility. Documents and information, in both structured and non-structured form and in different formats, are stored in local and often inaccessible databases. Several programmes promoted by governments for the standardisation of legal documents18 have reached a good level of diffusion, and collaborative initiatives aimed at sharing results and standards are improving technical interoperability; this would enable the technical interconnection of complex legal systems based on a multi-layered structure, framed according to the levels of legal discourse (and legal force), the hierarchical organization of rules (supranational, national, local), and their systematic organization. But the poor level of semantic information attached to documents still prevents a large and consistent conceptual interconnection and sharing of information19. And here we come to the second question, and to the long-term goal, i.e. conceptual integration.

3.1.

The semantic layer ^

[17]
The second question concerns the conceptual complexity of legal knowledge: in addition to the structural complexity, legal knowledge is the product of a multi-step process of legal reasoning: recognition, reconstruction, organisation, literal interpretation, conceptual modelling. Human language is the most practical medium for practising law, and the interpretation of law is still primarily conducted by humans. Norms constitute the interpreted meaning of a set of texts, whose literal meaning is conditioned by the social dimension in which they are placed, whereby norms dynamically define and fix their object in relation to a continually evolving social context.
[18]
As a consequence, legal concepts should be considered as repositories of meaning, whose content is dynamically modified by the influence of external factors. Changes in the meaning of legal concepts occur within a diachronic process, in relation to the cultural, political and social evolution of the environment in which they are created. It is mainly through the work of the judiciary that the meaning of terms such as «public interest» or «due diligence» is dynamically modified and registered. From a strictly semantic point of view, we cannot expect to find any direct «referents» in reality, contrary to what happens for concepts in the natural sciences, but, instead, examples of factual situations denoted by such concepts. If we move from a national (monolingual) dimension to a transnational (multilingual) dimension, further complexity arises: the legal terminologies used in both European and non-European legal systems not only express the legal concepts which operate in the different countries, but also reflect the deep differences existing between the various systems and the varying interpretations given by lawyers in each system. Given the structural domain-specificity of legal language, we cannot talk about «translating the law» to ascertain correspondences between the legal terminology of various languages, since the translational correspondence of two terms satisfies neither the semantic correspondence of the concepts they denote, nor the requirements of the different legal systems.
[19]
All of this considered, how can we make law more understandable? How can we express in a simplified form, suitable for ordinary speech, the conceptual content of legal sources? Research in legal ontologies has shown that top-down, artificial constructions of shared systems of concepts cannot capture the peculiarities of legal texts: law is not only language, it is language-dependent. The new way to handle such a linguistic/conceptual dichotomy is to link the linguistic and the conceptual layers in an explicit way, by means of the models, languages and tools of the Semantic Web. Terms are «concept labels» rather than concepts, so labels can be associated with more than one concept. The lexical layer comprises labels, definitions and contexts of the domain entities that populate the ontology. The lexical layer does not convey any kind of domain information, which is expressed at the ontology layer, but maintains references to contexts. It will also enable links to simplified explanations (e.g., Wikipedia, commentaries and practical cases). Each new concept in the ontology (for instance, a new definition of a given concept) has a one-to-one correspondence with its label and context. This distinction is of crucial importance in order to manage the dynamic changes in the system of concepts, and it will enable re-use, localisation and mapping among existing ontologies, but also comparison among legal systems. Changes in one of the two layers will produce reciprocal impacts, since the introduction of a new legal concept will propagate changes in its lexicalisations, not necessarily extended to all languages and legal systems, but only to those where the new concept is introduced.
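To make the two-layer model concrete, here is a minimal Python sketch, loosely in the spirit of lexicon-ontology models developed for the Semantic Web; all names, entries and URIs are invented for illustration.

    # Lexical layer (labels, languages, contexts) kept distinct from the
    # ontology layer (concepts and domain information). A label can
    # denote more than one concept; a concept keeps its own labels per
    # language and legal system. Entries are invented examples.
    from dataclasses import dataclass, field

    @dataclass
    class Concept:
        uri: str           # ontology-layer identity
        definition: str    # domain information lives here

    @dataclass
    class LexicalEntry:
        label: str                   # the term, a «concept label»
        language: str
        legal_system: str
        context: str                 # source passage the label occurs in
        denotes: list = field(default_factory=list)

    good_faith = Concept("ex:GoodFaith_contract", "standard of honest conduct ...")
    entry = LexicalEntry("buona fede", "it", "IT-civil-law",
                         "contract-execution provision", [good_faith])

    # A new interpretation adds a Concept and links it to the same
    # label, leaving lexicalisations in other languages and legal
    # systems untouched.
    new_sense = Concept("ex:GoodFaith_possession", "belief of not harming ...")
    entry.denotes.append(new_sense)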
[20]
So far, we have, on the one hand, a good framework for conceptual representation and, on the other, a rich set of NLP-based tools for the lexical-syntactic parsing of normative texts and for the detection of semantic patterns, applied to the classification of normative statements (Winkels and Hoekstra 2012, Francesconi 2011), concept extraction (Francesconi, Montemagni, Peters and Tiscornia 2010) and ontology learning (Buitelaar 2006).
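As a deliberately crude illustration of the pattern-detection side (real pipelines refine such surface cues with parsing and machine learning; the patterns below are illustrative only), normative statements can be classified by the linguistic markers they contain:

    # Crude pattern-based classification of normative statements.
    import re

    PATTERNS = {
        "obligation": r"\b(shall|must|is required to)\b",
        "permission": r"\b(may|is entitled to)\b",
        "definition": r"\bmeans\b|\bis defined as\b",
    }

    def classify(sentence: str) -> str:
        for label, pattern in PATTERNS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                return label
        return "other"

    print(classify("The operator shall notify the authority."))        # obligation
    print(classify("«Processing» means any operation on such data."))  # definition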

4.

Complexity as a value ^

[21]
In the traditional view of Artificial Intelligence, understanding was conveyed, beyond language, by a representation of precise and unambiguous knowledge, and the interpretation of meaning was embedded in this model. Today, semantic technology and Natural Language Processing (NLP) have a different role: they support the representation of meanings, but they are not the point of arrival; rather, they are the tool able to make the complexity emerge.
[22]
Today, the representation and analysis of complexity represent one of the major challenges, but also the added value that ICT can give to knowledge about the law. To a new model of legal knowledge representation corresponds a new model of the law: a seamless net of knowledge units that are strongly interlinked in «small worlds» (Casanovas et alii 2010) or, as in Katz, Bommarito et alii (2009), a web of citations and conceptual interconnections.
[23]

It is a matter of fact that legal pluralism has supplanted the traditional neo-positivistic vision of legal systems, where the boundaries between national legal orders and international law are clearly defined and international law is embedded into the respective national systems. «What is globalisation or – to use the less pretentious expression – de-nationalisation about? [....] In norm-application, in turn, the establishment of dispute solving or sanctioning bodies beyond the control of nation states suggests the emergence of transnational law [....] EU law provides another example of a legal system which has detached itself from its international-law foundation. Primary EU norms derive from international treaty law, but secondary norms, such as regulations and directives, cannot be classified in terms of international law. Furthermore, primary norms, as interpreted by the European Court of Justice (ECJ), treat even private persons as legal subjects, which departs from the premises of international law. The municipal legal order has lost its monopoly on determining legal relations involving private individuals. ... ‹Pluralism of legal orders› can be defined as a situation where more than one legal order claims authority within the same geographically delineated social space ...»20

[24]
The new models take into account social and institutional changes which mean that, above all in Europe, the process of democratisation must go beyond the ambit of the state and state laws, and needs to engage with diverse state and non-state actors within plural legal orders and across multiple sites and transnational networks. From the perspective of legal pluralism, what information and communication technologies can do is to consistently integrate the networks of legal information creators and distributors, as well as to connect institutional sources of law with the new forms of self-regulation (Creative Commons licenses are a good example).
[25]
The main phenomenon of this historic moment is the enormous quantity of data available through digital devices and the ability of computers and networks to manage such quantities and to analyse massive data sets. This also makes it possible to interpret outcomes from several perspectives and to share, map and merge information in order to acquire and provide new forms of awareness and comprehension.
[26]
In this context of networked information, new concepts for the legal domain emerge: the redefinition of relevance, originating in the world of information science, which may be extended to a notion of legal relevance by providing new tools for the analysis of the behaviour of the courts, i.e. the identification, through citation links, of the cases relevant for the development, clarification or modification of a legal principle (Malmgren 2011, Winkels and de Ruyter 2012); their conceptual connotation through the visualisation of a principle's scope, variation and compliance across the network of contexts; and the quantitative measurement of compliance with precedent, based on the accurate coding of cases, which enables different types of checking and comparison. Multiple forms of searching, comparison and compliance checking can give new insight into the conceptual characterisation of balancing, which has become an essential methodological criterion for judgements, especially in the field of constitutional rights21.
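A minimal sketch of citation-based relevance ranking in the spirit of Malmgren (2011): PageRank over a toy citation graph, so that cases cited by many well-cited cases score higher. Case identifiers are invented; the networkx library is assumed to be available.

    # Link analysis over a citation graph as a proxy for legal relevance.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("C-2012/44", "C-2001/07"),   # a newer case cites an older one
        ("C-2013/02", "C-2001/07"),
        ("C-2013/02", "C-2005/19"),
        ("C-2014/08", "C-2013/02"),
    ])

    for case, score in sorted(nx.pagerank(G).items(),
                              key=lambda kv: -kv[1]):
        print(f"{case}: {score:.3f}")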
[27]
Managing complexity has two dimensions: on the one hand, it requires technical solutions for the interoperability of data sources, for the elicitation of implicit semantics and for the detection of interconnections and relations; on the other, it requires a strong theoretical background for conceptualizing complex notions such as proportionality, compliance and relevance. With regard to these aspects, I see a fundamental new role for legal theory, through which it can once more acquire a central place in the development of legal informatics.

5.

Conclusions ^

[28]
The scope of this article was to analyse the changing perspectives of legal informatics by looking at how to improve knowledge and comprehension of the law. The objective, which under different names has been a protagonist for forty years, has only partially reached the goals set by Artificial Intelligence and Law in the '80s. Today, a methodological change has shifted the focus from formal representation models of norms to the reconstruction of the complex structure of the law. It is a work in progress, moving towards the goal of «good law», understood as global computational systems able to publish a complete and interconnected network of legal sources as they exist in legal documents, and simultaneously to provide a parallel view of their content as data that a computer can understand and interpret.
[29]
Computational tools, nowadays far more powerful than in the '80s, play a central role, but even today, as at the beginnings of AI & Law, legal theory and logic can and must make a necessary contribution to the conceptual characterisation of new legal phenomena and, in particular, provide strategies for evaluating and measuring the law.

6.

References ^

Agnoloni Tommaso, Sagri Maria Teresa, Tiscornia Daniela, Balancing Rights and Values in the Italian Courts: a Benchmark for a Quantitative Analysis, in: M. Palmirani, U. Pagallo, P. Casanovas and G. Sartor (eds.), AI Approaches to the Complexity of Legal Systems – Models and Ethical Challenges for Legal Systems, Legal Language and Legal Ontologies, Argumentation and Software Agents, Springer LNCS, 2012, pp. 93-105

Buitelaar P., Cimiano P. and Magnini B. (Eds.), Ontology Learning from Text: An Overview, in: Ontology Learning, IOS Press, 2006

Casanovas Pompeu, Pagallo Ugo, Sartor Giovanni and Ajani Gianmaria (Eds.), AI Approaches to the Complexity of Legal Systems, Springer LNCS, 2010

Francesconi Enrico, Montemagni Simonetta, Peters Wim, Tiscornia Daniela, Semantic Processing of Legal Texts, Springer LNAI, 2010

Francesconi E., Structuring Legal Semantics in Legislation, in: A. Geist, C.R. Brunschwig, F. Lachmayer, G. Schefbeck (Eds.), Strukturierung der Juristischen Semantik – Structuring Legal Semantics, Editions Weblaw, Bern 2011, pp. 299-309

Katz, D.M., Zelner, J.L., Bommarito, M.J., Spriggs, J., Fowler, J. (2009). The Development of Community Structure in the Supreme Court’s Network of Citations. Network Analysis in Political Science, Harvard University, Kennedy School of Government. June 15-16, 2009

Malmgren Staffan, Towards a theory of jurisprudential relevance ranking – Using link analysis on EU case law. Master Thesis, Stockholm University, 2011

Winkels, R.G.F. & de Ruyter, J. (2012). Survival of the Fittest: Network Analysis of Dutch Supreme Court Cases. In M. Palmirani et al. (eds.), AICOL Workshops 2011, Springer Lecture Notes in AI, vol. 7639, Berlin, pp. 106-115.

Winkels, R.G.F. & Hoekstra, R. (2012). Automatic Extraction of Legal Concepts and Definitions. In B. Schäfer (ed.), Legal Knowledge and Information Systems. JURIX 2012: The Twenty-Fifth International Conference, Volume 250 of Frontiers in Artificial Intelligence and Applications, IOS Press, Amsterdam, pp. 156-165.



Daniela Tiscornia, Director of Research at the Institute for Theory and Techniques for Legal Information of the Italian National Research Council (ITTIG-CNR), Florence, Italy.

  1. Designated by the hashtag #goodlaw https://twitter.com/search?q=%23goodlaw.
  2. Allen L.E. and Saxon C.S., 1986. Analysis of the Logical Structure of Legal Rules by a Modernized and Formalized Version of Hohfeld’s Fundamental Legal Conceptions, in: Automated Analysis of Legal Texts, Martino, Socci (eds.), Amsterdam: North-Holland.
  3. Mandelkern Report on Better Regulation (2001). The report served as a basis for drawing up the Better Regulation policy in the EU. Commission communication – COM (2012) 746 (12 December 2012).
  4. Informatica e Diritto, Vol. IV, April-May 1978 and Vol. V, January-March 1979, Le Monnier, Firenze.
  5. 1981, 1985 and 1989.
  6. Martino A.A., Ciampi C., Maretti E., Introduction to: Informatica e Diritto, Vol. IV, April-May 1978 and Vol. V, January-March 1979, Le Monnier, Firenze, p. 2.
  7. Alchourrón C. and Bulygin E., 1971. Normative Systems, Vienna: Springer Verlag.
  8. Sergot, M.J., Sadri, F., Kowalski, R., Kriwaczek, F., Hammond, P., Cory, H.: The British Nationality Act as a Logic Program. Communications of the ACM 29(5), 370–386 (1986).
  9. Sergot et alii, ibidem, p. 376.
  10. C.E. Alchourrón and A.A. Martino, A sketch of logic without truth, in: Proceedings of the 2nd International Conference on Artificial Intelligence and Law, pp. 165-179, ACM, New York, NY, USA, 1989.
  11. Joost Breuker, Nienke den Haan, Separating world and regulation knowledge: where is the logic?, in: Proceedings of the 3rd International Conference on Artificial Intelligence and Law, pp. 92-97, ACM, New York, NY, USA, 1991.
  12. Valente A. and Breuker J., A Functional Ontology of Law, in: Preproceedings of the Convegno del Venticinquennale IDG, Firenze, 1993, pp. 3-6.
  13. Casellas N.: On Linked Legal Data: Improving Access to Regulatory Information, poster presented at BOOM (Bits On Our Minds), 4 April 2012, Cornell University, Ithaca, NY, USA.
  14. Coenen F.P. and Bench-Capon T.J.M., Isomorphism and legal knowledge based systems, in: Artificial Intelligence and Law, 1992, Volume 1, Issue 1, pp. 65-86.
  15. Schweighofer E., The Revolution in Legal Information Retrieval or: The Empire Strikes Back, European Journal of Law and Technology, Vol. 1, Issue 1, 1999.
  16. Bommarito M.: «... ‹search› is the only informatics tool that fits into the current legal paradigm, which I call the library model – law is a field of humans interpreting words, words live on documents, and documents live in libraries. Legal training focuses on reading and interpreting words and documents. Success in practice depends on locating, interpreting, and communicating information. Therefore, for a new tool to be accepted by lawyers, it must complement this library model to allow lawyers to locate, interpret, and communicate faster and better.» (Posted on November 13, 2011 by mjbommar.)
  17. Leith P., «Re-engineering Sources of Law for Unaided Litigants», in: European Journal of Law and Technology, Vol. 1, Issue 1, 2010.
  18. Initiatives for the adoption of XML standards for the representation of legislative document structures and metadata have been brought forward at both national and international level in recent years. To cite the most successful: USGovXML (http://www.usgovxml.com/) in the U.S. and the Crown XML Schema in the U.K., which provides the UK legislative databases (legislation.gov.uk) with the richest and most complete datasets made available by governments in open XML. Other initiatives in European countries, like the NIR (NormeInRete) standard in Italy or Metalex in the Netherlands, have also led to further developments: the pan-African standard Akoma Ntoso and the international Metalex/CEN initiative for a global interchange standard of legal sources. See also the XML and Open Standards in Parliament movement (http://www.ictparliament.org/).
  19. «It is a maxim that ignorance of the law is no excuse, but it is profoundly unsatisfactory if the law itself is not practically accessible. To a worryingly large extent, statutory law is not practically accessible today, even to the courts whose constitutional duty it is to interpret and enforce it. There are four principal reasons. … First, the majority of legislation is secondary legislation. … Secondly, the volume of legislation has increased very greatly over the last 40 years … Thirdly, on many subjects the legislation cannot be found in a single place, but in a patchwork of primary and secondary legislation. … Fourthly, there is no comprehensive statute law database with hyper links which would enable an intelligent person, by using a search engine, to find out all the legislation on a particular topic.» Lord Justice Toulson in R v Chambers [2008] EWCA Crim 2467.
  20. Kaarlo Tuori, Ratio and Voluntas: The Tension Between Reason and Will in Law, Ashgate Publishing, UK, 2010, p. 298.
  21. See, among others, Agnoloni et alii 2012.