1. Introduction
[1]
Schweighofer and I have been friends for many years despite the difference in approach we have taken as to how to best use technology in legal information handling. Such friendships do not usually survive in the academic world where there is a tendency for social relationships to align closely with philosophical perspectives – the attacking of the other’s core professional beliefs hardly encourages trust and camaraderie. But that has never been the case with us despite the gap in approaches, and perhaps this gives me an opportunity to approach his work and see what elements are common to us both and whether there is a possibility of a sociological approach taking something from the formalising perspective and moulding a new approach where we could each ignore aspects which are less attractive to the other.
[2]
Certainly, there are parts of each of our approaches which would be difficult to meld together: the use of logical formalism would always be a problem to the sociologist who sees meaning as lying in social life rather than in formal symbols, and the out-and-out scepticism of the sociologist («the technology will never replace the person!») would hardly be attractive to an approach which is generally optimistic that the technology of formalisation can resolve the complex problems of representing meaning in legal documents. But sometimes in research we heighten the differences between approaches so that our own contributions will stand out more clearly: and if that is the case, it does our ultimate goal of producing systems little good at all.
[3]
My own approach to logical formalisms is generally available and was developed very early in my academic career.1 My PhD was essentially a critique of the view that logic could be used as a means of representing law. Tammelo’s perspective2 that law is essentially logical never struck me as a feasible one to hold, given the social nature of the legal system ‹in action›. From that early work developed a critical attack on the notion of the expert system in law, as the 1980s saw every research egg being placed in the basket of legal expert systems. But I have really done very little work in this field for the past twenty years or so, so it is interesting to me to think how I – if I were a young researcher rather than one in the final stages of a research career – might develop tools and approaches based on the work which has occurred over the past thirty years or so.
[4]
In this paper, then, I will look to Schweighofer’s approach as one example of a path I took early (my PhD included program writing) before my career became one based more on the critical assessment of others’ tools than on tool building itself. Indeed there is a certain irony here, since my original academic post was in computer science and, while I have done some implementation of systems, I have been one of the lesser implementers of computer-based tools in the field of computers and law during the past decades.
[5]
My primary interest in this paper is the Konterm system, which was a major development project for Schweighofer during the 1990s, utilising his philosophical approach to legal documentation as the basis for a system which was to help automate the indexing and representation of law through document analysis.3 Unlike many of the other experimental systems which were developed during the explosion of interest in artificial intelligence approaches to law, Konterm was based on a much more sophisticated model of knowledge. Rather than assuming that knowledge could simply be extracted unproblematically, Schweighofer argued that knowledge and information had a symbiotic relationship. He gave a useful example:
«[a]n experienced lawyer and a nonprofessional read the newest amendment of the university organisation law. The experienced lawyer will read and interpret the amendment in a different way from the nonprofessional. In the latter case, the lasting result of the interpretation process will be rather non-understanding and useless information, whereas the lawyer will not only include the amendment [in] his knowledge but also reorganize his legal knowledge on university organization law. Knowledge guides attention and allows the interpretation of messages first and foremost.» [p13]
[6]
This suggests that to build systems which support legal information retrieval, there has to be a developing context which is built around the system and which changes and refines itself as more information becomes available. This actually sets a very high standard for any tool which is to be built. It also moves away from those naïve systems which attempted to, for example, take lawyers away for a weekend and build ‹expert systems› on their knowledge with rule-based logic.4 Building a system under Schweighofer’s model is much more difficult and requires the moulding together of a variety of kinds of legal information and legal knowledge.
2. Is there a problem?
[7]
Schweighofer’s goal is to simplify access to law – the problem being one of too much documentary information, organised in ways which are not easy to manage. This is particularly the case, he earlier suggested, with those learning law, who are too often fed basic rules rather than being encouraged to understand topics broadly and then to determine the actual details of law through online research skills. The problem certainly exists – a continual promulgation of law in a European context, where forces of harmonization are bringing different legal systems together into one amorphous mass of text, means there is now a huge corpus of legal materials which defies neat classification or organisation.
[8]
Another important goal for Schweighofer was to reduce the costs of producing the systems which simplified law. Most of the other demonstration systems from the 1980s and 1990s utilised hand coding of rules and other formalisms which meant that they were time consuming to build. Schweighofer argued that automation was an essential element of building systems which could be constructed within reasonable cost limits:
«The intelligence that characterizes knowledge-based systems, while evidently useful, is costly: the acquisition of knowledge is expensive. Such systems will therefore continue to be restricted to small applications, with big databases like legal information systems proving too expensive.» [p156]
[9]
The approach which Schweighofer advocated was one of using the tools themselves to carry out the formalization (just as the real developments in computing came when the computer was itself used to carry out computing tasks, such as converting higher-level languages into object code). In effect he foresaw the direction that information retrieval tools actually took in the 2000s – towards almost total automation of linking and indexing in IR systems.
[10]
Since the mid-1990s there has been a development of various systems which utilise more advanced techniques than the simple Boolean search tactics of databases – for example, Google is well known to use a patented search algorithm5 to help determine which results to present to the user (its ‹PageRank› method of calculating the importance of a document by the number of other documents which link to it). However, in law the commercial or free-to-user databases which have been available did not incorporate such developments until very recently, and this incorporation has been only partial and experimental. Documents in legal databases have certainly been automatically processed and interlinked for many years now, but there has been little attempt to utilise such editing/indexing in search results (though the recent citations index in AustLII/BAILII uses this information to build a well-known tool known as a ‹legal citator›6). Such an approach may well be counter-productive, always pointing the user towards superior court decisions rather than towards what may be more interesting new developments appearing in lower court decisions. Indeed there appears to be scepticism amongst the developers of some legal databases towards methodologies such as ‹conceptual information retrieval›, and a continued reliance upon traditional Boolean techniques which locate documents only through word-based indexes. There is also a suspicion that those systems which now describe themselves as ‹semantic search engines› are doing no such thing and are simply over-hyping a traditional approach with marketing-speak.7
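To make the link-counting idea concrete, the following is a minimal sketch of the kind of iterative, link-based ranking which the ‹PageRank› method describes. It is an illustration of the general technique only – not Google’s patented implementation – and the small citation graph used in the example is hypothetical.

```python
# A minimal sketch of link-based ranking: a document's score depends on the
# scores of the documents which link to it. Illustration only, not Google's
# patented implementation; the citation graph below is hypothetical.
def pagerank(links, damping=0.85, iterations=50):
    """links maps each document id to a list of the documents it links to."""
    docs = set(links) | {d for targets in links.values() for d in targets}
    score = {d: 1.0 / len(docs) for d in docs}
    for _ in range(iterations):
        new_score = {d: (1.0 - damping) / len(docs) for d in docs}
        for source, targets in links.items():
            if not targets:
                continue  # pages with no outgoing links are simply skipped in this sketch
            share = damping * score[source] / len(targets)
            for target in targets:
                new_score[target] += share
        score = new_score
    return score

# Example: decision C is cited by both A and B, so it ends up ranked above them.
citations = {"A": ["C"], "B": ["C"], "C": []}
print(sorted(pagerank(citations).items(), key=lambda kv: -kv[1]))
```

Ranking by incoming citations in this way is what makes a citator-style index useful, but it also illustrates the concern noted above: heavily cited superior court decisions will always float to the top.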
[11]
BAILII in the UK has made no attempt to utilise any semantic approach; AustLII, with its closer connection to the artificial intelligence movement, has suggested in the past that semantic searching would be a goal for it, but has not yet incorporated it.8 Lexis-Nexis has developed a ‹semantic› product, but this deals only with patent materials rather than wider areas of law. Generally, at the present time, the use of non-Boolean techniques in legal information handling remains in its infancy.
[12]
Boolean techniques are certainly powerful, but it is difficult to believe that they are the only possible means of accessing data from databases. Further, to gain the most from them, users have to have an understanding of the indexing process and of the weaknesses which arise from it (e.g. whether indexing is by paragraph or by sentence, how ‹nearness› is measured, which words are omitted, …). In a documentary domain as complex as that of the European Union, while these traditional techniques are highly useful, there is certainly a problem of access which is real and growing.
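By way of illustration, the sketch below shows how a basic inverted index and a Boolean AND query operate. The sample documents and the crude whitespace tokenisation are assumptions made only for the example, but they show the kind of indexing decisions (what counts as a word, what is omitted) which a user has to understand to search effectively.

```python
# A minimal sketch of Boolean retrieval over an inverted index. The documents
# and the whitespace tokenisation are illustrative assumptions only.
from collections import defaultdict

def build_index(documents):
    """Map each token to the set of document ids which contain it."""
    index = defaultdict(set)
    for doc_id, text in documents.items():
        for token in text.lower().split():
            index[token].add(doc_id)
    return index

def boolean_and(index, *terms):
    """Return documents containing every query term (classic Boolean AND)."""
    postings = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*postings) if postings else set()

docs = {
    1: "directive on consumer protection",
    2: "regulation on data protection",
    3: "consumer credit directive",
}
index = build_index(docs)
print(boolean_and(index, "consumer", "directive"))  # -> {1, 3}
```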
[13]
Not only, therefore, is the time right for us to re-consider how we might move forward and develop the basic Boolean type of legal information system into a more complex system which fits the less expert user’s needs, but it is also an appropriate time to build links between the research community and the developers of the new free-to-user systems such as BAILII. BAILII does not have the time or the resources itself to experiment with new technologies – it runs a highly efficient system with the bare minimum of manpower. However, what it conceivably does allow is access to a corpus of materials which allows real-life testing of systems – something which was impossible in the earlier decades of the research field, when almost all legal information was viewed as proprietary by firms such as Westlaw, Lexis-Nexis and the various smaller European online publishers. To date there has been almost no interest from the research community in accessing the very large data sets which these free-to-user systems have built up (there has been some from smaller publishers), but surely that must change as the possibilities for research on real-world problems become more attractive to the research community.
3. Konterm
[14]
Schweighofer’s Konterm project was carried out in the early 1990s and detailed in a text published in the late 1990s. It is thus not a new project, but it certainly has relevance to the work which needs to be done today to improve upon legal information retrieval systems. Of course, there are references in the text to the ‹new› developments of the internet and the publishing of legislation and judgments online, but generally the technology of handling legal materials has not changed too much. As noted above, the major publishers of legal information, as well as the new free-to-user publishers, are currently using the traditional Boolean techniques which were well developed by the 1970s. In fact it seems to me that Konterm remains an important system whose research relevance can guide us as to where we should be developing systems in this new century. In this section I will give a brief outline of what the project was and link it into later research work; the reader is encouraged to find more detail in the project textbook.
[15]
One of the primary aspects of the Konterm project was that the scanning of documentation should be done in as automatic a manner as possible. Schweighofer, for example, in the discussion of how to include context-related rules, suggested that one could assume:
«[t]hat specific information in the text is recognizable through specific patterns. For the purposes of initial analysis, one does not read the document through but scans over it looking out for particular text patterns. The program should undertake this intelligent behaviour when the structures included in the knowledge … are found in the text corpus or new documents …»
[16]
I am not so sure that this can properly be called ‹intelligent› (given that it is simply following algorithms in a rote manner), but there is certainly plenty of later evidence from other projects that such scanning of text can be a productive technique. For example, van Noortwijk et al.9 have utilised this technique in assessing student work. Basically, patterns of text are sought which match essays marked as ‹good›, and the more patterns of relevant ‹good› text are found, the higher the student’s work is marked. While this may seem somewhat harsh on students’ efforts, I find – both after testing the system and considering my own marking techniques, where I know which phrases, cases and suchlike I am looking for in a student’s work (or indeed, which ideas I don’t expect to find and am pleased when I do) – that it does indeed mirror marking in the real world. It most probably also mirrors the reading of case law when it is initially ‹scanned› to see whether it is relevant to the problem in hand. When we use an information retrieval system, because it will always produce too many documents which are low in relevance, we need to scan the texts quickly to remove the non-interesting documents. The goal is to focus quickly on those which may be relevant and then read these in a more detailed manner to find out which are relevant. Scanning the text for phrases is the obvious way to do this in human form, and therefore appears a good and effective strategy in computer form.
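A minimal sketch of this scanning-and-scoring idea is given below. The list of sought-after phrases is a hypothetical example of my own, not taken from Konterm or from the van Noortwijk system; the point is only that counting pattern matches gives a crude but workable relevance (or marking) score.

```python
# A minimal sketch of the scanning idea: rather than 'reading' a document, the
# program looks for particular text patterns and scores the document by how
# many it finds. The phrase list is a hypothetical example only.
import re

def pattern_score(text, phrases):
    """Count how many of the sought-after phrases appear in the text."""
    found = [p for p in phrases if re.search(re.escape(p), text, re.IGNORECASE)]
    return len(found), found

good_phrases = ["duty of care", "Donoghue v Stevenson", "reasonable foreseeability"]
essay = "The claimant must show a duty of care, following Donoghue v Stevenson..."
score, matched = pattern_score(essay, good_phrases)
print(score, matched)  # 2 of the 3 expected phrases found
```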
[17]
Scanning (‹pattern matching›) of judgments is done routinely now. For example, the basis of the LawCite program10 is that it is possible to find references to cases in legal decisions through pattern matching techniques and thus automate the hyperlinking between them, providing a listing of what cites what and when – a valuable tool which would be exorbitantly expensive to provide through editorial intervention. Not all references are in the proper format, so the program has to carry out various methods (‹heuristics›, but perhaps better described as ‹work-arounds›) to determine which are correct and which might require automated editing.
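As an illustration of citation extraction by pattern matching – in the spirit of what is described here, though not LawCite’s actual code – the sketch below uses a single regular expression covering one neutral-citation format. A real citator needs many such patterns, plus the heuristics mentioned above for malformed references.

```python
# A minimal sketch of finding case references by pattern matching. The single
# pattern below covers only one neutral-citation style and is an illustrative
# assumption, not LawCite's implementation.
import re

NEUTRAL_CITATION = re.compile(r"\[(\d{4})\]\s+([A-Z]+[A-Za-z]*(?:\s+[A-Z][a-z]+)?)\s+(\d+)")

def find_citations(judgment_text):
    """Return (year, court, number) tuples for citations matching the pattern."""
    return NEUTRAL_CITATION.findall(judgment_text)

sample = "As held in [2004] UKHL 22 and applied in [2010] EWCA Civ 804 ..."
print(find_citations(sample))  # [('2004', 'UKHL', '22'), ('2010', 'EWCA Civ', '804')]
```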
[18]
Konterm also utilised human-created lists of relevant words and synonyms, which enabled pattern matching to be carried out with limited further human intervention.
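The sketch below shows, in simplified form, how such hand-built synonym lists can widen pattern matching with little further human effort; the synonym groups shown are hypothetical examples, not Konterm’s actual word lists.

```python
# A minimal sketch of synonym-based matching: a small hand-built dictionary
# lets the program recognise a 'concept' under several surface forms. The
# synonym groups below are hypothetical examples.
SYNONYMS = {
    "worker": {"worker", "employee", "member of staff"},
    "dismissal": {"dismissal", "termination", "redundancy"},
}

def matches_concept(text, concept):
    """True if any term in the concept's synonym group appears in the text."""
    text = text.lower()
    return any(term in text for term in SYNONYMS.get(concept, {concept}))

doc = "The employee claimed that the termination was unfair."
print(matches_concept(doc, "worker"), matches_concept(doc, "dismissal"))  # True True
```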
[19]
In simple terms, what Konterm was offering was a means to produce a number of descriptions of documents. Why was this seen to be important? Simply because the number of documents which make up a legal corpus is so large – not just the decisions and legislation but also commentaries, textbooks, and now blogs. Processing this quantity of material by human intervention – through an editor – is highly expensive. By automating as much of this as possible, it would be feasible to produce low-cost and efficient indexing systems for information retrieval which would enable better relevance scores in use of the systems. Relying on human knowledge and expertise to impose a structure upon documents means that many systems which would be useful to the user cannot be economically produced: only automation can produce systems which are cost-effective.
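One standard way of producing such document descriptions automatically – offered here purely as an illustration of the general idea, not as Konterm’s actual method – is to weight terms which are frequent within a document but rare across the corpus, as in the tf-idf style sketch below; the two sample texts are invented.

```python
# A minimal sketch of automatic document 'description': pick out terms that
# are frequent in one document but rare across the corpus (a tf-idf style
# heuristic). Illustration only, not Konterm's actual method.
import math
from collections import Counter

def describe(documents, top_n=3):
    tokenised = {d: text.lower().split() for d, text in documents.items()}
    doc_freq = Counter()
    for tokens in tokenised.values():
        doc_freq.update(set(tokens))
    descriptions = {}
    for d, tokens in tokenised.items():
        tf = Counter(tokens)
        weight = {t: tf[t] * math.log(len(documents) / doc_freq[t]) for t in tf}
        descriptions[d] = [t for t, _ in sorted(weight.items(), key=lambda kv: -kv[1])[:top_n]]
    return descriptions

corpus = {
    "reg1": "data protection data controller obligations",
    "reg2": "consumer protection unfair terms consumer contracts",
}
print(describe(corpus))
```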
4. Schism
[20]
My view is clearly that Konterm was a very interesting project from the 1990s which will surely have something to tell us in the 2010s. That does not mean that I view it as being without weakness. The strength of the project was that it took law and legal knowledge as a complex animal and attempted to find a number of techniques which could be combined to produce a useful tool towards the goal of ‹representing legal knowledge›. It differed from most other systems, which attempted to represent law in very simplistic manners (e.g. either as ‹rules› or ‹cases›). The weakness – in my view – is that it too closely followed the prevalent view of the time from the artificial intelligence community: that law could be formalised and captured by a program, so that the program becomes a representation of law itself.
[21]
In my view this is an error. The weakness of any formalisation is that it captures a fixed image of its target. It never captures the moving target itself. As I wrote in 1990:
«Law is an area of much complexity, where individuals both seek aid and justice, and where individuals receive punishment. It is a political system just as much as, if not more than, a technical system. Yet, computer science has treated it as an area where mere technical tools can be brought to successfully solve problems which are too great for simple lawyers. In part this is because of formalism in law. While the very nature of debate in law is intrinsic to it – two sides argue with all their abilities for guilt/innocence, responsibility/non-responsibility – the legalistic framework of law suggests that there is a coming together to consensus over the nature of each and every rule in law.
It is striking that, in a process so fraught with argument, counter argument and attempts to find the truth between these, the formalist notion has managed to suggest a cohesiveness of purpose which, it seems to me, cannot possible be there. For law is not about consensus, it is about control, money and power. Though the days are now gone when each feudal lord kept a gibbet outside his castle, law is still a means of social control – those who infringe the law are punished. Money is at the centre of most civil disputes and many criminal ones. And power is wielded by the court system itself. It is only the formalist agenda of legalism which claims that it is not about this. Legal formalism, in its desire for idealism, ignores the social reality of the situation.»11
[22]
In my view, law is an agonistic activity: forever demonstrating debate over meaning and interpretation, as each participant attempts to mould arguments to their own needs. Systems which attempt to ‹represent the law› can only represent one window upon the law, and do not necessarily always do that very accurately. Imagine two systems, each of which attempts to represent legal knowledge, and which differ in their interpretations (as we frequently find with human interpretations): how do we resolve the differences? By building a third system? Of course not. Textbooks on law will frequently take differing approaches to legal issues, and the solution has never been to write another textbook to produce an arbitrated version (that only produces another view, not a final one). The solution – as ever – has been to build up one’s best evidence and put the arguments to a judge.
[23]
My difference with Schweighofer and this Konterm approach is that – in my view – systems which try to represent the law and which omit this agonistic element can never be usable as sources of law. If, though, we still think that tools can be built which move along the path Schweighofer attempts to travel, how might this agonistic perspective be included in a Konterm-2010s project?
5. A more Socially-based Wiki approach?
[24]
Schweighofer’s approach was to try to push as much ‹knowledge› into the system as possible. Utilising limited ‹expert› information to build up a dictionary of terms and phrases left most of the building up of information in the system to automated processes. The system thus becomes a relatively autonomous program which, like the Newtonian billiard ball, once pushed keeps moving of its own accord. This, of course, is the aim of the ‹semantic› approach to information retrieval – the system carries out the indexing work rather than the system designers.
[25]
However, over the past decade or so, what we have seen has been the rise of a different model of computer-based ‹knowledge›. Rather than a monolithic system offered to the user, we have seen the integration of program and user, so that the content upon which the program operates arises as much from user input as from any other source. Sometimes the user input is nearly everything: think of Wikipedia. This aims at a goal nearly identical to that of Schweighofer – producing a database which undercuts the expensive model of editorial input by the program producers. It does so in a different manner, of course, by using no-cost editorial input rather than no editorial input. It is, however, a highly successful approach which has been used in any number of ‹Wiki› systems. This approach – as defined by its inventor – is: «The simplest online database that could possibly work.»12 It is the simplest because there is no automated indexing or suchlike; the database organisation comes simply from the interlinking of information as decided by the editors. This approach is the classic Apple ‹HyperCard› one from the 1980s.13
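To show how little machinery the ‹simplest online database› actually needs, the sketch below derives the entire organisation of a small collection of pages from the links their editors have written, using the common [[Page Name]] convention as an assumed markup; the sample pages are invented.

```python
# A minimal sketch of the wiki idea: the only structure in the 'database' is
# the set of links editors themselves write between pages. The [[Page Name]]
# markup convention and the sample pages are assumptions for illustration.
import re

WIKI_LINK = re.compile(r"\[\[([^\]]+)\]\]")

def link_graph(pages):
    """Map each page title to the pages its editors chose to link it to."""
    return {title: WIKI_LINK.findall(text) for title, text in pages.items()}

pages = {
    "Duty of care": "See [[Negligence]] and [[Donoghue v Stevenson]].",
    "Negligence": "A tort requiring a [[Duty of care]] owed to the claimant.",
}
print(link_graph(pages))
```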
[26]
Since Konterm, the most significant development in computing has been the development of communication between users via programs. Systems such as Facebook, with half a billion registered users, indicate that there is an ease – a usability – about such systems and that individuals are prepared to communicate in that manner. The rise of blogs, wikis, etc., also indicates that those with expert skills are happy to communicate and participate in the development of ‹knowledge-based› systems in the widest sense.
[27]
There are two problems to be found when trying to include this approach in the classic information retrieval model. The first is finding out what would motivate users to provide that input. Facebook obviously gives communication with friends, blogs allow self-promotion, but it is not always immediately clear what Wikipedia offers its authors. In research interviews with authors, though, Forte and Bruckman suggest that:
«[t]he incentive system that motivates contributions to Wikipedia resembles the incentive system observed in the scientific community. The notion of credit exists in Wikipedia both as reward and as credibility that empowers individuals in the community. Still, the nature of the encyclopedia-writing enterprise, the technology on which the community is built, and the values of the community change the incentive system in important ways.
Perhaps the most flagrant difference between the scientific community and Wikipedia is the indirect attribution of authorship. On the surface, it appears that contributors receive no credit whatsoever for their contributions. None of the articles are signed; most have been edited numerous times by numerous people and explicit attribution would seem to be impossible. In fact, interviews revealed that Wikipedia authors recognize one another and often claim ownership of articles…»14
[28]
We know that BAILII and AustLII, for example, are used by barristers and solicitors, students, government bodies etc. We know this because the funding model is charitable and the funding is given by all these bodies to fulfil the IR publishing function – they would not fund something they were not using. Might these be the people who would provide the wiki-input to a revised Konterm? Perhaps, but there do appear to be problems when academics occupy the role of ‹author›. Experience with e-learning materials has shown that authors of online lessons are rarely keen to have others amend or change their work. This was found with the successful Iolis software from Warwick,15 where users in law schools wanted to change aspects of the Iolis teaching materials as they would have done with a textbook (taking different chapters from different texts, say, or adding critical commentaries for students to read). This is a social question – one of building groups who will work for the common good – but one which seems to me to be at the heart of technical success. If authors can be brought on board, there are a number of ways in which commentaries, additional information sources and pointers to related work could all be brought into a Konterm III system to increase the knowledge which is held in the system.
[29]
The second problem is one of producing a system which is as simple as possible for the user (‹usability›). The HyperCard approach was a powerful one because it was such a flexible notion and could be used in ways which the original designer never imagined. A very useful visual overview of HyperCard, including an interview with its ‹inventor›, is available via the Internet Archive,16 and demonstrates the enthusiasm (‹fun› appears to have been the keyword) which surrounded the program, in part because it allowed those with various interests to produce ‹stacks› (hypertext-linked documents and images) which gave a dynamic quality to those interests – for example, a teacher of composition could produce a ‹stack› which helped students follow scores and which included historical and musicological information. The significant move away from traditional programming models was a major factor in HyperCard’s success – the user could develop stacks interactively by adding first one button to a card, seeing how it looked or performed, and then adding more. It was, in many ways, a revolution in programming. The inventor of HyperCard was Bill Atkinson, who later reported having failed to foresee the potentially huge inventive step which HyperCard suggested in terms of the program as the centre of a communicating network:
«I have realized over time that I missed the mark with HyperCard,» he said from his studio in Menlo Park, California. «I grew up in a box-centric culture at Apple. If I'd grown up in a network-centric culture, like Sun, HyperCard might have been the first Web browser. My blind spot at Apple prevented me from making HyperCard the first Web browser.»17
[30]
We see that usability and flexibility continue to be major factors in why some software takes off and some doesn’t. Rarely is the designer of successful software really aware of how the product will finally be used – blogging began, for example, as online diary software and took off when the diary (usually private) became a way for users to publish easily.
[31]
How is this to be done? Konterm was relatively complex and fixed in terms of program structure – unsurprising, given that its aim was to produce a system which was relatively free from human input. Is there a way in which simplicity can be brought to bear on the complex problem of making legal knowledge accessible by computer tool? At present we don’t really know, but that should be sufficient to encourage research into how elegance can be brought into the conceptual design of these second-generation IR systems.
6. User-Need
[32]
To my sociologically inclined view, the second aspect of Konterm which I would consider a weakness is that there was an assumption made by Schweighofer about what the user wanted and who the user was. My own feeling is that system designers should be intimate with the needs of their users, rather than guess what these are. Of course this does not fit so easily with my assertion that designers can’t predict how their tools will eventually be used, but that is not the same as saying that a tool should not be designed to fulfil a need. In legal information retrieval there has really been very little work carried out on how users interact with the system, what kinds of errors they find, and what tactics they use to determine which documents are ‹relevant› and which aren’t.
[33]
For example, do users actually need tools which make access to a corpus of cases easier? Perhaps there are other ways in which this need – if it exists – could be met more easily? My feeling – which I share with Schweighofer – is that there is indeed a need for tools which help users use IR systems, but even after many years of interest in such tools I am still not sure exactly what form that need actually takes or how best to provide for it. It would be useful, therefore, before a design for a Konterm-2010s was produced, to carry out some investigative work looking at what problems users actually have in accessing legal documentation. On that basis, a revised system could then be designed to meet found needs rather than perceived needs.
7. Conclusion
[34]
It seems to me that the time is now ripe to return to tool building in the legal domain, but to free it from the deadlock of the 1980s and ‹expert systems›. In the 1980s there was, in my view, far too much interest in artificial intelligence approaches, which meant that alternative techniques were rarely tried.18 The rise of the view of a program as an automated tool to handle expensive processing and pattern matching in text fits well with the view which Schweighofer took of legal information handling. What was not so evident in Konterm was the view of the program as being at the heart of communication between users – the program was seen as self-standing, created by an author and used by the user as a recipient. It seems to me that Schweighofer’s original version of Konterm could be much developed by taking the first view (the program as automated processing tool), adding an essentially social component (the program as communication tool), and developing systems to improve access to law for both the expert and the novice lawyer.
8. References
Forte, Andrea & Bruckman, Amy Why Do People Write for Wikipedia? Incentives to Contribute to Open-Content Publishing GROUP 05 workshop:Sustaining community: The role and design of incentive mechanisms in online systems . Sanibel Island, FL. 2005 Available atwww.andreaforte.net/ForteBruckmanWhyPeopleWrite.pdf
Greenleaf, Graham et al. 1997. JILT 1997 (2) – The AustLII Papers New Directions in Law via the Internet. Avaialble atwww2.warwick.ac.uk/fac/ soc/law/elj/jilt/1997_2/
Leith, Philip Formalism in AI and Computer Science, Ellis Horwood/Simon and Schuster, London and New York (1990).
Leith, Philip Leith P., «The rise and fall of the legal expert system», inEuropean Journal of Law and Technology , Vol 1, Issue 1, 2010.
Lexis-Nexis, 2009 «White Paper: The Evolution of Semantic Search on the Web», Available online atwww.lexisnexis.co.uk/pdf/brochures/totalpatent-whitepaper.pdf
van Noortwijk Kees, Visser Johanna and De Mulder, Richard , Ranking and Classifying Legal Documents using Conceptual Information, JILT 2006 (1).
Paliwala, Abdul ‹E-learning and culture change: the Iolis story›The Law Teacher 39(1) (2005)
Schweighofer, Erich Legal Knowledge Representation: automatic text analysis in public international and European law, Kluwer Law International, The Hague/Lonon/Boston, 1999.
Tammelo, Ilmar Modern Logic in the Service of Law, Springer-Verlag, Wien (1978).
Philip Leith, Professor, School of Law, Queen’s University of Belfast, Belfast BT7 1NN, UK,p.leith@qub.ac.uk
- 1 Leith, 1990.
- 2 Tammelo, 1978.
- 3 Schweighofer, 1999.
- 4 This was actually a common perspective in the 1980s.
- 5 US 6285999 Method for node ranking in a linked database.
- 6 These citators have been around for many years – mostly first produced as printed books, but later incorporated into commercial legal IR systems.
- 7 As Lexis-Nexis accuses others in Lexis-Nexis, 2009.
- 8 Greenleaf et al. 1997.
- 9 van Noortwijk, 2006.
- 10 www.lawcite.org/LawCite/.
- 11 Leith, 1990.
- 12 www.wiki.org/wiki.cgi?WhatIsWiki.
- 13 See www.archive.org/details/hypercard_2. The recording is KCSM TV’s «The Computer Chronicles», 8 January 1990.
- 14 Forte & Bruckman, 2005.
- 15 Paliwala, 2005.
- 16 See www.archive.org/details/hypercard_2. The recording is KCSM TV’s «The Computer Chronicles», 8 January 1990.
- 17 www.wired.com/news/mac/0,2125,54370,00.html.
- 18 Leith, 2010.