Instituut voor Taal- en Kennistechnologie
Institute for Language Technology and Artificial Intelligence

Towards a global expert system in law

E. Verharen

December 1--3, 1993, the Istituto per la documentazione giuridica del Consiglio Nazionale delle Ricerche (IDG), Florence, Italy, organised a conference on the application of AI techniques in law in honour of its 25th anniversary. Although this was a special occasion, it was the fourth time a conference was held on this topic, and by now the conference has a very good name in the field, attracting many experts in AI and Law.

The opening session consisted of welcoming addresses by the presidents of CNR and IDG, on the usefulness of information technology for legislative drafting, legal data processing, and law making and enforcement, followed by the keynote speeches.

Keynote speeches

The first was by Luigi Lombardi Vallauri (University of Florence), 'Towards a global expert system in law', who gave a holistic view of a global expert system for the whole juridical process: from law-making (general rule making), via legal science (general rule elaboration) and advisory and jurisdictional activities (individual rule making), to documentation systems. Each point included references to papers that would be presented in later sessions. He also gave us his view on issues where the help of expert and information systems could be valuable or indispensable, but also on some where it would be impossible. (See John Zeleznikow's report on the conference for remarks on this.)

The second keynote speech, by Antonio Martino (University of Pisa), was called 'Logic, informatics, law' (the topic of the last Florence conference and its proceedings). It presented different inference schemes in law and the difference between syntactic and semantic representation forms, concluding that too little attention is paid to semantics in logic and law. (Also see the papers in this THINK issue for comments on this.)

The main topics of the conference were discussed in five sessions, divided into invited speeches and communications (short presentations). Because the first and fourth sessions were of most interest to the author, and because of LEDA system demo obligations and discussions with other participants, only these two will be discussed in some detail; for the other sessions I restrict myself to the main conclusions.

Informatics and Law-Making

The first session 'Informatics and law-making' had a strong emphasis on the legislative drafting process and how computer science, and AI-based techniques in particular, could support the work of legislative draftsmen.

First, the contribution of G. Ugo Rescigno (Universita di Roma) was read by Prof. Giachi. In this speech, titled 'What the draftsman of statutes expects from informatics', he pointed out that draftsmen expect help with simple actions and with the revealing (and, if possible, correction) of the mistakes they make, but that it will not be possible to replace the draftsman, even with AI techniques, because of the creative work that must be done: draftsmen rewrite the text that holds the norms several times until it is a running text, not just an internally consistent one. A possible application of computer programs lies in the enforcement and application of a set of "recommendations", given in the form of advice to the draftsman. As an example the Italian "Recommendations" were mentioned; but, in contrast to the Dutch "Recommendations for Regulation", which also contain methodological guidelines on the preparation, design and layout of legislation, they contain only technical guidelines on the phrasing and structure of the legislative text. These should be combined with a checklist of important legislative-methodological questions about the formulation of the text. The LEDA system (described elsewhere in this issue) both incorporates the checklist of methodological questions and applies all Dutch "Recommendations". The use of legal archives to build up the set of recommendations is also very important in supporting the draftsman. (The IKBALS system described by John Zeleznikow in this issue uses this approach for case-based reasoning.)

Following this address, Pietro Mercatali of IDG presented the paper 'Informatics as an aid to the legal draftsman'. He continued where Rescigno left off and, after explaining legistics and legimatics (the study of designing legal documents following certain rules), presented the LEXDIT2 system. Like the LEDA system, it is based on hypertext technology and automatically checks a legal document text for conformity with the Italian (structure) recommendations. It has important additional features, such as internal references to other legislation and a juridical lexicon. In contrast to the LEDA system, the advice on content, on the basis of the recommendations, is less dynamic, and the system does not include facilities to edit the text while working with it.

The last full paper in this session was by Layman E. Allen (University of Michigan) on 'Minimising inadvertent ambiguity in the logical structure of legal drafting by means of the prescribed definitions of the A-Hohfeldian structural language'. In his presentation Layman Allen explained the (linguistically based) theory of Hohfeld and his own additions to it. The set of "lowest common denominators of law" -- deontic operators, legal relations, sentence connectives and legal conceptions -- is used to explain imprecision in legal document texts (caused by vagueness and ambiguity). With this logic-based theory the logical contents of sentences with words like 'shall', 'can' or 'may' can be expressed. The examples he showed were taken from the founding statutes of the European Community. He also built an expert system to show and compare different interpretations of sentences.
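The idea of extracting the deontic force of modal words like 'shall' and 'may' can be illustrated with a minimal sketch. The keyword-to-operator mapping below is a drastic simplification invented for illustration; it is not Allen's actual A-Hohfeldian structural language.

```python
# Hypothetical mapping from modal keywords to deontic operators;
# two-word markers must be checked before their single-word prefixes.
DEONTIC_MARKERS = {
    "shall not": "PROHIBITION",
    "shall": "OBLIGATION",
    "must": "OBLIGATION",
    "may": "PERMISSION",
    "can": "ABILITY",
}

def deontic_operators(sentence):
    """Return the deontic operators signalled by modal keywords,
    scanning word by word and preferring two-word markers."""
    words = sentence.lower().replace(".", "").replace(",", "").split()
    found = []
    i = 0
    while i < len(words):
        pair = " ".join(words[i:i + 2])
        if pair in DEONTIC_MARKERS:
            found.append(DEONTIC_MARKERS[pair])
            i += 2
        elif words[i] in DEONTIC_MARKERS:
            found.append(DEONTIC_MARKERS[words[i]])
            i += 1
        else:
            i += 1
    return found

print(deontic_operators("The Council shall act by qualified majority."))
# ['OBLIGATION']
```

A real structural language would of course also capture the legal relations and their parties, not just the operator; this sketch only shows the surface classification step.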

The communications in this session were restricted to the contribution of Ronald Stamper (University of Twente), because Gonzaga (Assessoria Directoria Executiva, Brazil) and Varga were unfortunately unable to come to Florence.

In his contribution 'Generating information systems as a by-product of the legislative process', Ronald Stamper discussed semantic modelling approaches (conceptual and behavioural models) as another way of describing the legal domain, as opposed to the purely logical approaches commonly used and still favoured by logicians. The models not only help in understanding and making regulations but can also be the basis of information systems for the domain under consideration. With this presentation Stamper tried to shake the firm logic tree in juridical science. (Also see the contributions of John Zeleznikow and Trevor Bench-Capon for remarks on this.)

Informatics and Legal Interpretation

The second session was on 'Informatics and legal interpretation'. It included contributions by Antonio Cammelli (IDG), whose presentation dealt with 'Multiple interpretation models in legal knowledge representation', and Enrico Pattaro (CIRFID, Bologna), who held a very philosophical presentation on 'Informatics and legal interpretation: the role of the general theory of law and of the philosophy of law'.

The last invited speaker in this session was L. Thorne McCarty (Rutgers University) who gave us the results of an exercise in computational jurisprudence, titled 'Ownership: a case study in the representation of legal concepts'. He showed that although the intellectual currents in the AI-Law field flow mostly in one direction -- from legal philosophy to AI -- there are also some insights to be gained from a computational analysis of the Ownership relation. He suggested a computational explanation for the emergence of abstract property rights, divorced from concrete material objects. The communications in this session were by Da Costa (Department of Philosophy, Sao Paulo) on 'Logical models for the reconstruction of legal reasoning: fuzzy and paraconsistent logics'; Weusten (University of Utrecht) on 'A methodology for building legal knowledge based systems', concerning the use of decision tables in legal expert systems; and Aqvist (Uppsala University) on 'Prima facie vs. Toti-resultant obligations in deontic tense logic: towards a formal reconstruction of the Richard Price -- W.D. Ross theory', a difficult presentation on deontic tense logic.

The third session, with an emphasis on 'Informatics and legal decision-making', contained invited speeches by Ciampi (IDG), who gave an overview of 'Advances in legal knowledge acquisition and organisation'; Berni-Canani (Supreme Court of Cassation, Roma), who gave us a logic-oriented presentation on 'Towards a predictable law'; and Richard De Mulder (Erasmus University, Rotterdam) on 'Probabilistic approaches to legal concepts'. Not afraid to stir the emotions, he presented a unique use of Bayesian statistics and fractal/chaos theory to determine the relative weight of, and thereby classify, legal concepts, documents and cases. This is very useful in conceptual information retrieval. He concluded with the wise words that 'more information does not always lead to better decisions; a small choice/decision can have big consequences'.
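The flavour of a probabilistic approach to classifying legal documents by concept can be conveyed with a small sketch. This is a generic naive-Bayes classifier over word frequencies, not De Mulder's actual method (which also involved fractal/chaos theory); the training documents and concept labels are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Invented miniature corpus: (document text, legal concept label).
TRAINING = [
    ("negligence duty of care damages", "tort"),
    ("breach of contract damages remedy", "contract"),
    ("offer acceptance consideration contract", "contract"),
    ("duty of care foreseeable harm negligence", "tort"),
]

def train(samples):
    """Count word frequencies per concept, plus document counts."""
    word_counts = defaultdict(Counter)   # concept -> word frequencies
    class_counts = Counter()             # concept -> number of documents
    vocab = set()
    for text, concept in samples:
        class_counts[concept] += 1
        for word in text.split():
            word_counts[concept][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def classify(text, word_counts, class_counts, vocab):
    """Pick the concept maximising log prior + smoothed log likelihood."""
    total_docs = sum(class_counts.values())
    best, best_score = None, float("-inf")
    for concept in class_counts:
        score = math.log(class_counts[concept] / total_docs)
        total_words = sum(word_counts[concept].values())
        for word in text.split():
            # add-one smoothing so unseen words do not zero the score
            count = word_counts[concept][word] + 1
            score += math.log(count / (total_words + len(vocab)))
        if score > best_score:
            best, best_score = concept, score
    return best

word_counts, class_counts, vocab = train(TRAINING)
print(classify("negligence and damages", word_counts, class_counts, vocab))
# tort
```

The appeal for conceptual information retrieval is that documents are ranked by evidence for a concept rather than by exact string matches.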

The communications in this session described several legal knowledge based systems and techniques.

Massie (Paris), in 'Intelligence artificielle et droit: le système-expert comme substitut aux juristes?' ('Artificial intelligence and law: the expert system as a substitute for lawyers?'), described the development of an expert system in Quebec, with an emphasis on its diagnostic and evaluation functions. His conclusion was that we need models of communication and cognition before using the system.

Sopor (University of Southampton) described in his contribution 'Using a knowledge based model to structure the retrieval of legal documents in hypertext' new ways of retrieving legal documents and the development of a framework for knowledge-based hypermedia retrieval. He compared free-text database retrieval with semantic retrieval and hypertext (browsing semantic links), and described the link between knowledge-based systems and hypermedia. In his approach hypermedia contains the text of cases, whereas the knowledge base contains a logical model that structures these cases in terms of important issues. The two systems communicate through messages on these issues. The structure in the knowledge base can be used to link new cases to existing ones; a query to the knowledge base then automatically links the new case to old (similar) cases and their structure.
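The core mechanism -- a knowledge base of issues that automatically links a new case to stored similar cases -- can be sketched as follows. The case names and issues are invented for illustration; the real system communicates with a separate hypermedia component, which is omitted here.

```python
from dataclasses import dataclass, field

@dataclass
class CaseBase:
    """A toy knowledge base mapping case names to the legal issues they raise."""
    issues_by_case: dict = field(default_factory=dict)

    def add_case(self, name, issues):
        """Store a case and return the existing cases it links to,
        ranked by the number of shared issues."""
        links = []
        for other, other_issues in self.issues_by_case.items():
            shared = set(issues) & set(other_issues)
            if shared:
                links.append((other, sorted(shared)))
        links.sort(key=lambda pair: len(pair[1]), reverse=True)
        self.issues_by_case[name] = set(issues)
        return links

base = CaseBase()
base.add_case("Smith v Jones", ["duty of care", "causation"])
base.add_case("Brown v State", ["causation", "damages"])
links = base.add_case("New case", ["causation", "duty of care"])
print(links)
```

In a full system the returned links would become hypertext links into the case texts, so the user lands directly on the passages dealing with the shared issues.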

The last speaker was John Zeleznikow (La Trobe University and visiting fellow of the ITK, whose paper on similar topics appears in this issue). To raise some discussion and controversy, he pointed out, in a strong voice, that logic is not the panacea for legal systems, and may even be inadequate. In his presentation 'Integrating rule-based reasoning, case-based reasoning and information retrieval in law' he described two systems developed at La Trobe, IKBALS and SPLIT-UP, which use other reasoning and representation techniques such as case-based reasoning, neural networks, statistical methods and Petri nets.

Informatics and Legal Documentation

The fourth session covered topics on 'Informatics and legal documentation'.

Taddei Elmi (IDG) treated us, in his invited speech 'Electronic Information in law: Towards a total electronic legal system', to the history of the computer in legal systems, and more specifically in legal information retrieval and documentation systems. He also pointed out their shortcomings and how new developments like neural networks, machine learning (and self-programming) systems and parallel computers could make valuable contributions. He explicitly mentioned the trend that was obvious throughout the conference, namely the shift from systems working with simple association and string matching to (semi-)intelligent systems based on concept recognition. He concluded with some ethical, psychological and sociological remarks on using computers in law (-making, -documentation, -interpretation) and what we sometimes forget: 'people should not have to work for the computers in law, and these computers should not be used to do more work, but better work; i.e. people should be relieved of dumb quantitative work so there is more time for qualitative, creative work'.

Borruso (Supreme Court of Cassation, Roma) illustrated in the invited speech 'The conversion of traditional law into "software-law" in accordance with the legislator's classifications of programmed instructions' the need for fast and accurate retrieval systems, with hard numbers from Italian legislation. There are so many regulations and laws that it would take a person a lifetime to read only 10% of them; we need good information systems to find our way in this jungle. This is what has been done in Italy, but it has overshot the mark: the main goal no longer seems to be to use information technology to make all regulations and laws available, but to prove which laws, regulations, etc. have become obsolete. The legal system itself is partly responsible for getting out of hand, since the goal of a legal system was that every citizen should know the law and comply with its rules. When designing new regulations and laws, one should take into consideration the accessibility and applicability of the new rule; the computer is essential in this. Hence the futuristic idea of software laws: the legislator writes the law as a program, with the norms of the law as algorithms. Then the suspect/defendant/prosecutor/public only has to bring in the data to be tested against the law. The data should be multimedial; the speaker even prefers the language of images over written language for the representation of thought. This also has advantages for explaining the goal and working of the law to the public. Of course many difficulties have to be overcome, not least the question of whether laws can be expressed as a set of algorithms at all (many people argue they cannot), and the ambiguity of words and pictures. Still, interesting ideas like this force us to think about the goal and structure of regulations and laws and how computers can support them and their development.
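What a "software law" might look like at its simplest can be sketched with an executable norm against which a citizen's data is tested. The norm itself (a speed limit with an emergency-vehicle exception) is invented for illustration and deliberately trivial; the difficulties Borruso acknowledges begin exactly where norms stop being this mechanical.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    """The data a citizen would bring in to be tested against the law."""
    speed_kmh: int
    is_emergency: bool = False

SPEED_LIMIT_KMH = 50  # hypothetical built-up-area limit

def violates_speed_norm(vehicle: Vehicle) -> bool:
    """Hypothetical Article X: vehicles shall not exceed 50 km/h in
    built-up areas; emergency vehicles on duty are exempt."""
    if vehicle.is_emergency:
        return False
    return vehicle.speed_kmh > SPEED_LIMIT_KMH

print(violates_speed_norm(Vehicle(speed_kmh=65)))                      # True
print(violates_speed_norm(Vehicle(speed_kmh=65, is_emergency=True)))   # False
```

Even this toy example makes the open questions visible: who decides the encoding of "built-up area" or "on duty", and what happens when the facts do not fit the data model.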

Poullet (CRID, Notre-Dame) also told a story of the public wanting insight into the working of legal expert systems. In his speech, titled 'Marketing of data banks or expert systems', but better covered by his alternative title 'A legal framework for the provision of legal information products', he told the story of a group of Austrians who demanded access to, and inspection of, the documentation of legal expert systems that made decisions for the government. The government refused, saying it would be too expensive. The group went to the European Commission and, on the grounds of Article 10 (the right of information), were granted access not only to the documentation but also to the knowledge base and inference schemes of the expert system. This clearly shows that one should take this right into account when designing legal systems: the system should be easily accessible and applicable, for instance to lawyers. The remainder of the presentation addressed questions like how to set up public services for legal information and how to administer and control these services.

The communications of this session started with a presentation by Trevor Bench-Capon (University of Liverpool, and one of the contributors to this THINK issue) on 'Structure based retrieval of legal documents'. He pointed out that the organisational structure of documents can help in their retrieval, and that hypertext is a suitable technique for this. Here, however, hypertext is not used in the usual way of pleasing the user browsing for information: in legal systems it matters a great deal which route is taken through the material, and that route is usually guided by the structure of the document. The use is no longer recreational but essential, based on a specific task. There are good possibilities for this, since legal documents are highly structured. The same techniques apply not only to the retrieval of documents but also to their design. Bench-Capon takes a distributed AI approach: a multi-agent framework for this.
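The underlying idea -- that access should follow the document's own structure rather than free browsing -- can be sketched briefly. The statute fragment and its article/paragraph hierarchy are invented for illustration; this is not Bench-Capon's multi-agent framework, only the structural-access notion it builds on.

```python
# A statute modelled as a nested structure: articles containing
# numbered paragraphs (the fragment is hypothetical).
STATUTE = {
    "Article 1": {
        "1": "This Act applies to all public bodies.",
        "2": "Private bodies are covered only as provided in Article 2.",
    },
    "Article 2": {
        "1": "A private body is covered if it performs a public task.",
    },
}

def get_provision(statute, article, paragraph):
    """Follow the document structure down to a single provision."""
    return statute[article][paragraph]

def provisions_mentioning(statute, term):
    """Walk the structure and return the paths of provisions using a term,
    so results come back as citable locations, not bare text."""
    return [
        (article, paragraph)
        for article, paragraphs in statute.items()
        for paragraph, text in paragraphs.items()
        if term.lower() in text.lower()
    ]

print(provisions_mentioning(STATUTE, "private"))
```

The point of returning (article, paragraph) paths is that the route through the material stays visible: the user arrives at a provision via the structure a lawyer would cite.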

Rossiter (Newcastle University) presented in 'Assuring quality in legal documentation by the use of formal methods' a formal hypermedia model based on legal document structures, using Entity-Relationship models for the structural objects (document structure, text, pictures) and Petri-net models for the navigation methods (actions). Instead of separate static (ER) and dynamic (PN) representations, he would like an integrated approach such as the object-oriented paradigm offers. Despite the massive attention OO receives, there are still very few formal models of it. Therefore Rossiter is now looking at Category Theory, which can express both actions (dynamics) and consequences (statics) and can be implemented (using functional languages). An attempt is also being made with higher-order logics, which have the advantage of expressing more semantics.

The last short presentation was by Savoy (University of Neuchatel) on 'Searching information in legal hypertext systems'. The main question in legal information systems is "how do I find the information I want?". Several methods of searching were presented, among them "browsing" (often not useful because of the complexity of the information) and "automatic indexing" (based on frequencies). Statistical information retrieval techniques (recall/precision, binary and weighted keyword search) were also presented. The speaker claimed that much better results could be obtained when hypertext links are taken into account: after applying a statistical method, the hypertext links in the retrieved set of documents should be followed to obtain a larger set with better precision and recall (?). The problem is how to avoid unimportant links; this calls for a model in which links can be given an importance (weight).
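The two-stage scheme -- statistical retrieval first, then expansion along sufficiently important hypertext links -- can be sketched as follows. The documents, the link weights, and the simple term-matching score are all invented for illustration; Savoy's actual weighting model is not described in enough detail here to reproduce.

```python
# Hypothetical miniature collection: document id -> text.
DOCS = {
    "d1": "tax law income deduction",
    "d2": "criminal law evidence",
    "d3": "income tax appeal procedure",
    "d4": "capital gains ruling",
}
# Hypothetical hypertext links: (source, target) -> importance weight in [0, 1].
LINKS = {("d1", "d4"): 0.9, ("d1", "d3"): 0.9, ("d2", "d4"): 0.3}

def keyword_score(doc_text, query):
    """Fraction of query terms present in the document (a stand-in for
    a real statistical retrieval model)."""
    terms = query.split()
    return sum(term in doc_text.split() for term in terms) / len(terms)

def retrieve(query, min_score=0.5, min_link_weight=0.5):
    """Stage 1: statistical retrieval; stage 2: follow links whose
    weight clears the threshold, in either direction, to expand the set."""
    hits = {d for d, text in DOCS.items() if keyword_score(text, query) >= min_score}
    expanded = set(hits)
    for (src, dst), weight in LINKS.items():
        if weight >= min_link_weight:
            if src in hits:
                expanded.add(dst)
            if dst in hits:
                expanded.add(src)
    return sorted(expanded)

print(retrieve("income tax"))
```

The weight threshold is exactly the model the speaker calls for: without it, the weak `d2`-`d4` link would drag unrelated documents into every result set touching `d2`.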

Documentation Science and Legal-Historical Research

The last session, on 'Documentation science and legal-historical research', was missed because of the pleasant after-effects of the excellent conference dinner in a Renaissance villa in the hills surrounding Florence. It included presentations by Cascio Pratili (IDG), 'Legal lexicographical research at the Institute and its contribution to the history of environmental legislation'; Fiorelli (University of Florence), 'The files of Italian legal language'; and Padoa Schioppa (University of Milan), 'Computers and research in legal history'; and communications by Palazzolo, 'Problemi di un'edizione informatica delle fonti giuridiche romane' ('Problems of a computerised edition of the Roman legal sources'); and Sassi (ILC-CNR), 'Il contributo dell'Istituto di linguistica computazionale alla creazione di archivi storico-giuridici' ('The contribution of the Institute of Computational Linguistics to the creation of historical-legal archives').

The closing session consisted of communications and the Final Report by Frosini. Contributions were by Alarcin (University of Sevilla), 'Constitutive Derogation', on the role of computers in writing the Spanish constitution; Arena (Buenos Aires), 'L'importanza dell'interfaccia utente nei sistemi di recupero documentale' ('The importance of the user interface in document retrieval systems'), on the use of a graphical user interface to tackle search problems, like the combination of search terms; Comand (Pisa), 'L'apporto dell'informatica alla liquidazione del danno alla persona' ('The contribution of informatics to the assessment of damages for personal injury'); Galindo (University of Zaragoza), 'The construction of a documented solution system for legal cases in the area of environmental law'; Herrestad, 'The right direction'; Lachmayer (Federal Chancellery, Wien), 'Austrian legal information system'; MacCrimmon (University of British Columbia), 'Operational and strategic improvements for a major environmental problem: application of computer models and expert judgement driven by legal requirements'; Nagel (University of Illinois at Urbana-Champaign), 'Computer-aided law decisions'; Parrano (Arezzo), 'Certezza del diritto e sistemi esperti decisionali nell'ordinamento giuridico italiano' ('Legal certainty and decision-making expert systems in the Italian legal order'); and Puga (University of Sao Paulo), 'Non-alethic preference logic'.

The 'Final Report' by Frosini (University la Sapienza, Roma) included many highlights of the last days and references to all the wonderful speeches and presentations given. Of course the role of the IDG was commemorated and with a lot of thank-yous the conference was closed.

Closing Remarks

As mentioned before, and as can be seen from the titles of the papers and presentations, there was an obvious trend towards (semi-)intelligent hypertext and conceptual information retrieval systems. Although many researchers still believe in the power of logics, a number of alternative AI representation and reasoning techniques are used today in the development of legal information and expert systems; this THINK issue holds a number of them. Despite the interest in, and the models and ideas for, knowledge-based and expert systems, it was striking to see that so few real systems have actually been built.

Because this was the fourth time the conference was organised, and it is by now renowned all over the world, many contributions were sent to the organising committee. Instead of a strict selection, with the possible loss of interesting papers, and also because this was an anniversary conference, there was only a mild selection. Apart from the invited speakers, only a limited number of contributors was asked to illustrate their papers with short presentations. To give every participant the chance to get to know the work of the others, a special proceedings was prepared containing all abstracts of the selected papers, including an extra session (on paper only) about 'Law, Informatics and the Public Administration' and the addresses of the authors (still 270 pages). The proceedings containing the full papers will follow (hopefully this year; see the remark in John Zeleznikow's conference report).

All in all, a very interesting and extremely well organised conference (and a personally successful one, because of the interest in our LEDA system), not only because of the presentations but also because of the many, many researchers from countries all over the world, for which we have to thank our host, the IDG.


© Arthur van Horck