Teaching Artificial Intelligence to Law Students

 

 

 

Dan Hunter, University of Melbourne, Australia

 

 

 

Introduction

 

The Law School at the University of Melbourne introduced a number of computer-related subjects relatively early by international standards. For several years we have taught subjects at undergraduate and postgraduate level on the law relating to information technology. We also have an undergraduate subject, Law and Artificial Intelligence, which focuses on the application of artificial intelligence techniques to modelling legal reasoning.

 

The subject was introduced in 1991 by Andrzej Kowalski, but unfortunately he was only able to offer it for one year; he described the experience of teaching it in issue 2(2) of the Journal of Law and Information Science. The subject then lapsed for a number of years, and was only resuscitated earlier this year. This report details some of the experiences of teaching a course on artificial intelligence to law students.

 

Overview

 

At the outset it was important to ascertain why a course like this might be offered to final year students. The most important objective of the course was to expose students to a number of theoretical stances on legal reasoning, and to have them recognise the difficulties of legal and linguistic indeterminacy. Melbourne Law School has a policy of requiring students to undertake a number of subjects which focus on legal theory, and Law and Artificial Intelligence is one of these. Unlike many other legal theory subjects, however, this subject allows students to discover for themselves the indeterminacy of legal language, in a practical rather than abstract way. Once they had to begin representing legal concepts in a computer formalism, students found that their assumption that law is certain and complete was mistaken.

 

Unlike the previous offering, the subject this time had no quota, and almost forty students enrolled, which was something of a surprise. The large numbers meant that it was impossible to assume any computer experience on the part of students: we had students who had done undergraduate computer science or mathematics courses, a physicist, several economics graduates, and a number of people with a pathological fear of computers. It was important, then, to provide for all these different backgrounds, which meant balancing the theoretical against the practical, and the humanistic against the technological.

 

The course is run with a combination of theoretical lectures and practical seminars. In the seminars the students are required to build a small legal expert system; this system, together with an extensive report, forms 50% of their assessment. Students worked either alone or in teams of two, three or four - most chose to work in teams. At the outset, students were expected to choose for themselves the area of law (the 'domain') in which the expert system would operate. The expert systems this year were built in domains as diverse as copyright, medical negligence, sexual offences, homicide, and trusts. Guidance was given to students about how to choose suitable domains; the factor stressed most was that the domain had to be manageable within the thirteen weeks of the course. Most students found that the domain they chose at the beginning became too large, and narrowed the focus during the course of the project.

 

The alternative to having the students model a domain of their choosing is to give them a hypothetical 'toy' domain. This approach has its advocates, but I decided to see whether students could manage the domain analysis themselves. There are a number of benefits in having the students choose the domain: it drives home the lesson that the domain must be reasonably small, discrete and quite well-settled, and it shows the students what it is like to be both the computer implementor (the 'knowledge engineer') and the domain expert. There are some difficulties with this approach, which I discuss below.

 

The expert systems were built in production rules (IF <circumstance> THEN <conclusion>). Students used a production rule expert system shell called VP-Expert, chosen largely for its ease of use: it is relatively simple to learn and handles most of the inferencing for the students. This allows the students to concentrate on representing rules and explanations, rather than on programming the interface and inference mechanisms. It is not an ideal shell, however, and we will probably use a different one next year. Its major limitation is that it provides no hypertext explanation facility. Hypertext is an extremely useful way of having the expert system explain its inferencing process, and it gives lawyer-users an intuitive way of referencing information relevant to the expert system consultation. Since the importance of hypertext was stressed in lectures, it was embarrassing to be using a shell which did not support it. Next year we hope to use either the DataLex WorkStation software or SoftLaw's Statute software, both of which allow for extensive hypertext use.
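The flavour of this rule representation can be sketched in a few lines of Python. The rules below model a toy fragment of a hypothetical copyright domain, invented purely for illustration; they do not reproduce VP-Expert's own syntax, which differs considerably.

```python
# A minimal sketch of a production-rule system of the IF <circumstance>
# THEN <conclusion> kind described above. Predicate names are invented
# for illustration; they are not drawn from any student's system.

RULES = [
    # (conditions that must all hold, conclusion to assert)
    ({"work_is_original", "work_is_recorded"}, "copyright_subsists"),
    ({"copyright_subsists", "substantial_part_copied"}, "prima_facie_infringement"),
    ({"prima_facie_infringement", "no_fair_dealing_defence"}, "infringement"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion to the known facts, until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

facts = forward_chain(
    {"work_is_original", "work_is_recorded",
     "substantial_part_copied", "no_fair_dealing_defence"},
    RULES,
)
print("infringement" in facts)  # the rules chain through to a finding of infringement
```

The point of the exercise for the students is not the inference loop - the shell supplies that - but deciding what the conditions and conclusions should be in the first place.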

 

As well as the practical component described above, the course dealt with the theoretical background necessary both to build the expert systems and gain a basic understanding of artificial intelligence techniques in modelling legal reasoning.

 

Theory

 

One of the difficulties of teaching an interdisciplinary course like Law and AI is the lack of material. This problem was solved by my colleague, John Zeleznikow, an artificial intelligence researcher at La Trobe University. He suggested that we co-write a book which would be suitable for both the Melbourne course as well as his artificial intelligence course at La Trobe. We did so, and an early draft of the book, Building Intelligent Legal Information Systems, was used as printed materials in the course. The book is currently being published by Kluwer and will be available in a couple of months, at a reduced price for students.

 

The course followed the basic structure of the book. It began with discussions of the kinds of computer systems lawyers need, rationales for developing legal expert systems, and a general introduction to the various theories of legal reasoning. We then examined the use of logic programming and production rules, network and frame-based representations of legal concepts, case-based reasoning and, finally, neural networks. I sought to balance the explicitly legal theoretical material against the artificial intelligence theoretical and practical material. The exam followed the same pattern, with equal weight given to a practical problem section and a theoretical essay section. This emphasis arose partly from the explicitly interdisciplinary nature of the subject, but more pragmatically it provided for the different strengths of the students. The subject was not about who could program best: it was about a range of legal theoretical perspectives, together with an understanding of the artificial intelligence paradigms which can be used to implement them.

 

Lessons Learned

 

Feedback on the course was good. Most people found that building production rule systems is so easy that even lawyers can do it; the real difficulty lies in resolving open texture in legal predicates, an explicitly legal task. Feedback indicated that the students felt they had gained a better understanding of both legal theory and artificial intelligence practice.
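One can see concretely why open texture is where the hard work lies. In a rule-based shell, a predicate for which no further rule can be written ('substantial part', 'fair dealing') simply bottoms out as a question put to the user, much as VP-Expert prompts for unknown values. The following backward-chaining sketch is hypothetical and its predicate names are invented for illustration.

```python
# Rules map a conclusion to the conditions that establish it. Any
# predicate with no rule of its own is an open-textured leaf: the
# system cannot resolve it by inference and must ask the user.

RULES = {
    "infringement": ["copyright_subsists", "substantial_part_copied"],
    "copyright_subsists": ["work_is_original"],
}

def prove(goal, rules, answers, asked):
    """Backward-chain on a goal; rule-less predicates are put to the
    user (here simulated by the 'answers' dictionary)."""
    if goal in rules:
        return all(prove(g, rules, answers, asked) for g in rules[goal])
    asked.append(goal)  # record the question the user had to answer
    return answers.get(goal, False)

asked = []
result = prove("infringement", RULES,
               {"work_is_original": True, "substantial_part_copied": True},
               asked)
# result is True, and 'asked' lists the two open-textured questions
```

Note that whether a 'substantial part' was copied is exactly the kind of evaluative judgment the formalism pushes back onto the lawyer - which is the lesson the students take from the project.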

 

Somewhat less successful (on a number of levels) was the assessment. First, the subject is not a standard law subject, and so both the students and I had little idea what to expect from each other. This caused some concern on the part of around half the class (I'm not joking), who came to see me after the exam to discuss their results. Part of this was just bad organisation on my part, but part of it lies in the nature of the course. There will always be a tension between the good programmers who are bad lawyers, and the bad programmers who are good lawyers. How one resolves this problem is, at present, beyond me; it will take some time to work out.

 

Secondly, the project can be both overwhelming and boring. At first the students don't know whether they are shot or poisoned. I waited until we had covered the introductory material before scheduling specific classes on how to build the project's expert systems, which meant that the students were halfway into the subject before beginning their projects. This put them under too much pressure. I will change this next year, and provide an introduction to the production rule language in the first two classes. Even so, there is still some difficulty with allowing students to build expert systems in a domain of their own choosing. Once the students learn how to turn legal predicates into production rules, building the expert system becomes somewhat repetitive. I think the way to resolve this is to make the domains smaller, and to make the expert system a smaller part of the assessment. This will provide scope for an exercise in other artificial intelligence approaches, such as rule induction or neural networks.

 

In summary, then, teaching artificial intelligence in law schools is extremely rewarding and not as difficult as it might appear. It does, however, require a recognition of the difficulties that some students face, and of how some fundamental tensions can complicate the teaching of these types of 'innovative' subjects to law students.

 

Perhaps the best feature of teaching such a subject is the enthusiasm of the students. Of the thirty-five students who finished the course, some seven or eight are interested in doing research in this area next year. This augurs well for the future of artificial intelligence systems at Melbourne Law School.

 

 

Published in the Law Technology Journal: Vol 3, No 3