Should Law Students Be Using AI — Even on Exams?

An email from a faculty member at the University of Toronto on the topic of AI made the rounds at law schools across Canada recently. It’s about using AI on final exams.

It points out that if a student has an app already open when they launch Examplify – the software most schools use to administer exams – they will have access to that app while writing the exam. This could be a browser with Lexis+AI or the app version of ChatGPT, which would still be online during the exam.

To avoid this, the company that makes Examplify advises running exams in ‘secure mode.’ This cuts off access to both the net and the user’s hard drive.

The author of the email was keen to share the discovery that running exams in this setting may now be necessary across the board, since it took them only a few minutes to find a program online that will run a language model on one's computer without needing to access the net, and that would give "correct answers" to some of the questions they asked.

Two things about this email stood out to me. One is that we’re close to the point where AI is baked into the system and not merely an app we can cordon off. We may already be there with certain functions of Apple Intelligence, a system feature available wherever text can be entered on a Mac.

But the more interesting point is the assumption that hovers, unspoken, over the whole discussion: that using AI on a law school exam is to be avoided at all costs.

I’m a law professor co-teaching a course this term on ‘AI, Law, and Justice,’ and I’ve spoken to many students in our faculty about their use of AI.

I argue that it may be time to question the assumption that using AI is tantamount to cheating.

Whether or not law schools are ready for it, students are rapidly embracing AI. Simply excluding it from exams and assignments is both unrealistic and imprudent.

Where law students are at with AI

I’m noticing a wide range of immersion in AI among students at our law school, but the general trend is unmistakable. Students are making more use of it over time. The more they experiment, share, and learn about the technology, the more apt they are to make it an important part of their approach to law.

From discussions in the AI, Law, and Justice course and from talks on AI we’ve hosted at TRU Law, it’s clear that students are eager to see AI assume a higher profile in their education. While it has a place in our optional Advanced Legal Research course, many think a practical course on AI should be mandatory, with guidance on effective and ethical uses of it in research and practice.

The thornier question of where it belongs

In the AI course, we had a lively and fruitful debate on the tougher question of whether students should be allowed to use AI on exams or assignments. There are two schools of thought. Articulating them helped us see a middle ground and a glimpse of what the future of legal ed with AI might involve.

The skeptical view holds that students should not have access to AI during an exam, because it would defeat the main purpose: testing one’s ability to do legal analysis. With access to AI, it wouldn’t be clear that a student had grasped central concepts in contract, criminal, or administrative law.

The pragmatist view, by contrast, sees the effort to examine without access to AI as futile and unrealistic. When will a lawyer not have the benefit of AI in practice? When would their hands be tied to think or write without it? Lawyers are already using AI frequently at firms where students are summering. Why not examine students on their ability to use it effectively?

A middle ground in sight

Students in the AI course proposed a middle ground: aim to test not a student’s ability to think without AI, but to think effectively with AI.

As one student put it: if an assignment or the answer on an exam were nothing more than the direct output of a chatbot, it wouldn’t pass muster. It wouldn’t address a problem fully and accurately.

In most cases, students will need to know how to prompt effectively and to revise the answer to bring it into line with precisely what was asked for.

The future of grading in the age of AI

On this view, over time, more of the grading in law school will involve assessing a person’s ability to work effectively with AI rather than without it.

But the bar would be raised. The quality of the output would be more polished, or expected to be. It would also be held to a higher standard. Answers would have to be entirely correct, accurate, and complete to get a good grade. But to get the highest grade, one would have to go above and beyond: showing some special insight, creative twist, or innovative policy argument not likely to have emerged from a chatbot.

Just one view of AI in legal ed

There may be other visions of how this plays out, as the conversation continues. But students I’ve worked with are hoping that faculty are thinking carefully about the place of AI in legal ed – that we question more of the unspoken assumptions about it.

Both students and the profession are leading by example. We shouldn’t be far behind.

The post Should Law Students Be Using AI — Even on Exams? appeared first on Slaw.