Generative AI and Self-Represented Litigants in Canada: What We Know and Where to Go

The 2022 release of ChatGPT led to many breathless headlines about how generative AI would be an access to justice (A2J) boon. The mood is now more muted: while AI has driven some change in the legal context over the last three years, much has stayed the same. Given the time that has passed, some reckoning with where we are at seems appropriate. In this column, I focus on one piece of the A2J and AI puzzle: generative AI and self-represented litigants (SRLs) in Canadian courts and tribunals.

How has generative AI impacted the experiences of SRLs in Canada?

To answer this question, I offer the following seven observations.

  1. There is a lot we don’t know. An initial caveat is that we do not actually have a detailed picture of how SRLs are using generative AI. While several Canadian courts require litigants to disclose AI use, not all courts and tribunals do. Moreover, even for courts with disclosure mandates, we are unlikely to have a clear picture of SRL AI use. Unless something “goes wrong” and problematic AI-generated material is identified, there is no reliable way to know whether someone used AI to prepare their submissions but did not disclose that use.
  2. That said, some proportion of SRLs are clearly using generative AI. In an April 2025 Slaw.ca column, Jennifer Leitch, Executive Director of the National Self-Represented Litigants Project, reported:

The very early data that the National Self-Represented Litigants Project (NSRLP) has gathered from self-represented litigants (SRLs) who complete our intake survey suggests that they are generally cautious about the use of AI applications like ChatGPT, given their current reputation for inaccuracy. However, at the same time, there is no denying that the speed and accessibility of AI programs will likely mean that they are accessed and deployed by more and more SRLs.

To echo the above, given that the majority of Canadians have used general-purpose generative AI tools like ChatGPT, it is to be expected that SRLs are also using this technology. Between anecdotal accounts from adjudicators across Canada and reported decisions of “generative AI gone wrong” (discussed below), there is a rising tide of evidence of SRL use of AI.

  3. SRLs have been part of the “fake AI cases” epidemic. In my last Slaw.ca column, I reviewed the growing phenomenon of lawyers filing problematic AI-generated content in court. A parallel phenomenon exists with SRLs. The appendix at the end of this column lists 17 cases in which SRLs have filed hallucinated legal authorities with Canadian courts and tribunals. Despite publicity around some of these instances, and warnings issued by some courts and tribunals to exercise care when relying on generative AI tools, we are seeing a rapid acceleration of decisions reporting problematic AI-generated content filed by SRLs. Of the 17 cases below, 11 (65%) are from the last three months alone. And, of course, located cases represent only the tip of the iceberg. Not every case where an adjudicator spots a non-existent case filed by an SRL is reported in a published written judgment. Also, not all problematic AI-generated content will necessarily be caught, particularly if the adjudicator does not rely on the legal authority or if the legal authority is not “straight-up made-up” but rather has other, more subtle problems, such as a real case being cited for a wrong or partially inaccurate proposition.
  4. SRLs are also using generative AI to present or analyze evidence. Much of the focus on generative AI in courts and tribunals has been on the fake case phenomenon. It should also be noted, however, that some SRLs have (unsuccessfully) used generative AI in another way: to present or analyze evidence. Some examples:
    • In Yang v. Gibbs (dba D & G Cedar Fencing), one of the parties in a dispute before the British Columbia Civil Resolution Tribunal (BC CRT) used ChatGPT to try to prove that two emails in evidence came from the same device. Drawing on previous judicial decisions critiquing litigant reliance on ChatGPT, the adjudicator found that the information provided by ChatGPT in respect of the origins of the emails was “unreliable at best” and gave no weight to this information.
    • In Westcore Industries Ltd. v. Le, also before the BC CRT, a party unsuccessfully tried to submit opinion evidence in relation to testing water lines and installing laundry boxes. The adjudicator noted, among other things, that tribunal rules require expert evidence to come from a person and “ChatGPT does not meet the requirements.”
    • In LaPointe v. Chief Animal Welfare Inspector, the adjudicator found that a self-represented appellant before the Ontario Animal Care Review Board had “produced false or misleading information” through the use of “artificial intelligence tools” which included “veterinary opinions that are likely fabricated” in addition to problematic case citations.
    • In Ng v. ICBC, an adjudicator rejected the attempt by an applicant before the BC CRT to rely on a medical definition provided by ChatGPT, commenting “generative artificial intelligence, such as ChatGPT, is not so intrinsically reliable that I am prepared to accept it as evidence.”
    • In Maxwell v. WestJet Airlines Ltd., the adjudicator in a BC CRT matter gave no evidentiary weight to a party’s submission of a ChatGPT response to a question the party posed about whether he would have made a connecting flight in time.
  5. No flood of deepfake evidence (yet?) but some adjudicative engagement with deepfake claims already. Generative AI tools that can create realistic but synthetic (i.e. fake) audio, image and video representations of people and events are rapidly improving in technical ability and accessibility. And we do not currently have a consistently reliable way of detecting whether an audio or visual representation is, in fact, “deep-faked.” These dynamics create a perfect storm for bad actors to intentionally submit deepfake evidence in legal proceedings in an attempt to improperly sway cases in their favour. There are growing concerns that a flood of deepfakes could enter courts and tribunals and wreak havoc on fact-finding processes. So far, we have not seen any such flood. What we have seen in Canada to date are instances of SRLs unsuccessfully claiming or otherwise expressing concern that certain evidence is deep-faked (see, e.g., v. Cheng and Paynter v. Deputy Head (Canada Border Services Agency)). Stated otherwise, while it is certainly comforting that we aren’t seeing a flood of fake evidence, this does not mean that our courts and tribunals are free from having to engage with questions relating to deepfake technology. These questions are already here.
  6. Also, no apparent flood of new filings. A concern voiced in the early days of ChatGPT was that “robot lawyers were about to flood the courts.” The thought was that many people with legal claims cannot access legal help and, if generative AI could now provide this help, then courts and tribunals would see a massive and debilitating uptick in claim volume. So far, I have not heard of courts or tribunals being overwhelmed with new claims. (That said, there could certainly be more modest generative AI-driven movement in some courts and tribunals that isn’t getting widely reported.) This does not mean, of course, that generative AI use by SRLs is not imposing any new resource burdens. For example, the fact that courts and tribunals now need to respond to problematic AI-generated content takes time and energy away from other things. Additionally, my understanding is that some tribunals, with relatively more relaxed page limit requirements and evidence rules, are seeing some SRL submissions increase significantly in length due to what is perceived to be AI assistance, and that this is putting new pressure on adjudicators’ time. As generative AI becomes more ubiquitous and, over time, makes submissions more voluminous in both number and length, the resource demands on courts and tribunals may, indeed, become more profound.
  7. Lots of money and development in the commercial legal AI space, much less (but some!) dedicated activity in the A2J and SRL space. Over the last three years, tremendous resources and time have been devoted to building generative AI tools specifically for the legal context. A June 2025 inventory created by Legaltech Hub counts 638 legal tech tools on the market that incorporate generative AI. This is an impressive array of offerings. But, for the most part, these tools are meant to serve lawyers directly and often target well-resourced lawyers who represent the wealthiest of clients. It is not, overall, an A2J-focused market. That said, there has been some development of specialized generative AI-driven tools that are relevant to SRLs. Here are some Canadian examples:
    • To help people more effectively obtain legal information, People’s Law School – a non-profit legal education and information organization – has released a generative AI chatbot called Beagle+.
    • Community Legal Education Ontario (CLEO) is currently engaged in a multi-year project exploring, among other things, how generative AI might be built into systems that can help people without lawyers more easily fill in court documents.
    • CanLII’s use of generative AI to create case summaries is presented as not only saving time for lawyers but also as helping to simplify complex legal decisions for the public.

In addition to these examples, many A2J organizations in Canada are exploring how generative AI might be used to streamline non-legal tasks – like administrative and communications work – in order to free up resources for more front-line work.

The developments in the A2J and AI space are definitely small in comparison to what is happening in the private, large law firm space, but there are some good things happening.

Where do we go from here?

I’ll wrap up with some thoughts about moving forward. Here, I’ll limit myself to three points:

  1. More education. The above observations make clear that, while generative AI has not radically reshaped the justice system, it is being used and otherwise referenced by SRLs. The increasing reports of generative AI misuse by SRLs underscore the need to get the word out to the public about the limitations of this sort of technology. Doing this effectively requires efforts on several fronts. A few ideas:

a. Courts and tribunals should provide prominent and specific warnings to those filing materials about the need to be careful with generative AI, including warnings about relying on it for legal authorities and clarification of how it can (and cannot) be used for evidentiary purposes. Perhaps, in addition to standalone practice directions or notices, warnings could also be placed at the point where people file materials (whether electronically or in person).

b. Adjudicators should, where possible, publish written decisions when they encounter problematic AI-generated material filed by SRLs. Although this brings more attention to the person who made the mistake, decisions can be written thoughtfully in a way that helps perform an important educative function in addition to outlining the context-specific consequences in the matter at hand. The more the problem is aired, the better placed others will be to avoid making the same mistakes.

c. Adjudicator education is critical to all of this. Generative AI is coming to courts and tribunals from a variety of directions, whether in how submissions are prepared, how evidence is presented, or even in the subject matter of cases themselves. It is essential that the person receiving the information has an appropriate understanding of the technology. Given how fast the technology and our understanding of it are evolving, adjudicator education needs to be continuing and regularly updated.

d. Mainstream media can also play a role. AI has captured the public’s attention. Stories that offer a balanced picture of the benefits and limitations of the technology can help steer SRLs away from running into trouble with their court submissions.

  2. More study. The above discussion highlighted that, as with so many things in the justice system, we don’t have a clear picture of the experience of SRLs with generative AI. To develop appropriate responses, more data would be helpful. For example, and to tie this to the previous point, if we had a clearer picture of which SRLs are using generative AI tools, which tools they are using and for what purposes, it would help inform the development of targeted education efforts for the public. It could also help focus adjudicative attention and responses.
  3. More development in the A2J AI space. Generative AI is not a panacea for all that ails our justice system. The A2J crisis is complex and structurally deep. Technological responses alone will not do the trick. Moreover, there is a lot we can do with technology and the justice system that does not involve generative AI. Simpler forms of automation, strategically deployed, could go a long way. This sort of reality check, however, does not mean that generative AI is useless in helping more people access justice. As noted above, there are already several projects in Canada that have shown promising pathways to leverage this powerful technology to help people. Developing more such projects, in effective and ethical ways, should be a priority. Given the economics, this is likely going to require significant public investment. While some tools may be self-sustaining or perhaps even make a profit, many A2J interventions are hard to sustain commercially.

Wrapping up!

Those are my thoughts about where we are at with generative AI and SRLs in Canadian courts and tribunals, and where we should go from here. I leave it to readers now to fill in the picture. What did I miss? Where do you think we should go?

___________

Appendix of cases where Canadian courts and tribunals have commented on problematic descriptions of legal authorities arising from generative AI use or suspected generative AI use.

* Many thanks to my excellent research assistant, uOttawa law student Wade Radmore, for his work in helping to locate these cases. Please note that this list does not include cases where the adjudicator has more generally commented on generative AI use, where lawyers (as opposed to SRLs) have gotten into trouble for misusing AI, or where the use relates exclusively to presenting or analyzing evidence. Search results are as of August 15, 2025 *

    1. Choi v. Lloyd’s Register Canada Limited, 2024 CIRB 1146 https://canlii.ca/t/k9v4z
    2. Duarte v. City of Richmond, 2024 BCHRT 347 https://canlii.ca/t/k8rk6
    3. Geismayr v. The Owners, Strata Plan KAS 1970, 2025 BCCRT 217 https://canlii.ca/t/k9h7l
    4. Q. v. B.T., 2025 BCCRT 398 https://canlii.ca/t/kbbg1
    5. SQBox Solutions Ltd. v. Oak, 2025 BCCRT 408 https://canlii.ca/t/kbbg4
    6. Simpson v. Hung Long Enterprises Inc., 2025 BCCRT 525 https://canlii.ca/t/kbrh9
    7. Zahariev v. Zaharieva, 2025 BCSC 1057 https://canlii.ca/t/kcjvx
    8. Attorney General v. $32,000 in Canadian Currency, 2025 ONSC 3414 https://canlii.ca/t/kctk1
    9. R.V. v. N.L.V., 2025 BCSC 1137 https://canlii.ca/t/kcsnc
    10. LaPointe v. Chief Animal Welfare Inspector, 2025 ONACRB 159 https://canlii.ca/t/kcz4p
    11. AQ v. BW, 2025 BCCRT 907 https://canlii.ca/t/kd08x
    12. NCR v. KKB, 2025 ABKB 417 https://canlii.ca/t/kd696
    13. Blaser v. Campbell, 2025 BCCRT 962 https://canlii.ca/t/kd7kk
    14. Moradi v. British Columbia (Human Rights Tribunal), 2025 BCSC 1377 https://canlii.ca/t/kdbjv
    15. Hakemi v. ICBC, 2025 BCCRT 1035 https://canlii.ca/t/kdfjr
    16. Halton (Regional Municipality) v. Rewa et al., 2025 ONSC 4503 https://canlii.ca/t/kdn3w
    17. Maxwell v. WestJet Airlines Ltd., 2025 BCCRT 1146 https://canlii.ca/t/kdv4m
