The Practice Directions
On June 23, 2023, the Court of King’s Bench of Manitoba (MBKB) issued its Practice Direction on the Use of Artificial Intelligence in Court Submissions. Shortly thereafter, on June 26, 2023, the Supreme Court of Yukon (YKSC) issued its Practice Direction on the Use of Artificial Intelligence Tools. (As of the date of drafting this post, no other court practice directions from Canadian courts are known to the writer.)
Digressing for a moment from the subject of legal citation, one can’t help but observe that the two practice directions, the first of their kind in Canada to require legal practitioners to disclose the use of or reliance upon AI, have been the subject of some legal commentary. This use or reliance applies to “legal research or submissions” for the YKSC and the “preparation of materials filed with the court” for the MBKB. The rationale behind the practice directions is stated by the courts with nearly identical phrasing. Both practice directions note “legitimate concerns about the reliability and accuracy of the information generated from the use of artificial intelligence”.
Both practice directions are short on definitions, a gap that may require further clarification for compliance. “Artificial intelligence” is not defined in either of the courts’ practice directions, although the YKSC Practice Direction provides a non-exhaustive statement of what may qualify as artificial intelligence, this being “Chat GPT or any other artificial intelligence platform”. Even if the practice directions contained clear, concise, and current definitions of AI that may facilitate compliance, factors such as 1) the evolving nature of AI; 2) varying degrees of technological literacy and understanding within the profession; and 3) the possibility of incorrect AI claims surrounding certain products, among others, may contribute to questions regarding what tools and methods would qualify for disclosure.
The practice directions seem to vary in terms of the scope of the subject matter for disclosure. The YKSC Practice Direction indicates that parties or counsel must advise the Court of “the tool used” and “for what purpose”. This suggests the need to identify specific tools in addition to their functionality. The MBKB Practice Direction indicates that materials filed with the court must indicate “how artificial intelligence was used”. This suggests a broader disclosure that may include the identification of specific tools alongside methods of use, possibly including user-generated prompts.
Finally, the practice directions do not seem to clarify how far back in the legal process or workflow disclosure would apply. The practice directions seem to encompass broad and, at times, lengthy activities such as “preparation” (MBKB) and “legal research” (YKSC) supportive of court filings or submissions. These processes may be undertaken by a group of practitioners and staff alongside counsel named on the submission. While excessive disclosure may be better than a lack of disclosure in certain contexts, practitioners may be inclined to disclose only what is required and necessary.
The Canadian Guide to Uniform Legal Citation
In addition to what and when to disclose the use of generative AI tools, another question prompted by the practice directions is how to disclose the use of generative AI tools. This is where legal citation enters.
On May 26, 2023, about a month before the release of the MBKB and the YKSC practice directions, the 10th edition of the Canadian Guide to Uniform Legal Citation (McGill Guide) was published. This edition includes notable developments, including the Statement of Commitment to citing Indigenous sources of knowledge in time for the 11th edition, as well as establishing the CanLII citation as a main, standalone citation when citing jurisprudence without a neutral citation.
As a law librarian, I have always found the Secondary Sources and Other Materials chapter of the McGill Guide an exciting one to review. This may be due, in part, to the inclusion of new sources legal professionals and researchers may turn to for commentary on the law and the evolving ways in which they choose to receive legal information. The chapter provides some insights into legal information sources and how these change over time in practice.
A few years ago, the 9th edition of the McGill Guide featured citation formats for sources such as blogs, podcasts, social media platforms (like Facebook and Twitter), forums (like Reddit), and online video aggregators (like YouTube). The 10th edition now contains explicit mention of platforms such as TikTok and Twitch, an example of how to cite LinkedIn content, and a discussion on citing blockchain technology and non-fungible tokens (NFTs). This chapter also includes a new Legal Research Services sub-chapter, which provides citation formats for commercial databases such as Practical Law and Practical Guidance, which have since made their way into legal research processes for some organizations.
A Legal Citation Format for Generative AI “Sources” and Content
The process of having more contemporary sources and citation formats included in the McGill Guide seems to lead, informally, to the acceptance of those sources as references in legal research, taking into consideration their scope, currency, reliability, and authoritative value. Since its initial publication, Canadian legal researchers have turned to the McGill Guide for what resources to cite and how to cite them. Given this, it follows to wonder: If Canadian courts are moving in a direction toward the disclosure of the use of AI tools in court submissions, are we in need of a legal citation format for AI tools and AI-generated content?
While generative AI is a relatively new phenomenon and citation guides may lack specific guidelines for referencing AI tools and AI-generated content, some citation guides have addressed this in various ways, using ChatGPT as an example.
1. APA Style
The APA Style recommends that content generated by an AI tool be cited as “personal communication” (despite the lack of another person in the communication process), with a reference list entry and the corresponding in-text citation. The APA Style also indicates how, in some instances, full-text responses from a generative AI tool may be included as an appendix of a paper or in online supplemental materials, so that readers have access to the exact text that was generated by the AI tool.
When citing generative AI as a large language model or software, the APA Style recommends both reference and in-text citations. It recommends using the author of the model (or the organization that developed the model) as the author (for example, OpenAI). The year of the version of the model is used in the date field (for example, 2023), and the name of the model is used in the title (for example, ChatGPT).
Example of a reference list entry:
OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat
2. MLA Style
The MLA Style recommends using a template of core elements or standardized criteria issued in other citation formats. It recommends citing a generative AI tool whenever the content is paraphrased, quoted, or incorporated into an author’s work. It also recommends acknowledging the functional use of the tool (for example, editing or translating) and vetting secondary sources cited.
The APA Style and the MLA Style contain key differences. For example, the reference list entry of the MLA Style incorporates the user-given prompt as the title of source. The MLA Style does not recommend treating the AI as the author of the content. Instead, it recommends using the AI tool’s name in the title of container field. (To draw an analogy, when citing a more traditional resource such as a journal article, the article title would be found in the title of source field and the journal name would be found in the title of container field.) The date the content is generated is used in the date field, unlike the APA Style which recommends using the year of the version of the AI model.
Example of a works-cited-list entry:
“Identify and briefly describe the parties and interveners involved in the case CM Callow Inc v Zollinger, 2020 SCC 45” prompt. ChatGPT, 13 Feb. version, OpenAI, 8 Mar. 2023, chat.openai.com/chat.
3. Chicago Manual of Style
The Chicago Manual of Style (“Chicago Style”) recommends citing AI tools and generated content in a numbered footnote or endnote (for the notes-bibliography system) or as a parenthetical text reference (for the author-date system). It does not recommend including the AI tool in a bibliography or reference list. The Chicago Style recommends including the name of the AI model (for example, ChatGPT) as the author of the content and the organization that developed the model (for example, OpenAI) as the publisher of the content. Like the MLA Style, it recommends using the date the content was generated in the date field. When using the notes-bibliography system, user prompts may be entered in the footnotes or endnotes. As in the MLA Style, the user prompt is enclosed in quotation marks, as one would indicate the title of a work belonging to a larger collection.
Example of a footnote or endnote (without prompt):
Text generated by ChatGPT, May 16, 2023, OpenAI, https://chat.openai.com/chat.
Example of a footnote or endnote (with prompt):
ChatGPT, response to “Identify and briefly describe the parties and interveners involved in the case CM Callow Inc v Zollinger, 2020 SCC 45,” May 16, 2023, OpenAI.
Depending on the citation style guide used, the process of citing AI tools and AI-generated content is likely to raise interesting questions surrounding recommendations for (or against) the treatment of the AI tool as the author, the use of the model version or the date the content was generated as the date, the treatment of the organization that developed the model as the publisher, and the incorporation of the user-generated prompts in references or bibliographic notes. These questions go beyond format and may influence how AI tools and AI-generated content are viewed.
For the time being, guidance for citing AI tools and AI-generated content for Canadian legal research seems to remain piecemeal. Practitioners and researchers may need to consult with individual institutions, organizations, or publishers for preferred formats and subject matter for disclosure. Whether any of the above formats would be suitable for compliance with the MBKB or YKSC practice directions, or whether the courts themselves would be inclined to introduce guidance for citing AI tools and AI-generated content (particularly in light of the recent publication of the 10th edition of the McGill Guide and the time before the next edition is published), remains to be seen. Courts themselves are no strangers to issuing citation guidelines. (For examples, see the BC Court of Appeal and Alberta Court of King’s Bench.)
The availability of citation formats for AI tools and AI-generated content in other established style guides, coupled with Canadian courts requiring the disclosure of the use of AI in court filings and submissions, seems to foster a need for further accountability and transparency from users of this new technology in legal contexts. As a librarian, I am pleasantly surprised at how the topic of generative AI has opened new discussions surrounding legal citation and, given this brave new (AI-infused) world, further reinforced the importance of legal citation in legal teaching and practice.