Written By Stephen Burns, Sebastien Gittens, Kees de Ridder, David Wainer and Michael King
Organizations are increasingly embedding artificial intelligence (AI) in their operations to drive efficiency; however, regulators, government bodies and courts in Canada are paying increasing attention to the risks of hallucinated AI output. To protect their integrity and minimize the risks of relying on AI, organizations should ensure they are compliant with applicable regulatory guidance as it relates to the use of AI.
This article canvasses three main topics:
- Canadian Securities Administrators (CSA) requirements for the use of AI systems in capital markets;
- Trademarks Opposition Board (TMOB) requirements for the use of AI in documents prepared for the purpose of proceedings before the TMOB; and
- court guidance on the use of AI in court submissions.
CSA Requirements for the Use of AI
On December 5, 2024, the CSA published CSA Staff Notice and Consultation 11-348: Applicability of Canadian Securities Laws and the use of Artificial Intelligence Systems in Capital Markets (Notice). This document offers guidance on how securities legislation applies to the use and implementation of AI by market participants.
In issuing the Notice, the CSA highlights several "overarching themes" regarding the use of AI systems in capital markets. For example, the Notice states that "it is important to note that it is the activity being conducted, not the technology itself, that is regulated"—underscoring the technology-neutral nature of Canadian securities laws. The Notice also urges market participants to develop and implement robust governance and risk management practices when deploying AI systems. Importantly, the Notice recognizes that certain AI systems have lower levels of explainability, and encourages market participants to balance the need for explainability against the advanced capabilities that AI systems can offer.
The following will focus on the Notice's requirements relating to non-investment fund reporting issuers that are subject to the disclosure requirements under National Instrument 51-102 Continuous Disclosure Obligations (NI 51-102) (each a Non-IF Issuer and collectively Non-IF Issuers).
Guidance for Non-IF Issuers
As "the cornerstone of investor protection and confidence", Non-IF Issuers are generally required under securities laws to publicly disclose certain information about their business and affairs in a prospectus and on a continuous basis thereafter. When considering how disclosure regarding a Non-IF Issuer's use of AI should be addressed, the Notice states that "a materiality determination should be made by Non-IF Issuers," and any disclosure should:
- be "tailored" (and not boilerplate);
- account for the materiality of the use and risks of the AI systems; and
- facilitate an investor's understanding of the use of AI systems by the Non-IF Issuer, including: (1) how the Non-IF Issuer defines and uses AI; (2) the material risks posed by the use of AI; (3) the impact the use of AI is likely to have on the Non-IF Issuer's business; and (4) the material factors or assumptions used to develop any forward-looking information about the use of AI.
Disclosure of Current Business Use of AI Systems
Non-IF Issuers using or developing AI systems should provide tailored disclosures when such activities are material to their operations. These disclosures should avoid generic statements and offer entity-specific insights to help investors understand the operational, financial and risk implications of AI use. Specific details may include, for example, the nature and impact of AI applications, associated benefits and risks, material contracts and the effect on competitive positioning. Non-IF Issuers should also disclose the source and providers of the data that the AI system uses to perform its functions, and whether the AI system is developed internally or by third parties, with similar expectations extending to prospectus filings.
AI-Related Risk Factors
Non-IF Issuers must disclose material AI-related risks in prospectus and continuous disclosure documents, avoiding boilerplate language in favor of clear, entity-specific explanations. Effective risk disclosure should outline how the board and management assess and manage AI-related risks, providing investors with a clear understanding of their impact. Issuers are encouraged to implement robust governance practices, including accountability, risk management and oversight related to AI use. Examples of AI-related risks to consider include operational risks (such as the impact of disruptions), third-party risks (such as reliance on external providers), ethical risks (social issues arising from the use of AI), regulatory risks (compliance and legal challenges), competitive risks (the adverse impact of rapidly evolving products) and cybersecurity risks.
Promotional Statements About AI-Related Use
Non-IF Issuers must ensure that disclosures about their use or development of AI systems are fair, balanced and not misleading, with a reasonable basis for any claims made. "Overly promotional" disclosures (i.e., disclosing that the Non-IF Issuer uses more AI than it actually does) or vague disclosures lacking sufficient detail may mislead investors and violate applicable securities laws. Non-IF Issuers should provide substantiated, clear definitions of their AI use, addressing both benefits and associated risks to avoid presenting an unbalanced view. Unfavorable news must be disclosed as promptly as favorable news, and Non-IF Issuers must maintain high-quality, consistent disclosure practices across all platforms, including social media, to comply with their reporting obligations.
AI and Forward-Looking Information
Non-IF Issuers should consider whether any statement about their use of AI may constitute forward-looking information (FLI). If so, Non-IF Issuers must not disclose such statements unless they have a reasonable basis for doing so. Disclosure of FLI regarding the prospective use of AI systems must: (1) be clearly identified; (2) include a caution that actual results may vary; (3) disclose the material factors or assumptions used to develop the FLI; and (4) outline risk factors that could cause actual results to differ materially from the FLI.
The Notice provides an example wherein a Non-IF Issuer discloses that it plans to integrate AI systems into its products because it expects such integration to increase revenues by five percent. In that event, the Non-IF Issuer is required to disclose all the material factors and assumptions that were used to develop that estimate and provide any necessary sensitivity analysis.
As AI continues to evolve and reshape industries, its adoption in capital markets presents both opportunities and challenges. The Notice seeks to promote responsible use of AI by providing clarity on existing securities laws and inviting feedback from stakeholders to inform future guidance. Accordingly, market participants should adapt their operations to ensure that their deployment of AI systems aligns with: (1) the principles of fairness, transparency and market integrity; as well as (2) applicable securities and other laws.
TMOB Requirements for the Use of AI
On June 4, 2025, the TMOB published a practice notice to govern the use of generative AI in documents prepared for the purpose of proceedings before the TMOB. This practice notice came in response to observed instances of AI "hallucinations" in documents submitted to the Registrar of Trademarks.
This notice requires parties to a proceeding pursuant to sections 11.13, 38 or 45 of the Trademarks Act to include the following declaration on any document that includes content generated by AI:
"Artificial intelligence (AI) was used to generate content in this document. All content generated by AI, and the authenticity of all authorities cited in this document, has been reviewed and verified by the [include the name of the party to the proceeding], or their trademark agent."
A declaration will not be expected where legal authorities are provided or evidence is generated in response to search queries having only objective criteria. Failure to provide a declaration when required, or providing a false declaration, may constitute unreasonable conduct leading to an award of costs against the non-compliant party.
Court Guidance for the Use of AI in Court Submissions
Several courts in Canada have provided guidance regarding whether submissions must identify the use of AI.
Central to this guidance is the requirement for meaningful human control: counsel must be satisfied as to the authenticity of the authorities cited in a submission. In certain jurisdictions, a declaration regarding the use of AI within a submission may also be required.
Notices and Directions on the Use of AI
On October 6, 2023, the Alberta Court of King's Bench (ABKB) issued a notice regarding the integrity of court submissions when using large language models (LLMs). The notice urges practitioners and litigants to exercise caution when referencing legal authorities or analysis derived from LLMs in their submissions. For all references to case law, statutes or commentary, the notice requires parties to rely exclusively on authoritative sources such as official court websites, commonly referenced commercial publishers or well-established public services such as CanLII. Finally, the notice requires any AI-generated submissions to be verified with meaningful human control.
The Superior Court of Quebec and the Supreme Court of Newfoundland and Labrador also issued notices in 2023, with fundamentally the same guidance as that provided by the ABKB. The Provincial Court of Nova Scotia has issued similar guidance, including an additional requirement for any party wishing to rely on materials that were generated with the use of AI to articulate how the AI was used.
In June 2023, both the Court of King's Bench of Manitoba and the Supreme Court of Yukon issued practice directions regarding the use of AI in court submissions. The Manitoba practice direction requires that when AI has been used in the preparation of materials filed with the court, the materials must indicate how AI was used. The Yukon practice direction requires that if any counsel or party relies on AI (such as ChatGPT or any other AI) for their legal research or submissions in any matter and in any form before the Court, they must advise the Court of the tool used and for what purpose.
On May 7, 2024, the Federal Court of Canada (FC) published a notice requiring parties to proceedings before the FC to inform it, and each other, if documents submitted for the purpose of litigation include content created or generated by AI. This disclosure is made by way of a declaration in the first paragraph of the document, stating that AI was used in preparing it, either in its entirety or only in specifically identified paragraphs.
Case Law Guidance on the Use of AI
The British Columbia Supreme Court (BCSC) has advised counsel to disclose to the court and opposing parties when any materials submitted to the court include content generated by AI tools such as ChatGPT. In Zhang v Chen, 2024 BCSC 285, a lawyer who had included AI hallucinations in their notice of application was held personally responsible for costs, even though they had noticed and removed the fabricated citations.
In contrast, the Ontario Superior Court (ONSC) has reframed the discussion regarding counsel's use of AI as part of a broader concern: a lawyer's duty to certify their satisfaction as to the authenticity of cited authorities. Rule 4.06 of the Ontario Rules of Civil Procedure, as enacted in 2024, requires that a "factum shall include a statement signed by the party's lawyer, or on the lawyer's behalf by someone the lawyer has specifically authorized, certifying that the person signing the statement is satisfied as to the authenticity of every authority cited in the factum." The ONSC has noted that Rule 4.06 was enacted specifically to remind counsel of their obligation to check the cases cited in their legal briefs to ensure they are authentic, in light of the risks and weaknesses of AI, which are not universally understood.
In Ko v Li, 2025 ONSC 2766, a lawyer who had included AI hallucinations in her factum and had failed to include the mandatory Rule 4.06 certification was said to have "sidestepped the process designed specifically to avoid the issue that arose here."
Conclusion
Embedding AI into operations has the potential to drive efficiency; however, organizations should: (1) mitigate the risks of relying on generative AI by ensuring all output is reviewed for accuracy; (2) continually assess whether their use of AI would be considered material; and (3) implement robust governance and risk management practices relating to the use of AI.
If you have any questions about how your organization may use and implement AI, we invite you to contact one of the authors of this article.
Please note that this publication presents an overview of notable legal trends and related updates. It is intended for informational purposes and not as a replacement for detailed legal advice. If you need guidance tailored to your specific circumstances, please contact one of the authors to explore how we can help you navigate your legal needs.
For permission to republish this or any other publication, contact Amrita Kochhar at kochhara@bennettjones.com.