Over the past two years, much of my writing in this space has focused on the accelerating risks associated with artificial intelligence and the uneven state of AI regulation in Canada. I have written about stalled federal legislation, the growing role of privacy regulators, the increased risks of AI use for regulated professionals, and the early signs of AI-related litigation surfacing in Canadian courts. Taken together, these developments point to a growing tension: artificial intelligence is being deployed at speed, while the institutions tasked with managing risk remain fragmented, reactive, and unevenly equipped.
This column steps back from specific cases and statutes to address a broader institutional question. If Canada’s AI governance landscape is increasingly fragmented and reactive, what kinds of structures are capable of supporting sustained, cross-sector engagement on risk and accountability? In particular, what role can independent, non-profit institutions play at a moment when formal regulation is stalled but real-world deployment continues at speed?
It is against this backdrop that I have been working to establish the Canadian Centre for Responsible AI Governance (CCRAIG). The Centre is conceived as a national, independent forum for convening stakeholders across government, industry, academia, and professional communities, with a focus on AI risk, governance, and institutional design.
The Governance Gap
Canada’s current approach to AI governance can best be described as partial and uneven. The collapse of the Artificial Intelligence and Data Act left the country without a dedicated federal framework for managing the deployment of AI systems. In the absence of that framework, responsibility has fallen to a patchwork of existing institutions, including privacy commissioners, professional regulators, courts, and internal corporate governance processes. Each plays an important role, but none was designed to address AI risk as a systemic issue that cuts across sectors.
What is missing is not expertise. Canada has no shortage of researchers, policy thinkers, computer scientists, lawyers, and public servants working on AI-related issues. I have been particularly impressed in the past year by the AI stakeholder discussions organized by the Law Commission of Ontario and by academic colleagues such as Ben Perrin, who have demonstrated how cross-sector engagement can be done well. The challenge ahead lies in extending this work across multiple sectors and at a scale that matches the breadth of contemporary AI deployment.
Much of the current activity occurs in silos, separated by disciplinary boundaries, institutional mandates, or political constraints. As a result, we see recurring patterns. The same governance questions are debated repeatedly in different forums. Lessons learned in one sector are slow to migrate to others. Opportunities for early, preventive engagement are often missed until harms have already occurred.
This gap becomes particularly acute in periods of regulatory uncertainty. When formal rule-making stalls, governance does not disappear. Instead, it shifts into less visible forms. Decisions about AI deployment are made inside organizations. Risk trade-offs are resolved through procurement processes, internal policies, and contractual arrangements. Without shared reference points or common forums for discussion, these decisions tend to reflect local pressures rather than broader public values.
Why Independent Convening Matters
One response to this governance gap is to call for faster legislation. That impulse is understandable, but it is not sufficient. Even well-designed statutes require interpretation, implementation, and ongoing adaptation. They also tend to lag behind technological change. In the meantime, AI systems continue to be deployed in healthcare, education, justice, and the private sector, often with limited external scrutiny.
Independent convening institutions serve a different but complementary function. Their value lies in creating structured spaces where stakeholders can engage with complex governance questions before those questions harden into crises or adversarial disputes. When designed well, these spaces support candour, learning, and iterative problem solving in ways that formal regulatory processes often cannot.
Independence is crucial. AI governance cannot be effective without the participation of those who design, deploy, and manage systems in real-world settings, including industry actors. At the same time, convening bodies that are closely tied to a single institution, funder, or political agenda risk losing credibility with other stakeholders. In a polarized environment, trust becomes a form of governance infrastructure in its own right. It is built through clear institutional boundaries and a willingness to engage with disagreement rather than advocate predetermined outcomes.
The Role of CCRAIG
The Canadian Centre for Responsible AI Governance has been designed with these considerations in mind. I will serve as the Centre’s founding director, with responsibility for its initial convening and research agenda. Its mandate is deliberately narrow: the Centre will not provide legal advice, develop or recommend commercial products, or advocate for specific legislative outcomes. Instead, it focuses on three core activities.
First, convening. The Centre brings together participants from government, industry, academia, civil society, and professional communities to discuss AI governance challenges in a structured and informed setting. The emphasis is on shared understanding rather than consensus, and on facilitating dialogue across perspectives rather than advocating specific outcomes.
Second, applied research. The Centre undertakes and supports research projects that examine how AI governance operates in practice. This includes work on institutional design, risk management frameworks, and the interaction between formal regulation and informal governance mechanisms. The goal is to generate insights that are useful to decision makers across sectors, not only to academic audiences.
Third, public education. While much of the Centre’s work occurs through targeted roundtables and research initiatives, there is also a need for accessible analysis that helps practitioners and policymakers make sense of a rapidly evolving landscape. The Centre’s public education activities are intended to help meet that need.
It is also important to emphasize what the Centre is not. It is not affiliated with or directed by any single university, government department, or corporate sponsor. At the same time, it is designed to collaborate with, and receive support from, partners across academia, government, and industry. It does not exist to validate particular technologies or business models, nor to engage in lobbying or advocacy for specific policy outcomes. Its purpose is more modest and, I would argue, more durable. It aims to strengthen the connective tissue of AI governance in Canada at a time when formal structures are under strain.
Institution Building as Governance Work
There is a tendency to think of governance primarily in terms of rules, enforcement, and compliance. Institutions themselves receive less attention, except when they fail. Yet the history of effective regulation suggests that durable governance depends as much on the quality of institutions as on the content of laws.
In the AI context, this lesson is particularly important. Many of the most significant risks associated with AI systems are multi-dimensional in nature. They emerge, for instance, from interactions between technology, organizational incentives, human behaviour, and legal frameworks. These dynamics have also surfaced in my doctoral work on institutions and regulatory reform in Canadian AI governance, which examines how technical and institutional factors meet in practice. A central insight is that addressing these risks requires ongoing dialogue across domains that do not naturally intersect, yet our existing institutions are rarely designed to hold or sustain these conversations.
From this perspective, building and maintaining spaces for that dialogue is itself a form of governance work. It is slow, often unglamorous, and difficult to measure. However, it is also one of the few ways to ensure that governance keeps pace with innovation rather than perpetually chasing it.
Conclusion
The Canadian Centre for Responsible AI Governance is being established at a moment when AI deployment in Canada is accelerating, formal regulation remains uncertain, and responsibility for managing risk is increasingly dispersed. The work ahead is substantial and cannot be undertaken by any single individual or organization. Meaningful progress on AI governance will depend on the participation of partners from government, industry, professional communities, civil society, and academia, and from every region of the country. The Centre is intended as a platform for that engagement, not as its endpoint.
I welcome conversations with those working on AI governance challenges in their own institutions, and with those who see value in contributing to a national, cross-sector dialogue on risk, accountability, and institutional design. The Centre will only be useful if it reflects the experience, concerns, and insights of those grappling with these issues in practice. As we continue to develop the Centre’s digital infrastructure, I ask for your patience and invite you to reach out directly in the meantime at michael@ccraig.ca.
Note: Generative AI was used in the preparation of this article.
