Over the past several months, I have had the opportunity to speak with leaders across a range of sectors about artificial intelligence. These conversations have taken place in boardrooms, universities, professional development seminars, and informal gatherings following presentations. The contexts vary and the industries differ, but a common pattern has begun to emerge.
The organizations I encounter are not dismissive of AI. Quite the opposite. Most are experimenting with generative tools, reviewing internal processes, or considering policy development. Many have established working groups. Some have launched pilot projects. Others are waiting for clearer regulatory direction before moving further. At first glance, the tone is thoughtful and measured.
Beneath that surface, however, a more subtle but significant governance issue is taking shape.
In this column, I want to discuss three of the most common responses I hear from senior managers, lawyers, and executives when discussions turn to responsibility for AI initiatives or risk management. The responses often take a familiar form: “We have a committee.” “IT says it’s fine.” “We trust our people to use it responsibly.” Each of these statements is reasonable in isolation and signals that attention is being paid. Taken together, however, they reveal a more concerning pattern, namely the diffusion of responsibility across structures, departments, and organizational culture.
AI governance is uniquely prone to this problem. Unlike traditional technology deployments, AI systems sit at the intersection of technical infrastructure, professional judgment, regulatory exposure, and institutional strategy. When accountability is distributed thinly across committees, delegated entirely to technical teams, or left to individual discretion, no single actor retains clear ownership of the risk.
This column is not about fault-finding. The individuals involved in these conversations are uniformly thoughtful and well-intentioned, and the issue is structural rather than personal. As AI tools become more deeply embedded in everyday workflows, however, structural ambiguity around responsibility is becoming a material governance risk.
In what follows, I examine three statements I continue to hear “from the road” and what they reveal about the current state of AI oversight. The objective is not to criticize but to clarify. In the context of artificial intelligence, clarity of responsibility may be among the most important governance tasks ahead.
“We Have a Committee.”
In many of the organizations I encounter, the first response to questions about AI oversight is reassuring: “We have a committee.” Often this committee is cross-functional and includes representatives from IT, legal, compliance, operations, and senior management. It meets periodically, monitors developments, and in some cases is tasked with drafting policy.
At first glance, this appears to be an appropriate institutional response. Artificial intelligence is a cross-cutting issue that touches infrastructure, professional standards, privacy law, human resources, procurement, and strategy. A cross-functional body reflects the reality that no single department can address these issues in isolation.
Committees are also not inherently flawed. They are frequently composed of thoughtful and capable professionals who are attempting to approach a complex issue carefully. In many organizations, the formation of a committee signals that leadership recognizes AI as something that warrants structured attention rather than informal experimentation. The challenge lies in the nature of committees themselves. They are designed to deliberate, to gather information, and to provide recommendations. They are not typically designed to assume concentrated risk ownership.
In practice, committee members usually carry full portfolios. AI oversight becomes one item among many. Meetings are periodic and mandates are often exploratory rather than executive. Recommendations may be developed, but ultimate accountability can remain unclear. When responsibility is shared across a group, clarity about who ultimately owns the consequences of a decision can diminish.
There is also a practical reality that should be acknowledged. Artificial intelligence is technically complex and rapidly evolving. Even experienced professionals may not have the time required to develop sustained, specialized literacy in the tools under discussion. Without dedicated authority, expertise, and resourcing, committees can become monitoring bodies rather than governance mechanisms.
At the same time, AI deployment is no longer theoretical. Generative tools are already embedded in everyday workflows, sometimes formally approved and sometimes adopted informally by staff seeking efficiency. When technology moves faster than governance structures, an exploratory committee model may prove insufficient.
Cross-functional dialogue remains essential. However, dialogue alone does not constitute accountability. Effective AI oversight requires clarity about who is responsible for risk assessment, policy approval, escalation decisions, and ongoing monitoring. Absent that clarity, the reassuring statement “we have a committee” may mask a more difficult question about ownership.
“IT Says It’s Fine.”
Another response I frequently hear, particularly in public sector and government contexts, is this: “IT says it’s fine.”
This response is understandable. Information technology departments play an essential role in evaluating software tools. They assess cybersecurity vulnerabilities, data storage architecture, vendor compliance, and integration with existing systems. In many organizations, IT teams are the first line of defense against technical instability and data breaches, and their expertise is indispensable.
The difficulty arises when technical clearance is treated as synonymous with overall approval.
IT departments typically manage technical risk, including whether a system is secure, compatible, and operationally stable. Artificial intelligence, however, introduces a broader range of concerns that extend beyond infrastructure. AI systems can affect professional obligations, regulatory exposure, fiduciary duties, human rights considerations, reputational risk, and the integrity of institutional decision-making. These are governance questions rather than purely technical ones.
In regulated professions such as law or medicine, individual practitioners carry independent duties that no technical clearance can discharge. A tool may be secure from a cybersecurity perspective and yet still generate inaccurate outputs, embed bias, or encourage overreliance in ways that create professional liability. Technical approval does not resolve questions about appropriate use, supervision, documentation, or compliance with professional standards.
This observation is not a criticism of IT teams. It is a clarification of institutional roles. Expecting technical departments to assume responsibility for enterprise-wide ethical and regulatory risk places them in a position that extends beyond their mandate. It may also allow senior leadership to conclude that oversight has been achieved when, in reality, only one dimension of risk has been addressed.
AI governance requires coordination among technical expertise, legal analysis, operational leadership, and strategic oversight. When the phrase “IT says it’s fine” becomes the end of the conversation rather than the beginning of a broader assessment, responsibility is once again dispersed rather than clearly assigned.
“We Trust Our People to Use It Responsibly.”
A third response I often hear is more values-oriented: “We trust our people to use it responsibly.”
This statement reflects confidence in professional judgment and organizational culture. Institutions depend on individuals exercising discretion and acting in good faith, and in many contexts that trust is warranted.
Trust alone, however, does not amount to a governance framework.
Artificial intelligence tools differ from many technologies that preceded them. They do not merely transmit information. They generate it. They summarize, interpret, draft, and recommend. In doing so, they may also fabricate, distort, or oversimplify. Their outputs can appear authoritative even when they are incorrect. This combination of fluency and fallibility creates a distinctive risk profile.
Where organizations rely primarily on individual discretion without articulated policy guidance, training, and oversight, responsibility shifts downward in subtle ways. Professionals are left to determine for themselves when AI use is appropriate, how outputs should be verified, what documentation is required, and how client or stakeholder interests may be affected. Practices can become inconsistent, and risk tolerance may vary across departments or individuals.
If an error occurs, the absence of clear institutional guardrails can produce further ambiguity regarding responsibility. Without defined expectations, it may be difficult to determine whether a failure reflects individual judgment or structural oversight.
Trust remains an essential organizational value. It is strengthened, rather than diminished, by clear parameters, defined accountability, appropriate training, and ongoing monitoring. Without those elements, reliance on individual discretion may again reflect diffusion rather than ownership.
Why AI Is Especially Prone to Diffusion
Taken individually, each of these responses is understandable. Committees promote collaboration. IT departments safeguard infrastructure. Trust reflects institutional confidence. The difficulty emerges when these mechanisms are treated as complete.
Artificial intelligence occupies an unusual position within institutions. It depends on technical infrastructure, engages legal and regulatory exposure, shapes operational workflows, and influences strategic direction. Because it sits at the intersection of so many functions, it can easily fall between them.
Committees discuss it. IT evaluates it. Professionals use it. Legal teams review it when prompted. Risk managers may include it within broader enterprise risk frameworks, and boards may receive periodic updates. Yet in many organizations there is no clearly designated owner of AI risk as such. Responsibility is distributed, but ultimate accountability remains indistinct.
Enterprise risk management frameworks are designed for issues that cut across silos. They require identification of risk owners, articulation of risk appetite, defined escalation pathways, and ongoing monitoring. Artificial intelligence fits squarely within that category. Treating it as a temporary project or purely technical deployment risks underestimating its institutional impact.
Where no one clearly owns AI risk, many may participate in it, yet no single actor remains accountable for its consequences. That dynamic reflects the essence of diffusion of responsibility.
Conclusion
Artificial intelligence is advancing through institutions at a pace that challenges traditional governance structures. Its adoption is rarely reckless. More often, it is incremental and pragmatic. Tools are introduced to increase efficiency. Staff experiment to improve workflows. Committees monitor developments. Technical teams evaluate vendors. Professionals exercise judgment.
When responsibility is dispersed across structures, functions, and culture, however, clarity can erode. Oversight may appear present while ownership remains indistinct.
AI systems influence outputs, shape decisions, and generate content that may carry legal, professional, or reputational consequences. In a regulatory environment that continues to evolve and where enforcement bodies are interpreting existing legal frameworks in new ways, institutions cannot rely on ambiguity as a safeguard.
Governance requires definition. It requires clear assignment of responsibility, defined escalation pathways, and articulated expectations for use. These mechanisms provide the foundation for sustainable innovation.
The statements examined here reflect common and understandable institutional instincts. Collaboration, deference to expertise, and confidence in professional judgment each have value. None, however, replaces the need for clearly defined ownership of AI risk within the organization.
As AI becomes embedded in everyday practice, thoughtful adoption will matter less than clear accountability. Institutions that define ownership early will be better positioned than those that later discover that responsibility was distributed broadly but held nowhere in particular.
Note: Generative AI was used in the preparation of this article.
