Healthcare AI Has an Accountability Problem

In healthcare, AI is now embedded in everything from clinical decisions to HR and finance. Yet many organizations still lack the delegated risk-management oversight needed to keep AI tools from causing harm. Without structured oversight, AI-related decisions are made with no clear accountability, exposing organizations to the risk of ethical and regulatory violations.

When no one is responsible for the decisions and actions AI takes, blind spots will expand rapidly. The consequences of an AI system making high-stakes decisions without oversight are numerous and far-reaching, especially when people’s lives are on the line.

Today’s AI governance gaps look a lot like earlier inflection points where the technology curve steepened faster than the enterprise’s ability to manage it. We went through this with cloud computing: teams adopted SaaS, IaaS, and “shadow IT” to move faster, while governance lagged on basics like data classification, identity and access management, vendor oversight, logging and monitoring, and shared-responsibility clarity, so accountability got scattered across IT, security, procurement, and the business. We’ve also seen this with the rapid consumerization of IT and mobile/BYOD, where employees brought new devices and apps into regulated environments long before organizations had mature policies for encryption, endpoint controls, app vetting, and e-discovery.

In each case, the adoption was rational and often value-creating, but the absence of clear ownership, standardized controls, and lifecycle oversight created predictable failures. The lesson for AI is straightforward: governance can’t be an afterthought bolted onto innovation; it has to be built like other critical infrastructure, intentionally, with defined decision rights, continuous monitoring, and enforceable guardrails.

The problem with diffused accountability

The rapid deployment of AI has outpaced the development of governance and accountability standards, leading to a “diffused accountability” gap where no single entity takes responsibility when AI fails.

Liability is already an omnipresent issue in healthcare, and AI has only brought new challenges. AI tools have no recognized legal identity, meaning they cannot be sued or insured against, nor can they pay legal compensation to victims. In legal proceedings, fault must be transferred to a human actor or a corporation, not a tool.

Researchers in The Lancet, a leading medical research journal, recently argued that “institutional liability structures must redistribute responsibility from clinicians to the organisations that design and deploy [AI] tools.” It’s clear that such questions around liability will persist well into the future.

The European Union is attempting to address these issues on a regional scale. The bloc has introduced two major legislative instruments: the AI Act, which regulates AI usage by degree of risk and emphasizes the preservation of human oversight; and the proposed AI Liability Directive, which would establish new rules making it easier for people to seek compensation for harms caused by AI.

But regulation alone will not solve the problem. Hospitals operate within a complex web of vendors, clinicians, administrators and IT teams, so when an AI system produces a harmful or biased output, responsibility gets passed like a ball between stakeholders: the vendor may point to improper use, clinicians may say the design is flawed, and leadership could blame regulatory ambiguity.

All this means accountability is diffused, leaving hospitals vulnerable to major legal battles.

Practical steps to close governance gaps

The good news is that even without comprehensive regulations, healthcare organizations can proactively close gaps in AI governance. To start, leaders can turn to the World Health Organization’s report, “Ethics and Governance of Artificial Intelligence for Health,” which seeks to maximize the promise of AI while minimizing risk.

The steps outlined in this report aim to protect autonomy, promote human well-being and public safety, ensure transparency and explainability, and foster responsibility and accountability. To address governance gaps, let’s focus on the latter two points.

Implement a unified approach to AI governance, directed top-down by the board or a designated expert body. Currently, many organizations let individual departments use AI wherever they see fit, leaving leaders unable to explain how and where the organization is using these tools. Visibility is paramount, so maintain an inventory of exactly which tools are in use, where, and for what purpose.
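
As a minimal sketch of what that visibility could look like in practice, the snippet below models a simple AI inventory in Python. The fields, the example tool, and the owner role are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class AITool:
    """One entry in a hospital's AI inventory (illustrative fields)."""
    name: str
    vendor: str
    department: str       # where the tool is used
    purpose: str          # what it is used for
    risk_tier: str        # e.g., "high" for clinical decision support
    owner: str            # person or role accountable for the tool
    last_validated: str   # date of the most recent validation

# Hypothetical example entry; a real inventory would be populated
# through procurement and regular departmental reviews.
registry = [
    AITool(
        name="SepsisRiskModel",
        vendor="ExampleVendor",
        department="ICU",
        purpose="Early sepsis risk scoring",
        risk_tier="high",
        owner="Chief Medical Information Officer",
        last_validated="2025-01-15",
    ),
]

# Leaders can then answer "how and where are we using AI?" directly.
for tool in registry:
    print(f"{tool.name}: {tool.department} / {tool.purpose} (owner: {tool.owner})")
```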

It’s equally important to establish clear lines of accountability across the AI lifecycle. This means making a person or department responsible for everything from procurement and validation to deployment, monitoring, and incident response. Hospitals must require vendors to meet defined transparency and auditability standards, and ensure internal teams are trained to understand both the capabilities and limitations of AI systems.

Finally, governance must be operationalized, not just documented. Embed policies into workflows by integrating AI risk assessments into procurement processes, conducting regular audits of AI performance, and creating mechanisms for frontline staff to report concerns without friction.
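
As one illustration of embedding policy into a workflow, the sketch below gates procurement on a completed risk assessment. It assumes a simple rule (no purchase proceeds until every assessment field is filled in), and the field names are hypothetical.

```python
# Illustrative procurement gate: block an AI purchase unless its risk
# assessment is complete. Field names are hypothetical, not a standard.
REQUIRED_FIELDS = ("intended_use", "risk_tier", "bias_review", "accountable_owner")

def procurement_gate(risk_assessment: dict) -> bool:
    """Return True only if every required risk-assessment field is filled in."""
    missing = [f for f in REQUIRED_FIELDS if not risk_assessment.get(f)]
    if missing:
        print(f"Blocked: risk assessment incomplete, missing {missing}")
        return False
    return True

# Example: an assessment with an empty bias review is rejected.
procurement_gate({
    "intended_use": "triage support",
    "risk_tier": "high",
    "bias_review": "",
    "accountable_owner": "CMIO",
})
```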

In practice, closing the governance gap is less about introducing new principles and more about enforcing discipline: standardize how AI enters the organization, define who owns it at every stage, and ensure its performance is continuously scrutinized. Without that discipline, AI tools will continue to outpace the structures designed to keep them safe.

The hidden risk: data quality

Even when accountability structures are in place, another risk is often underestimated: the integrity of the data feeding AI systems and how those systems evolve over time. Any AI system is only as reliable as the data it’s trained on and continuously learns from, and hospital data environments are notoriously fragmented, inconsistent, and prone to gaps.

Electronic health records, imaging systems, and administrative platforms often operate in silos, creating discrepancies that can directly impact AI outputs. A model trained on incomplete or biased datasets can produce flawed recommendations that may go unnoticed until the harm has already been done. This is particularly dangerous in clinical settings, where small deviations in accuracy can translate into significant consequences for patients.
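
A lightweight data-quality check run before training or scoring can surface exactly these gaps. The sketch below flags records with missing critical fields or implausible values; the field names and the age range are illustrative assumptions.

```python
# Illustrative data-quality report: count records with missing critical
# fields or implausible values before they reach a model.
CRITICAL_FIELDS = ("patient_id", "age", "lab_result")

def quality_report(records: list) -> dict:
    total = len(records)
    missing = sum(
        1 for r in records if any(r.get(f) is None for f in CRITICAL_FIELDS)
    )
    implausible = sum(
        1 for r in records
        if r.get("age") is not None and not (0 <= r["age"] <= 120)
    )
    return {
        "total_records": total,
        "pct_missing_critical": 100 * missing / total if total else 0.0,
        "pct_implausible_age": 100 * implausible / total if total else 0.0,
    }

print(quality_report([
    {"patient_id": "A1", "age": 57, "lab_result": 4.2},
    {"patient_id": "A2", "age": None, "lab_result": 3.9},  # incomplete record
]))
```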

Compounding this issue is “model drift”: the tendency of a model’s performance to degrade as the data it encounters shifts away from the data it was trained on. As patient populations evolve, new treatment protocols are introduced, and external factors affect operations, the baseline assumptions behind an AI tool can shift. Without continuous monitoring and recalibration, an AI system that once performed reliably may start taking actions or suggesting solutions that depart from its validated behavior.

To address model drift, hospitals must treat AI systems as dynamic, high-risk assets rather than static tools. This means implementing continuous performance monitoring, establishing clear thresholds for acceptable accuracy, and defining ownership for retraining and validation. Data governance must also be strengthened, with standardized practices for data quality, interoperability, and bias detection.
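
To make “clear thresholds for acceptable accuracy” concrete, here is a minimal drift-monitoring sketch. It assumes a feedback stream of predictions paired with labeled outcomes; the 500-case window and the 85% floor are illustrative choices, not clinical guidance.

```python
# Minimal drift monitor: track rolling accuracy over a window of labeled
# outcomes and alert when it drops below an agreed threshold.
from collections import deque

class DriftMonitor:
    def __init__(self, threshold: float = 0.85, window: int = 500):
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # True if prediction matched outcome

    def record(self, prediction, actual) -> None:
        self.outcomes.append(prediction == actual)
        # Only alert once the window is full, so early noise is ignored.
        if len(self.outcomes) == self.outcomes.maxlen and self.accuracy() < self.threshold:
            self.alert()

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

    def alert(self) -> None:
        # In practice this would page the accountable owner and open an
        # incident that triggers validation and possible retraining.
        print(f"ALERT: rolling accuracy {self.accuracy():.1%} below {self.threshold:.0%}")
```

A hospital would feed such a monitor from its clinical feedback loop (for example, confirmed diagnoses) and tie the alert to the incident-response owner defined earlier, so a breach of the threshold triggers revalidation rather than silently accumulating harm.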

Without confronting the risks tied to data quality and model drift, even the best AI governance frameworks will fall short. For healthcare AI systems, which are only as good as the data underpinning them, overlooking this layer of risk creates the potential for a systemic failure sooner or later.

Get it right before you get it running

AI has the potential to transform healthcare by improving efficiency, accuracy, and patient outcomes. But without clear ownership of the risks it surfaces, that very potential can quickly become a liability.

Hospitals cannot afford to treat AI governance as a compliance exercise. It must be treated as a core operational priority: define ownership, structure oversight, and evaluate continuously. Because in healthcare, when something goes wrong, the consequences reach far beyond the question of who’s at fault.

Errol Weiss joined Health-ISAC in 2019 as its first Chief Security Officer and created a threat operations center headquartered in Orlando, Florida to provide meaningful and actionable threat intelligence for IT and infosec professionals in the healthcare sector.

Errol has over 25 years of experience in Information Security, beginning his career with the National Security Agency (NSA), where he conducted penetration tests of classified networks. He created and ran Citigroup’s Global Cyber Intelligence Center and was a Senior Vice President Executive with Bank of America’s Global Information Security team.