

Everyone Wants AI in Risk Management. Few Are Ready for It


Everyone’s racing to deploy AI. But in third-party risk management (TPRM), that race could be the biggest risk of all.

AI depends on structure: clean data, standardized processes, and consistent outcomes. Yet most TPRM programs lack those foundations. Some organizations have dedicated risk leaders, defined programs, and digitized data. Others manage risk ad hoc through spreadsheets and shared drives. Some operate under tight regulatory scrutiny, while others accept far greater risk. No two programs are alike, and maturity still varies widely after 15 years of effort.

This variability means AI adoption in TPRM won’t happen through speed or uniformity. It will happen through discipline, and that discipline starts with being realistic about your program’s current state, goals, and risk appetite.

How to Know if Your Program Is Ready for AI

Not every organization is ready for AI, and that’s okay. A recent MIT study found that 95% of GenAI projects are failing. And according to Gartner, 79% of technology buyers say they regret their latest purchase because the project wasn’t properly planned.

In TPRM, AI readiness isn’t a switch you flip. It’s a progression, and a reflection of how structured, connected, and governed your program is. Most organizations fall somewhere along a maturity curve that ranges from ad hoc to agile, and knowing where you sit is the first step toward using AI effectively and responsibly.

At the early stages, risk programs are largely manual, dependent on spreadsheets, institutional memory, and fragmented ownership. There’s little formal methodology or consistent oversight of third-party risk. Vendor information might live in email threads or the heads of a few key people, and the process works, until it doesn’t. In this environment, AI will struggle to separate noise from insight, and technology will magnify inconsistency rather than eliminate it.

As programs mature, structure begins to form: workflows become standardized, data is digitized, and accountability expands across departments. Here, AI starts to add real value. But even well-defined programs often remain siloed, limiting visibility and insight.

True readiness emerges when those silos break down and governance becomes shared. Integrated and agile programs connect data, automation, and accountability across the enterprise, allowing AI to find its footing — turning disconnected information into intelligence and supporting faster, more transparent decision-making.
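To make “knowing where you sit” on that curve a little less abstract, here is a minimal, hypothetical self-assessment sketch in Python. The questions, stage labels, and thresholds are illustrative assumptions drawn loosely from the stages described above, not a formal maturity model.

```python
# Hypothetical readiness self-check: score a TPRM program along the
# ad hoc -> agile curve from a handful of yes/no questions.
# Questions and thresholds are illustrative, not a formal model.
CHECKS = {
    "vendor inventory is centralized and digitized": False,
    "assessment workflows are standardized": False,
    "ownership is defined across departments": False,
    "data is integrated across silos": False,
    "governance and audit trails are in place": False,
}

def maturity(answers: dict[str, bool]) -> str:
    """Map a count of satisfied checks to a rough maturity stage."""
    score = sum(answers.values())
    if score <= 1:
        return "ad hoc"
    if score <= 3:
        return "defined"
    return "integrated/agile"

print(maturity(CHECKS))  # "ad hoc" until the basics above are true
```

A real assessment would weigh far more dimensions, but even a crude score forces the honest conversation the maturity curve is meant to start.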

By understanding where you are, and where you want to go, you can build the foundation that turns AI from a shiny promise into a true force multiplier.

Why One Size Doesn’t Fit All, Even at Equal Program Maturity

Even if two companies both have agile risk programs, they won’t chart the same course for AI implementation, nor will they see the same results. Every company manages a different network of third parties, operates under unique regulations, and accepts different levels of risk.

Banks, for example, face stringent regulatory requirements around data privacy and protection within the services provided by third-party outsourcers. Their risk tolerance for errors, outages, or breaches is near zero. Consumer goods manufacturers, by contrast, might accept greater operational risk in exchange for flexibility or speed, but can’t afford disruptions that affect critical delivery timelines.

Each organization’s risk tolerance defines how much uncertainty it’s willing to accept to achieve its goals, and in TPRM, that line moves constantly. That’s why off-the-shelf AI models rarely work. Applying a generic model in a space this variable creates blind spots instead of clarity, which is why purpose-built, configurable solutions are needed.

The smarter approach to AI is modular. Deploy AI where data is strong and objectives are clear, then scale from there. Common use cases include:

  • Supplier research: Use AI to sift through thousands of potential vendors, identifying the lowest-risk, most capable, or most sustainable partners for an upcoming project.
  • Assessment: Apply AI to evaluate supplier documentation, certifications, and audit evidence. Models can flag inconsistencies or anomalies that may indicate risk, freeing analysts to focus on what matters most.
  • Resilience planning: Use AI to simulate ripple effects of disruption. How would sanctions in a region or a regulatory ban on a material impact your supply base? AI can process complex trade, geographic, and dependency data to model outcomes and strengthen contingency plans (a minimal sketch follows this list).
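
To ground the resilience-planning item, here is a minimal sketch of the ripple idea: propagate a disruption through a toy supplier dependency graph. The graph, supplier names, and regions below are invented for illustration; a real model would draw on the trade, geographic, and dependency data described above.

```python
# Minimal "ripple" sketch over a toy supplier dependency graph.
# All suppliers, regions, and edges are hypothetical examples.
from collections import deque

# supplier -> suppliers that depend on it (downstream)
DEPENDENTS = {
    "mill_vn": ["fabric_tw"],
    "fabric_tw": ["assembler_mx"],
    "chip_tw": ["assembler_mx"],
    "assembler_mx": ["dc_us"],  # the assembler feeds a US distribution center
}
REGION = {"mill_vn": "VN", "fabric_tw": "TW", "chip_tw": "TW",
          "assembler_mx": "MX", "dc_us": "US"}

def ripple(sanctioned_region: str) -> set[str]:
    """Return every node downstream of suppliers in the affected region."""
    start = [s for s, r in REGION.items() if r == sanctioned_region]
    affected, queue = set(start), deque(start)
    while queue:
        for nxt in DEPENDENTS.get(queue.popleft(), []):
            if nxt not in affected:
                affected.add(nxt)
                queue.append(nxt)
    return affected

print(ripple("TW"))  # {'fabric_tw', 'chip_tw', 'assembler_mx', 'dc_us'}
```

The traversal itself is trivial; the value comes from the breadth and freshness of the dependency data behind it.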

Each of these use cases delivers value when deployed intentionally and supported by governance. The organizations that see real success with AI in risk and supply chain management aren’t the ones that automate the most. They’re the ones that start small, automate with intention, and adapt frequently.

Building Toward Responsible AI in TPRM

As organizations begin experimenting with AI in TPRM, the most effective programs balance innovation with accountability. AI should strengthen oversight, not replace it.

In third-party risk management, success isn’t only measured by how fast you can assess a vendor; it’s measured by how accurately risks are identified and how effectively corrective actions are implemented. When a supplier fails or a compliance issue makes headlines, no one asks how efficient the process was. They ask how it was governed.

That question, “how is it governed?”, is quickly becoming global. As AI adoption accelerates, regulators around the world are defining what “responsible” means in very different ways. The EU AI Act has set the tone with a risk-based framework that demands transparency and accountability for high-risk systems. In contrast, the United States is following a more decentralized path, emphasizing innovation alongside voluntary standards like the NIST AI Risk Management Framework. Other regions, including Japan, China, and Brazil, are developing their own variations, blending human rights, oversight, and national priorities into distinct models of AI governance.

For global enterprises, these diverging approaches introduce new layers of complexity. A vendor operating in Europe may face stringent reporting obligations, while one in the U.S. may have looser but still evolving expectations. Each definition of “responsible AI” adds nuance to how risk must be assessed, monitored, and explained.

Risk leaders need adaptable oversight structures that can flex with shifting regulations while maintaining transparency and control. The most advanced programs are embedding governance directly into their TPRM operations, ensuring that every AI-driven decision can be explained, traced, and defended — no matter the jurisdiction.
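
One hedged illustration of what an “adaptable oversight structure” could look like in practice: represent jurisdiction-specific obligations as configuration data rather than hard-coded logic, so oversight rules can change without reworking the program. The obligations listed below are placeholders for illustration, not legal requirements.

```python
# Illustrative sketch: jurisdiction obligations kept as data, not code.
# Entries are placeholders, not legal guidance.
OBLIGATIONS = {
    "EU": ["risk classification", "transparency report", "human oversight"],
    "US": ["NIST AI RMF mapping", "internal model inventory"],
    "BR": ["impact assessment"],
}
BASELINE = ["data provenance log", "decision audit trail"]  # applied everywhere

def controls_for(vendor_jurisdictions: list[str]) -> list[str]:
    """Baseline controls plus every applicable jurisdiction's obligations."""
    controls = list(BASELINE)
    for j in vendor_jurisdictions:
        for item in OBLIGATIONS.get(j, []):
            if item not in controls:
                controls.append(item)
    return controls

print(controls_for(["EU", "US"]))
```

Keeping obligations as data means a regulatory change becomes a table update, not a re-engineering project.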

How to Get Started

Turning responsible AI into reality requires more than policy statements. It means putting the right foundations in place: clean data, clear accountability, and continuous oversight. Here’s what that looks like.

  • Standardize from the outset. Establish clean, consistent data and aligned processes before automation. Implement a phased approach that integrates AI step-by-step into your risk program, testing, validating, and refining each phase before scaling. Make data integrity, privacy, and transparency non-negotiable from the start. AI that can’t explain its reasoning, or that relies on unverified inputs, introduces risk rather than reducing it.
  • Start small and experiment often. Success isn’t about speed. Launch controlled pilots that apply AI to specific, well-understood problems. Document how models perform, how decisions are made, and who’s accountable for them. Identify and mitigate the critical challenges, including data quality, privacy, and regulatory hurdles, that prevent most generative AI projects from delivering business value.
  • Always govern. AI should help anticipate disruption, not cause more of it. Treat AI like any other form of risk. Establish clear policies and internal expertise for evaluating how your organization and its third parties use AI. As regulations evolve worldwide, transparency must remain constant. Risk leaders should be able to trace every AI-driven insight back to its data sources and logic, ensuring decisions hold up under scrutiny from regulators, boards, and the public alike (a minimal sketch of such a decision log follows this list).
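
As a deliberately simplified sketch of that traceability requirement, the snippet below logs each AI-assisted decision with its inputs, model version, and accountable owner so it can be reconstructed later. All field names and values are illustrative assumptions, not a prescribed schema.

```python
# Hedged sketch of an AI decision log for traceability.
# Field names and values are illustrative, not a prescribed schema.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIDecisionRecord:
    vendor_id: str
    decision: str               # e.g. "flagged", "approved"
    model_version: str          # which model produced the insight
    data_sources: list[str]     # provenance of the inputs
    rationale: str              # human-readable explanation
    accountable_owner: str      # who signs off
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def log_decision(record: AIDecisionRecord) -> None:
    # In production, write to an immutable audit store instead of stdout.
    print(json.dumps(asdict(record), indent=2))

log_decision(AIDecisionRecord(
    vendor_id="V-1042",
    decision="flagged",
    model_version="risk-screen-0.3",
    data_sources=["SOC 2 report 2024", "sanctions list 2025-06"],
    rationale="Expired certification and unresolved audit finding.",
    accountable_owner="jane.doe@company.example",
))
```

The fields are the point: inputs, logic, and ownership captured at decision time, so an AI-driven insight can be defended long after it was produced.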

There’s no universal blueprint for AI in TPRM. Every company’s maturity, regulatory environment, and risk tolerance will shape how AI is implemented and delivers value, but all programs should be built with intention. Automate what’s ready, govern what’s automated, and continuously adapt as the technology, and the rules around it, evolve.

Dave Rusher is Chief Customer Officer at Aravo, where he advises global organizations on third-party risk management and the responsible adoption of AI. He has more than 30 years of experience in the enterprise software industry and is passionate about helping customers solve critical business issues with solutions that support their long-term success and strategic objectives.