Manifest Report Reveals AI Readiness Gap as Enterprise Security Teams Struggle with Visibility and Governance

A new report from Manifest, “Beyond the Black Box: How AI Is Forcing a Rethink of the Software Supply Chain,” reveals a growing disconnect between executive confidence and operational reality in AI security readiness. Based on a survey of more than 300 security leaders and practitioners across the United States and EMEA, the study finds that while most executives believe their organizations are prepared for AI-driven supply chain risks, security teams on the ground report significant governance gaps, shadow AI usage, and limited visibility into the components powering modern software systems.
The findings highlight a central tension emerging in enterprise technology: AI adoption is accelerating rapidly across products and workflows, but the mechanisms required to track, govern, and secure these systems are not keeping pace.
AI Is Recreating Supply Chain Security Problems in New Forms
For more than a decade, organizations have worked to improve software supply chain security by tracking dependencies, monitoring vulnerabilities, and establishing governance frameworks. However, the Manifest report argues that AI is effectively reintroducing many of the same risks—now spread across models, datasets, agents, and third-party AI services.
AI components often operate as opaque systems. Enterprises frequently cannot fully explain how models were trained, what datasets were used, or which external services are embedded within their applications. As a result, organizations face a new class of supply chain risk: software systems they cannot reliably inspect, verify, or monitor over time.
The report emphasizes that visibility is already slipping. 63% of organizations report the presence of “shadow AI,” referring to AI tools or integrations adopted without oversight from security, procurement, or risk management teams.
Daniel Bardenstein, CEO and co-founder of Manifest, said the data reveals a widening gap between executive perception and operational reality: “Executive confidence in AI readiness does not match what AppSec teams are dealing with day to day. Leaders believe governance is in place, but practitioners are seeing unmanaged AI usage, unclear ownership, and blind spots in what is actually running across products and vendors.”
Executives Say They Are Ready, Security Teams Disagree
One of the most striking findings in the report is the divergence between leadership confidence and frontline security assessments.
Nearly 80% of security executives say their organizations have mature AI security practices, yet only about 40% of application security (AppSec) teams agree with that assessment.
AppSec teams are often the first to encounter operational failures in governance frameworks because they interact directly with the software supply chain. These practitioners report encountering high volumes of alerts, unclear ownership of security responsibilities, and fragmented tooling across development and security environments.
According to the report, 47% of respondents identified siloed teams and unclear ownership as the biggest obstacle to improving software supply chain security.
The result is an environment where organizations may believe they have strong security programs while critical gaps remain in visibility, accountability, and operational coordination.
The SBOM Paradox: Generated but Rarely Used
Another major insight from the study concerns Software Bills of Materials (SBOMs)—inventories of software components designed to help organizations track dependencies and vulnerabilities.
SBOM adoption has expanded significantly in recent years, particularly due to regulatory pressure and supply chain attacks. Yet the Manifest research suggests many organizations treat SBOM generation as a compliance checkbox rather than an operational capability.
The report highlights several key statistics:
- 60% of organizations generate SBOMs
- More than half do not actively manage or consume them in practice
- 79.6% use Software Composition Analysis (SCA) tools
- SBOM operational usage remains far lower at 41.8%
Without centralized intake, normalization, policy enforcement, and continuous monitoring, SBOMs become static artifacts rather than active risk management tools.
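In practice, "consuming" an SBOM means parsing it and continuously evaluating its components against policy rather than filing it away. A minimal sketch of that idea, assuming a CycloneDX-style JSON SBOM and a hypothetical version blocklist (the component data and policy here are illustrative, not from the report):

```python
import json

# A minimal CycloneDX-style SBOM fragment (illustrative sample data).
SBOM_JSON = """
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"name": "requests", "version": "2.31.0",
     "purl": "pkg:pypi/requests@2.31.0"}
  ]
}
"""

# Hypothetical policy: (name, version) pairs the organization has flagged.
BLOCKLIST = {("log4j-core", "2.14.1")}

def flag_components(sbom_text: str) -> list:
    """Parse an SBOM and return names of components that violate policy."""
    sbom = json.loads(sbom_text)
    return [c["name"] for c in sbom.get("components", [])
            if (c["name"], c["version"]) in BLOCKLIST]

print(flag_components(SBOM_JSON))  # ['log4j-core']
```

A real pipeline would pull SBOMs from a central intake point, normalize formats (CycloneDX, SPDX), and re-run checks as new vulnerabilities are disclosed; the point of the sketch is that the SBOM becomes an input to automation rather than a static artifact.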
Security teams also express skepticism toward traditional SCA platforms themselves: 56.3% of respondents say SCA tools create noise or slow down development teams, while 46.4% doubt these tools meaningfully reduce real-world software risk.
This disconnect illustrates a broader maturity challenge: organizations can generate large volumes of security data but often lack the operational infrastructure to translate those signals into actionable risk reduction.
Transparency Data Improves Security and Deployment Speed
Despite these challenges, the research shows that organizations that achieve meaningful transparency across their software supply chains gain measurable benefits.
Nearly half of respondents (49.4%) report receiving verifiable transparency data—such as SBOMs, provenance records, or signed binaries—from vendors during procurement.
When this information is reliable and operationalized, the impact is significant:
- 64% report faster implementation of new technology
- 61.6% report quicker resolution of security issues
- 15.5% report reduced downtime
Organizations that lack such transparency pay what the report describes as a “transparency tax”—the additional time, cost, and risk associated with manually investigating opaque software components.
Highly regulated industries illustrate this challenge. Financial services and healthcare organizations report some of the lowest rates of receiving verifiable transparency data from vendors—14.3% and 19.5% respectively—despite having the greatest need for it.
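Transparency data of this kind is valuable precisely because it can be checked mechanically during procurement. A minimal sketch of digest verification, assuming a hypothetical provenance record that carries a SHA-256 hash of the delivered artifact (the record shape is an assumption, not a standard):

```python
import hashlib

def verify_artifact(artifact_bytes: bytes, provenance: dict) -> bool:
    """Compare an artifact's SHA-256 digest against its provenance record.

    `provenance` is a hypothetical record shape, e.g.
    {"name": "vendor-lib", "digest": {"sha256": "..."}}.
    """
    expected = provenance["digest"]["sha256"]
    actual = hashlib.sha256(artifact_bytes).hexdigest()
    return actual == expected

blob = b"example vendor artifact"
record = {"name": "vendor-lib",
          "digest": {"sha256": hashlib.sha256(blob).hexdigest()}}

print(verify_artifact(blob, record))        # True: digest matches
print(verify_artifact(b"tampered", record)) # False: content changed
```

Signed binaries and provenance attestations add a cryptographic signature on top of the digest, but the buyer-side workflow is the same: verify what the vendor shipped instead of trusting the label.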
AI Adoption Is Accelerating Across Enterprises
The study also highlights how quickly AI has become embedded across enterprise software ecosystems.
Virtually no organizations surveyed reported avoiding AI entirely. Instead, companies are experimenting across a range of approaches:
- 80.2% use approved commercial AI models internally
- 79.9% make broad use of commercial AI tools such as ChatGPT or Cursor
- 56.7% train open-weight models on internal data
- 29.3% build custom AI models from scratch
Financial services and technology companies are leading adoption. Nearly 90% of financial services organizations report approved internal AI models, and 46.9% build custom models from scratch, far above the overall average.
These sectors have strong incentives to move quickly. In financial services, AI directly affects fraud detection, risk management, and revenue generation. In technology firms, AI increasingly sits at the core of product offerings and platform capabilities.
However, the rapid pace of adoption often outstrips governance.
Shadow AI Is Becoming a Widespread Problem
The research confirms that shadow AI—tools or models deployed without formal oversight—is already widespread.
Only 34.8% of respondents report having no shadow AI in their organizations, while the remainder acknowledge at least some unmanaged AI usage.
This pattern mirrors earlier waves of “shadow IT,” where employees adopted cloud services or SaaS tools outside official procurement processes.
Regional differences are also emerging. Organizations in EMEA report higher rates of operating without shadow AI (45.7%), likely due to stronger regulatory frameworks and stricter procurement processes compared with other regions.
Nevertheless, the report warns that traditional security tools were never designed to track AI models, datasets, and services across distributed development environments.
Licensing and Legal Risks Are Another Major Blind Spot
Beyond technical governance, the study also highlights legal and compliance challenges associated with AI adoption.
Understanding the licensing terms, intellectual property rights, and usage restrictions of AI models and datasets remains difficult for many organizations. The survey found:
- 93% of respondents say their organization has room for improvement in managing AI licensing and IP obligations
- 54.6% strongly agree this remains a major challenge
These risks become particularly acute when organizations train open-weight models on internal data or combine proprietary datasets with third-party AI components.
Without stronger governance frameworks, companies could inadvertently introduce licensing violations or compliance exposure into production systems.
Operational Alignment May Be the Real Challenge
While security tooling continues to evolve, the report suggests that the biggest barrier to effective AI supply chain security may not be technology itself.
Instead, many organizations struggle with fragmented ownership, disconnected workflows, and the absence of a shared system of record for software and AI components.
The most frequently cited constraints include:
- 47.3% organizational constraints
- 36.3% insufficient skills
- 35.7% budget limitations
- 34.8% lack of management understanding
- 32.6% staffing shortages
These operational gaps make it difficult for security signals to translate into consistent policy enforcement or measurable risk reduction.
Why AI Supply Chain Security Is Becoming a Strategic Priority
As AI becomes embedded in every layer of enterprise software, the concept of the software supply chain is expanding to include models, training datasets, inference services, and third-party AI platforms.
The Manifest report concludes that organizations must move beyond point-in-time visibility tools and build continuous, operational control over their AI supply chains.
This includes:
- Tracking all AI models used across development environments
- Verifying the provenance and licensing of training data
- Enforcing governance policies during development and deployment
- Maintaining continuous inventories similar to SBOMs for AI components
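A continuous inventory like the one the report describes, sometimes called an "AI-BOM," could be sketched as a registry of AI components with an automated governance check. All field names and the disallowed-license list below are illustrative assumptions, not a published schema:

```python
from dataclasses import dataclass, field

@dataclass
class AIComponent:
    """One entry in a hypothetical AI component inventory ("AI-BOM")."""
    name: str
    version: str
    license: str                              # license of the model weights
    training_data: list = field(default_factory=list)
    provenance: str = "unknown"               # e.g. vendor attestation or build record

# Illustrative governance policy: licenses disallowed in production.
DISALLOWED_LICENSES = {"non-commercial", "unknown"}

def audit(inventory: list) -> list:
    """Return names of components that fail the license policy."""
    return [c.name for c in inventory if c.license in DISALLOWED_LICENSES]

inventory = [
    AIComponent("internal-classifier", "1.2", "apache-2.0",
                ["crm_tickets_2024"], "internal build"),
    AIComponent("vendor-llm", "0.9", "non-commercial"),
]
print(audit(inventory))  # ['vendor-llm']
```

Run continuously against every development environment, this kind of inventory gives security teams the same footing for models and datasets that SBOMs provide for conventional dependencies.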
Without these mechanisms, the gap between AI adoption and AI governance will continue to widen.
And as the study makes clear, that gap already exists inside many enterprises today.