The AI Risk No One Is Watching: Secrets Exposure in Enterprise Workflows

Most discussions about enterprise AI risks begin with a familiar concern: employees pasting customer data into chatbots. Privacy and regulatory exposure dominate headlines and board briefings, and research from Deloitte shows that data privacy and security rank among the top AI risks organizations worry about.

Yet the data emerging from real-world enterprise usage tells a different story. The most common sensitive information flowing into AI tools is not personal data at all. It’s secrets and credentials.

API keys, access tokens, webhooks, and authentication artifacts now account for the largest share of sensitive data exposures observed in AI prompts. These disclosures rarely stem from carelessness or malicious intent and instead arise from routine work such as debugging a failed integration, troubleshooting automation, testing code, or resolving a customer issue. As AI becomes embedded into daily workflows, these moments occur constantly and often outside the visibility of traditional security controls.

The implication is clear: as AI adoption expands, organizations are gaining a more accurate picture of where real risks emerge, and governance must evolve to address them.

An overlooked AI data exposure risk is hiding in plain sight

A recent AI usage analysis conducted by Nudge Security examined anonymized telemetry across enterprise environments to understand how AI tools are actually being used in the workplace. Instead of relying on surveys or self-reporting, the research analyzed observed AI activity, integrations, and prompt behavior across enterprise SaaS ecosystems.

The findings provide new insight into where AI risk is actually emerging in enterprise usage. Sensitive data exposures in AI prompts are dominated by operational credentials. Secrets and credentials account for approximately 48 percent of detected sensitive data events, compared with 36 percent for financial data and 16 percent for health-related information. These patterns suggest that the most significant AI data exposure challenge is not privacy leakage, but secrets sprawl.

The same research shows that AI adoption has moved beyond experimentation. AI tools are embedded into workflows, integrated with core business platforms, and increasingly capable of taking autonomous action. Core large language model providers are now nearly ubiquitous, with OpenAI present in 96 percent of organizations and Anthropic in 78 percent.

Research from McKinsey finds 88 percent of organizations report regular AI use in at least one business function, compared with 78 percent a year ago. Meeting intelligence tools, AI-assisted coding platforms, presentation generators, and voice technologies are widely deployed, reflecting how AI has expanded from chat interfaces into everyday workflows. This expansion matters because risk follows usage. As AI becomes embedded in developer environments, collaboration platforms, and customer support workflows, it gains proximity to sensitive systems and operational data.

Adoption has also been driven from the bottom up. A recent KPMG study found that 44 percent of employees use AI tools in ways their employers have not authorized, reflecting how quickly these tools enter daily workflows. Employees install browser extensions, connect assistants, and experiment with integrations to accelerate everyday tasks, often outside centralized procurement processes. Security analysts describe this pattern as shadow AI, in which tools operate inside browsers and SaaS workflows beyond traditional IT visibility. Because these tools can be deployed instantly and require little technical setup, governance programs built around vendor approval processes and acceptable use policies struggle to keep pace with how AI is actually introduced and used across the enterprise.

Why leaked secrets can create immediate operational risk

Personal data remains sensitive and regulated, but secrets carry immediate operational impact. A leaked API key can provide access to production systems. A compromised token can expose repositories. A webhook URL can enable unauthorized automation. Credentials frequently surface in AI prompts during routine workflows: developers paste tokens into chat interfaces while troubleshooting authentication failures, and engineers share configuration snippets to diagnose integration issues. These actions are not unusual. Secrets are embedded within technical workflows and appear in logs, scripts, configuration files, and automation outputs. When teams are under pressure to resolve issues quickly, they may share these artifacts without pausing to consider what sensitive data they contain.
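
To make the pattern concrete, here is a hypothetical integration config of the kind that often gets pasted wholesale into a chat prompt during debugging. Every name and value below is invented for illustration; the point is how many credentials ride along with an otherwise mundane snippet:

```python
# Hypothetical settings a developer might paste into an AI chat while
# debugging a failed webhook delivery. All values are fabricated.
INTEGRATION_CONFIG = {
    "api_base_url": "https://api.example.com/v2",
    "api_key": "sk_live_51Hxxxxxxxxxxxxxxxxxxxxxx",      # live secret key
    "webhook_url": "https://hooks.example.com/services/T000/B000/XXXX",
    "oauth_token": "ghp_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",  # repo-scoped token
    "retry_limit": 3,
}
```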

AI interfaces amplify this behavior. Prompts encourage context sharing. File uploads support richer troubleshooting. Integrated workflows make it easy to move data between systems. Nudge Security’s research found that 17 percent of prompts include copy-and-paste activity or file uploads. In this environment, sensitive credentials can be exposed in seconds.
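
As a rough illustration of what detection at this layer involves, the sketch below scans prompt text against a few widely documented credential formats (AWS access key IDs, GitHub personal access tokens, bearer tokens, and Slack-style webhook URLs). The pattern set and naming are assumptions for illustration, not a description of any vendor's detector; a production scanner would add many more patterns plus entropy and context checks:

```python
import re

# Minimal sketch of prompt-side secrets detection. Each pattern matches a
# well-known credential format; coverage here is deliberately small.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github_pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
    "bearer_token": re.compile(r"\bBearer\s+[A-Za-z0-9\-._~+/]{20,}=*", re.IGNORECASE),
    "slack_webhook": re.compile(r"https://hooks\.slack\.com/services/\S+"),
}

def find_secrets(prompt_text: str) -> list[tuple[str, str]]:
    """Return (pattern_name, matched_text) pairs found in a prompt."""
    hits = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(prompt_text):
            hits.append((name, match.group(0)))
    return hits

if __name__ == "__main__":
    sample = "Auth fails with this key: AKIAABCDEFGHIJKLMNOP -- any ideas?"
    for name, value in find_secrets(sample):
        print(f"Detected {name}: {value[:8]}...")  # surface only a prefix
```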

Traditional governance misses behavioral risk

AI governance programs often focus on formal controls such as policies and approved tools. This approach assumes risk stems from misuse or model behavior. In practice, the most significant exposures occur during routine workflows carried out by well-intentioned employees.

The AI landscape is moving fast, with new technologies released daily. As employees reach for the latest tool, they bypass traditional network controls, which simply cannot keep up. The browser, by contrast, allows direct observation of contextual behavior, providing the flexibility needed to keep pace with the constantly evolving landscape of modern work.

This disconnect explains why organizations can implement strong policies yet still experience sensitive data exposure. Policies establish expectations. Behavior determines outcomes. Effective governance requires visibility into how AI tools are actually used and guardrails that guide safer decisions at the moment data is shared.

Integrations and agents expand exposure scope

The risk profile of an AI tool is shaped by what it can access. Integrations create trusted pathways between systems. OAuth grants, API tokens, and service accounts enable AI tools to retrieve documents, update tickets, or interact with code repositories. Research into enterprise AI adoption highlights that integrations effectively define exposure scope. A misconfigured permission or compromised token can expose entire document repositories or development environments because trusted connections enable data movement at machine speed.

Agentic AI introduces additional complexity. Early deployments often prioritize functionality over least privilege. Permissions granted during experimentation may persist long after initial use cases evolve. Over time, these accumulated permissions create silent risk. Security teams must treat integrations and agent permissions as durable access decisions rather than temporary conveniences.
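
What that review discipline might look like in practice: the sketch below walks a hypothetical inventory of AI integration grants and flags anything stale or broader than a least-privilege allowlist. The inventory format, scope names, and 90-day threshold are all assumptions for illustration; real inventories would come from each platform's admin APIs:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of an OAuth grant or service-account permission
# held by an AI tool.
@dataclass
class Grant:
    tool: str
    scope: str
    last_used: datetime

# Illustrative least-privilege allowlist: scopes an AI integration is
# expected to need. Anything outside it is flagged for review.
ALLOWED_SCOPES = {"tickets:read", "docs:read"}
STALE_AFTER = timedelta(days=90)

def review_grants(grants: list[Grant], now: datetime) -> list[str]:
    findings = []
    for g in grants:
        if g.scope not in ALLOWED_SCOPES:
            findings.append(f"{g.tool}: scope '{g.scope}' exceeds allowlist")
        if now - g.last_used > STALE_AFTER:
            findings.append(f"{g.tool}: '{g.scope}' unused for 90+ days; revoke?")
    return findings

if __name__ == "__main__":
    now = datetime(2025, 6, 1)
    inventory = [
        Grant("meeting-notes-ai", "docs:read", now - timedelta(days=10)),
        Grant("coding-assistant", "repo:write", now - timedelta(days=200)),
    ]
    for finding in review_grants(inventory, now):
        print(finding)
```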

What security teams should do now

Reducing secrets exposure in AI workflows requires a shift from reactive controls to governance that reflects how work actually happens. Security leaders can begin with practical steps that improve visibility, guide safer behavior, and reduce exposure without slowing productivity:

  • Map where AI interactions occur.
    Identify the environments where data enters AI tools, including browser extensions, developer environments, automation platforms, and chat interfaces. Continuous visibility into these touchpoints provides the foundation for effective governance.
  • Intervene at the moment decisions are made.
    Implement secrets scanning, redaction prompts, and just-in-time warnings that alert users when credentials or sensitive artifacts are about to be shared (a minimal sketch appears after this list). Timely guidance reduces accidental exposure while preserving workflow speed.
  • Apply integration governance with the same rigor as OAuth apps.
    Review AI tools connected to email, documents, ticketing systems, and repositories. Enforce least-privilege scopes and conduct periodic permission reviews to reduce long-term exposure risk.
  • Create safer workflows for troubleshooting and support.
    Provide redacted templates, secure connectors, and internal tools for analyzing logs or configuration files so teams can use AI for problem solving without exposing live credentials.
  • Establish guardrails for agent-based automation.
    Require human approval for high-impact actions, log agent activity centrally, and use scoped access tokens to prevent permission sprawl and unintended automation (see the second sketch after this list).
  • Ground training in real workflows.
    Education is most effective when it reflects common tasks such as debugging integrations, reviewing logs, or uploading files. Practical examples help employees recognize risk at the moment it arises.
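
For the "intervene at the moment decisions are made" item above, here is a minimal sketch of a just-in-time guardrail that redacts detected credentials and warns the user before a prompt leaves the machine. The patterns and the pre-submit hook are illustrative assumptions, not a specific product's behavior:

```python
import re

# Two illustrative credential patterns; see the detection sketch earlier
# for a note on how thin this coverage is relative to a real scanner.
PATTERNS = [
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),     # AWS access key ID
    re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),  # GitHub personal access token
]

def redact_and_warn(prompt_text: str) -> tuple[str, bool]:
    """Replace detected secrets with a placeholder; report whether any were found."""
    redacted = prompt_text
    found = False
    for pattern in PATTERNS:
        redacted, n = pattern.subn("[REDACTED_SECRET]", redacted)
        found = found or n > 0
    return redacted, found

# Hypothetical pre-submit hook: in a browser extension or proxy, this
# would run before the prompt is sent to the AI tool.
def on_submit(prompt_text: str) -> str:
    safe_text, had_secret = redact_and_warn(prompt_text)
    if had_secret:
        print("Warning: a credential was detected and redacted before sending.")
    return safe_text
```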
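
And for the agent-guardrail item, a toy sketch of a human-approval gate that blocks high-impact agent actions until a person signs off, while logging every request centrally. The action names, risk tiers, and approval mechanism are invented for illustration:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-audit")

# Illustrative risk tiers: which agent actions may run autonomously and
# which require a human in the loop.
HIGH_IMPACT = {"delete_records", "rotate_credentials", "merge_pull_request"}

def execute_agent_action(action: str, params: dict, approver=None) -> bool:
    """Run an agent action, requiring explicit human approval for high-impact ones."""
    log.info("agent requested %s with %s", action, params)  # central audit trail
    if action in HIGH_IMPACT:
        if approver is None or not approver(action, params):
            log.warning("blocked %s: no human approval", action)
            return False
    # ... dispatch to the actual tool using a narrowly scoped token here ...
    log.info("executed %s", action)
    return True

if __name__ == "__main__":
    deny_all = lambda action, params: False  # stand-in for a real approval prompt
    execute_agent_action("summarize_ticket", {"id": 42})                    # runs
    execute_agent_action("rotate_credentials", {"system": "ci"}, deny_all)  # blocked
```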

These measures align governance with daily work, enabling organizations to reduce secrets exposure while supporting the productivity gains that drive AI adoption.

From AI policy to AI behavioral governance

AI is evolving from a productivity tool into an operational layer woven into daily work, with research showing AI agents are now embedded across enterprise workflows and forecasts projecting task-specific agents inside a large share of enterprise applications. As adoption deepens, the dominant risks extend beyond privacy violations or model misuse. They arise from how people, permissions, and platforms intersect in real workflows.

Secrets exposure in AI prompts is a visible signal of this broader transformation. It highlights the limitations of perimeter-based controls and policy-only governance and reinforces the need for guardrails that operate where decisions are made. Organizations that adapt will move beyond reactive controls and toward governance models grounded in real behavior. They will treat integrations and permissions as enduring access relationships. They will guide employees at the moment of action rather than relying solely on policy enforcement.

AI is moving from tool to collaborator in modern work. Securing that collaboration requires governance that keeps pace, protecting critical data while guiding safer decisions and sustaining the speed and efficiency AI makes possible.

Russell Spitler is the co-founder and CEO of Nudge Security, the leader in SaaS and AI security governance. Russell has over 20 years of experience building products and startup companies that secure organizations worldwide. Before Nudge, Russell served in product, engineering, and strategy leadership roles at AT&T Cybersecurity, AlienVault (acquired by AT&T Cybersecurity), and Fortify Software. At AlienVault, he co-founded the Open Threat Exchange, the world’s largest open threat intelligence community with over 370,000 global participants today.