Interviews
Howard Ting, CEO of Opal Security – Interview Series

Howard Ting, CEO of Opal Security, is a seasoned cybersecurity and technology executive who has led Opal Security since November 2025. Prior to this role, he served as Executive in Residence at Greylock after spending over five years at Cyberhaven as both CEO and board director, guiding the company through its mission to protect data and enable secure innovation. His background includes strategic marketing leadership positions at Redis Labs and Zscaler, as well as senior marketing and product roles at Nutanix, Palo Alto Networks, Cisco (via Securent), Microsoft, RSA Security, and early experience in M&A at Banc of America Securities. This blend of operational leadership, go-to-market expertise, and deep cybersecurity domain experience positions him uniquely at the helm of a fast-growing security platform.
Opal Security is a modern identity-centric access management company that provides enterprises with a centralized platform to govern and secure who has access to what across cloud, SaaS, and internal systems. The platform delivers unified visibility into identity and access paths, supports self-service and just-in-time access workflows, and automates access reviews to enforce least-privilege policies at scale, helping organizations reduce risk and improve compliance in dynamic environments that include human users, service accounts, and AI agents.
You recently stepped into the CEO role at Opal Security after leading Cyberhaven through significant scale and holding senior roles at companies like Palo Alto Networks, Nutanix, Cisco, RSA Security, Redis, and Microsoft. What drew you to Opal at this moment in your career, and how does your prior experience shape how you’re thinking about access, identity, and AI-native security today?
Managing access is getting harder everywhere. More identities, more machines, more automation—and the meaningful access increasingly originates inside engineering and cloud workflows, not traditional IT. Standard IAM and IGA tools weren’t built for any of this, and identity-driven threats aren’t waiting for them to catch up.
That’s what drew me to Opal. The scale of this problem is massive, and Opal is already aligned with where the market is heading. I’ve seen versions of this pattern before. At RSA, Palo Alto Networks, and Cyberhaven, I watched the multi-factor authentication, next-generation firewall, and data lineage revolutions unfold in real time, and the dynamics here are strikingly similar: a category-defining problem accelerating faster than most vendors can respond, with a narrow window for the right team to own it.
Opal has that team. The engineering and product foundation is strong, and the customer roster, built through direct and focused work, speaks for itself. Every customer conversation I’ve had reinforced the same signal: this problem is accelerating, and Opal is the one solving it.
What I’ve learned from building and scaling teams across enterprises is that execution gets radically simpler when product, engineering, and go-to-market share the same level of customer focus. When everyone sees the same problem and the same opportunity, you move fast without losing precision. Opal already has that foundation. My job is to build on it—and to make sure our customers feel the full force of this team behind them as we scale.
Having spent years working across identity, cloud infrastructure, and enterprise security, where do you think traditional access control models are breaking down as organizations adopt AI more deeply?
The honest answer is that most traditional access control models were built to solve authentication — and for humans, that problem is largely solved. IDPs and modern auth methods handle “who are you?” reasonably well. For non-human identities, API keys and secrets management get you part of the way there. But authorization — “what should you be allowed to do, and for how long?” — remains deeply unsolved for both humans and machines, and that’s where the real risk lives.
What’s compounding this is that engineering teams now operate through automation, infrastructure as code, and AI-assisted tooling that generates new permissions as part of everyday development. Access doesn’t change slowly through IT workflows anymore — it’s created, modified, and expanded programmatically, often without anyone reviewing what was just granted. The result is a growing sprawl of over-permissioned accounts, fragmented privilege tools, and governance that’s expensive, reactive, and largely blind to what’s actually happening.
And I’ll be direct about something else: there’s an enormous amount of AI washing in this market right now. Vendors are rushing to attach “AI” to legacy architectures, which is obscuring a critical reality for buyers — a feasible, validated security and governance framework for AI agents still doesn’t exist. The hype is outpacing the actual controls, and that gap is where real exposure builds.
That’s what makes Opal’s approach different. Rather than bolting governance onto workflows after the fact, Opal models access directly from the systems engineers already use, applies policy in real time, and gives security teams a way to guide authorization decisions without creating friction. When governance fits naturally into how engineering actually works, it stops being a blocker and becomes infrastructure you can trust.
Opal focuses on managing who and what can access sensitive systems in modern, cloud-native environments. What security problems are most underestimated by companies building with AI agents and automated workflows?
The most underestimated problems aren’t actually new. They’re the ones that have been quietly compounding for years around human identities. Basic governance hygiene like joiner/mover/leaver workflows, just-in-time access, and user access reviews has been overlooked or duct-taped together at most organizations for a long time. Compliance obligations like SOX haven’t gone away either; they’ve just gotten harder to satisfy as environments grow more complex. None of this is glamorous, but it’s exactly the foundation that breaks when you layer AI agents on top.
Organizations are integrating AI to streamline workflows and eliminate mundane tasks, but in doing so they’re introducing non-human identities that multiply access relationships in ways existing tools were never designed to handle. The result is a messy web where human access was already under-governed and machine access is now expanding with even less visibility. Access decisions need to be explainable, time-bound, tied to real usage, and continuously monitored, but most teams are still relying on static, one-size-fits-all systems that can’t deliver any of that. Meanwhile, coding agents are generating and deploying code with embedded permissions, interacting with infrastructure directly, and operating with access that no one is reviewing through a traditional security lens — a compliance and security exposure most organizations haven’t even begun to scope.
Companies building with AI agents tend to focus on the novelty of the technology while underestimating how quickly access sprawl and invisible permissions become a serious liability — especially when the human identity foundation underneath was already fragile.
You’ve seen security evolve across multiple generations of infrastructure. What fundamentally changes when machine identities and AI agents start to outnumber human users?
When machine identities outnumber human users, it becomes increasingly difficult to monitor who has access to what. Traditional IAM wasn’t designed for this reality, and most HR and IAM platforms only provide partial visibility, especially as identity oversight expands beyond a single team. To successfully monitor these types of identities, we need to give AI agents clear ownership, scoped permissions, and full auditability from the start. Opal Security addresses this by modeling humans, services, and agents within a single framework, as opposed to treating them as separate entities.
Auditors now expect to review human and agentic behavior side by side when analyzing access reviews. Agents typically inherit the permission sets of the users deploying them, but certain use cases call for agents that identify as service accounts with custom (typically down-scoped) permission sets.
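That down-scoping pattern can be sketched in a few lines: an agent should hold at most the intersection of its deploying user’s permissions and the narrowest scope its task requires. The permission strings and the `TASK_SCOPES` mapping below are hypothetical illustrations, not Opal’s actual API.

```python
# Hypothetical sketch: deriving a down-scoped permission set for an AI agent.
# All names here (USER_PERMISSIONS, TASK_SCOPES, the permission strings)
# are illustrative assumptions, not a real product schema.

USER_PERMISSIONS = {
    "alice": {"repo:read", "repo:write", "db:read", "db:write", "billing:read"},
}

# Each agent task declares the narrowest scope it needs to function.
TASK_SCOPES = {
    "code-review-agent": {"repo:read"},
    "migration-agent": {"repo:read", "db:read", "db:write"},
}

def agent_permissions(deploying_user: str, task: str) -> set[str]:
    """An agent may hold at most the intersection of the deploying
    user's permissions and the task's declared scope."""
    user_perms = USER_PERMISSIONS.get(deploying_user, set())
    return user_perms & TASK_SCOPES.get(task, set())

print(agent_permissions("alice", "code-review-agent"))  # {'repo:read'}
```

The key property is that the agent can never exceed the human who launched it, and sensitive grants the task never declared (here, `billing:read`) are stripped automatically rather than inherited by default.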
AI is increasingly both an attack surface and a defensive tool. From your perspective, where does AI meaningfully improve security outcomes today, and where does it still introduce new risks?
AI is an immensely powerful tool and the basis of our product. Opal Security uses AI to monitor, detect, and prevent irregular access across organizations. AI also changes identity security because it introduces actors that don’t just authenticate and execute tasks—they reason, adapt, and act autonomously. Traditional identity systems were built for people and static service accounts, where access changed slowly and intent was relatively predictable.
But AI agents break that model. The risk is loss of visibility and accountability. Agents can spin up and down dynamically, chain permissions across systems, and act on behalf of users, other agents, or themselves. No secured resources may be “breached,” yet sensitive actions can still occur because everything was technically authorized. That’s the new threat surface. The way to manage this is through a unified, intelligent platform that understands and secures all entity types through context, behavior, and continuous adaptation.
This unified view forms the foundation for security—you can’t protect what you can’t see or understand. At Opal, we believe in understanding every relationship. It’s not enough to know who’s in the system. You have to know who’s connected to what, and why.
As AI coding agents and automated systems gain broader permissions inside organizations, how should security teams rethink least-privilege access in practice, not just in theory?
Modern identity access needs to treat human and machine identities equally, so companies can have the visibility, automation and trust needed to scale safely in the AI era. In practice, this means granting permissions only when needed, and continuously reviewing and adjusting access as tasks or roles change. Automated monitoring and risk-based controls become essential, ensuring that AI tools can operate efficiently without creating unnecessary security exposure.
It’s helpful to enable JIT by default for agents, but make it frictionless: automatically approve specific resources where appropriate, or if the user operates in an environment where all data is sensitive, restrict usage to sandboxed VMs.
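As a rough illustration of that “JIT by default, but frictionless” posture, here is a minimal routing sketch. The resource tiers, names, and TTL are assumed values for illustration only, not a real Opal configuration.

```python
# Hypothetical sketch of "JIT by default" routing for agent access requests.
# Tier membership and the 4-hour TTL are assumptions, not product defaults.

from datetime import datetime, timedelta, timezone

AUTO_APPROVE = {"staging-db", "ci-logs"}          # low-risk: grant immediately
SANDBOX_ONLY = {"customer-pii", "prod-secrets"}   # sensitive: sandboxed VM only

def evaluate_request(resource: str, ttl_hours: int = 4) -> dict:
    """Route a just-in-time request: auto-approve, sandbox, or escalate.
    Every outcome is time-bound; nothing becomes a standing grant."""
    now = datetime.now(timezone.utc)
    if resource in SANDBOX_ONLY:
        decision = "sandboxed-vm"
    elif resource in AUTO_APPROVE:
        decision = "auto-approved"
    else:
        decision = "needs-human-approval"
    return {"resource": resource, "decision": decision,
            "expires_at": now + timedelta(hours=ttl_hours)}

print(evaluate_request("staging-db")["decision"])    # auto-approved
print(evaluate_request("customer-pii")["decision"])  # sandboxed-vm
```

The point of the sketch is the shape of the policy, not the tiers themselves: the common path stays frictionless, while sensitive data is confined rather than blocked outright.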
Although any new restrictions on agentic usage might irk end users by slowing them down, it’s important to meet users where they are. With Opal, that means access requests can be submitted and approved in Slack, and IaC strategies, including Terraform, can be used to automate access grants, revocations, and time-bound access.
Based on your experience scaling security companies, what signals tell you that an organization’s access controls are no longer aligned with how the business actually operates?
Organizations can tell their access controls are out of sync when access starts lagging behind reality. This can show up as excessive or outdated permissions, manual bottlenecks from legacy systems, or the adoption of AI systems without updated policies to track non-human identities. Another tell-tale sign is increased security incidents or near misses, where over-privileged accounts or poorly governed identities are the culprit.
Complex SailPoint deployments that lean on teams of consultants, along with tracking access in spreadsheets, weren’t efficient even in the era before agents, and this legacy approach will simply collapse under agentic speed and complexity. Opal delivers a single platform that serves security, unblocks IT, and gives auditors what they need. We find that RBAC is brittle and rarely a lasting state of affairs, so successful teams start with JIT and then advance to persona-based access (with flexibility and tracking over time) so that Terraform configs can be versioned, deployed, or rolled back if necessary.
From a leadership standpoint, how do you balance innovation speed with the discipline required to build trust in security products?
I’ve learned that great companies stay disciplined about the fundamentals even when they’re growing quickly: clear priorities, transparent communication, and a willingness to make hard tradeoffs go a long way. Innovation still matters, but it has to be grounded in rigor: strong defaults, thoughtful threat modeling, and accountability for outcomes. Making those decisions deliberately is what allows you to move quickly without compromising trust, and over time, that discipline actually becomes a competitive advantage.
What do you think security leaders misunderstand most about preparing for a future dominated by autonomous systems rather than human-driven access?
What security leaders most often underestimate is the complexity of managing different types of identities. Companies are so eager to get ahead and stay up-to-date with the latest technologies that security measures can often fall by the wayside. AI-driven workflows are often introduced without clear ownership, leaving organizations with blind spots where non-human identities quietly accumulate access, operate outside traditional controls, and create new opportunities for costly errors or threats. Preparing for this future requires treating identity as a dynamic, continuously governed layer.
As businesses grow, security teams should own agentic use policies and determine how, whether, and where agentic access is downscoped from the human calling the agent. Any processes that previously layered toil onto human teams will simply crumble under the scale and ephemeral nature of agents: automation is the only way to manage the request volume.
Looking ahead, what does “good security hygiene” look like in an environment where AI systems are constantly creating, modifying, and requesting access on their own?
Good security hygiene is about leveraging AI to scale security while maintaining enough monitoring and discipline to ensure nothing slips through the cracks. AI isn’t going anywhere, so whether a company uses a provider like Opal or manages internally, companies must accept that access is no longer static. When AI systems are constantly creating, modifying, and requesting access, security has to move from one-time approvals to continuous oversight. That means treating access as a continuous lifecycle versus a one-time checkbox.
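That lifecycle view can be sketched with a simple model in which every grant carries an expiry and a last-used timestamp, and a periodic sweep revokes anything expired or idle instead of waiting for an annual review. The `Grant` fields and the idle threshold below are assumptions for illustration, not a real schema.

```python
# Hypothetical sketch: treating access as a continuous lifecycle rather than
# a one-time checkbox. Fields and thresholds are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Grant:
    identity: str        # human, service account, or AI agent
    resource: str
    expires_at: datetime  # every grant is time-bound
    last_used: datetime   # usage feeds continuous review

def sweep(grants: list[Grant], idle_limit: timedelta) -> list[Grant]:
    """Keep only grants that are unexpired and recently used;
    everything else is revoked by the sweep rather than by a yearly audit."""
    now = datetime.now(timezone.utc)
    return [g for g in grants
            if g.expires_at > now and now - g.last_used < idle_limit]
```

Run continuously, a sweep like this turns least privilege from a point-in-time attestation into a steady state: unused access decays on its own instead of accumulating until the next review cycle.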
Companies will also need to actively use AI as part of their defense. Automation can help teams keep pace with the volume and speed of access changes, but it has to be paired with clear guardrails, visibility into delegation chains, and human accountability when something goes wrong. It’s important to surface explainability signals: Shapley values and feature coefficients for traditional ML models, and additional context or detailed justifications from LLMs. Without these nuances, it’s an uphill battle to maintain any chain of trust for access to sensitive business resources.
Thank you for the great interview; readers who wish to learn more should visit Opal Security.