Jacob Ideskog, CTO of Curity – Interview Series

Jacob Ideskog is an Identity Specialist and CTO at Curity. Most of his time is spent working with security solutions in the API and Web space. He has worked with both designing and implementing OAuth and OpenID Connect solutions for large enterprise deployments as well as small startups.
Curity is a modern identity and access management (IAM) platform built around the Curity Identity Server, a standards-based solution designed to secure authentication and authorization for applications, APIs, and digital services at scale. It supports protocols such as OAuth 2.0 and OpenID Connect to centralize login flows, enforce fine-grained access policies, and issue secure tokens for both human users and machine clients, including APIs and services. The platform is designed for flexibility and scalability, allowing organizations to deploy across cloud, hybrid, or on-prem environments, integrate with existing systems, and deliver secure, seamless user experiences without relying on custom-built security infrastructure.
You’ve spent much of your career building identity and API security systems, from co-founding Curity to leading it as CTO through the rise of cloud and now AI. How has that journey shaped your view that AI agents should be treated as first-class digital identities rather than just another piece of software?
Across every field of technology I’ve worked in, one issue keeps resurfacing: whether it’s cloud computing or now AI, if software is acting on behalf of a person or another system, you have an identity problem.
With the mass adoption of agentic AI, this issue is compounded. Agents’ behavior is no longer tightly scripted, and they operate with a level of autonomy that enterprises have never seen before. AI agents make decisions, call APIs and chain actions across systems – often without direct human oversight. This behavior creates identity and access challenges that are fundamentally different from those of traditional software.
Treating AI agents as first-class digital identities is the only way to address this properly. If organizations treat them as just another process or service account, they lose visibility and control very quickly – and that’s a recipe for a security crisis.
Many enterprises are excited about agentic AI but remain stuck in experimentation. From what you’re seeing in real deployments, what are the most common identity and governance gaps preventing organizations from scaling agents safely?
Most experimentation happens in isolated sandboxes that ignore what happens at scale. During early pilots, teams often give agents broad API keys, shared credentials or blanket cloud permissions just to get things off the ground.
That approach falls apart the moment agents are deployed beyond pilots, because security teams can’t see what data an agent has accessed, what actions it has taken, or whether it has exceeded – or could exceed – its intended scope, whether accidentally or maliciously. These blind spots make it impossible to govern agents safely, which is why many organizations struggle to move past experimentation.
You’ve argued that strict guardrails are essential for agentic AI. What does “good” identity design look like for AI agents in practice, and where do companies typically get it wrong?
Good identity design starts with the principle of least privilege and permissions tied to explicit intent. Each AI agent should have its own identity, narrowly scoped permissions and clearly defined trust relationships (explicit rules for which systems it is allowed to interact with). Fundamentally, access should be purpose-bound, time-restricted and easy to revoke.
Where companies get this wrong is by reusing existing service accounts or assuming that internal agents are safe by default. That assumption doesn’t hold up against real-world threats. Malicious actors actively look for exactly these weak spots, and AI agents dramatically increase the potential blast radius when identity design is sloppy.
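To make the “purpose-bound, time-restricted and easy to revoke” idea concrete, here is a minimal sketch assuming a standards-based OAuth 2.0 setup: the agent has its own client identity and requests a token limited to a single narrow scope, with a short lifetime enforced by the authorization server. The token endpoint, client ID and scope name are hypothetical placeholders, not a specific deployment.

```python
import requests

# Hypothetical example: an agent obtains a narrowly scoped, short-lived token
# via the standard OAuth 2.0 client credentials grant. Endpoint, client ID and
# scope are illustrative placeholders.
TOKEN_ENDPOINT = "https://idp.example.com/oauth/v2/token"

response = requests.post(
    TOKEN_ENDPOINT,
    data={
        "grant_type": "client_credentials",
        "scope": "invoices:read",  # least privilege: one narrow, purpose-bound scope
    },
    auth=("agent-invoice-reader", "agent-secret"),  # the agent's own identity
    timeout=10,
)
response.raise_for_status()
token = response.json()

# The server issues a short-lived access token; revoking the agent's client
# at the server cuts off access without rotating any shared credentials.
print("expires in", token.get("expires_in"), "seconds")
```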
Curity has long worked with standards like OAuth and OpenID Connect. How critical are open identity standards for making agentic AI interoperable and secure across complex enterprise environments?
Open standards are absolutely critical. Enterprises already run complex identity fabrics spanning cloud platforms, SaaS services and internal APIs. Agentic AI only adds more complexity.
Without standards, every agent becomes its own integration and a permanent security exception. With standards like OAuth and OpenID Connect, agents can be authenticated, authorized and audited just like any other workload. This is the only approach that can facilitate secure scaling across real enterprise environments.
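A rough sketch of what “authorized and audited like any other workload” can look like on the API side, assuming JWT access tokens and the PyJWT library; the issuer, audience and JWKS URL are illustrative placeholders rather than a specific product configuration.

```python
import jwt  # PyJWT
from jwt import PyJWKClient

# Hypothetical values: an API validates an agent's access token the same way it
# validates any other workload's token.
ISSUER = "https://idp.example.com/oauth/v2/anonymous"
AUDIENCE = "https://api.example.com"
jwks_client = PyJWKClient("https://idp.example.com/oauth/v2/jwks")

def authorize_request(access_token: str, required_scope: str) -> dict:
    # Verify signature, issuer and audience against the identity provider's keys.
    signing_key = jwks_client.get_signing_key_from_jwt(access_token)
    claims = jwt.decode(
        access_token,
        signing_key.key,
        algorithms=["RS256"],
        issuer=ISSUER,
        audience=AUDIENCE,
    )
    # Scope check: the agent's token must carry the permission for this call.
    if required_scope not in claims.get("scope", "").split():
        raise PermissionError(f"token lacks scope '{required_scope}'")
    return claims  # 'sub' identifies the agent for downstream audit logging
```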
Non-human identities are becoming more common, from service accounts to machine identities. What makes AI agents fundamentally different from previous non-human identities from a security perspective?
The key difference between modern AI agents and older non-human identities (NHIs) is autonomy. A traditional service account does exactly what its code tells it to do, bound strictly to its task. An AI agent interprets instructions, adapts its behavior and takes actions that were never explicitly scripted – all increasing the potential danger if there aren’t appropriate guardrails.
A small identity or access error can quickly turn into a catastrophe, because an agent can act at speed and across multiple systems. From a security perspective, this presents a major risk.
How important are audit trails and identity-based logging for governing agentic AI, especially in regulated industries?
Audit trails shouldn’t be “nice to have”. They need to be built in from the start. In regulated environments, organizations are expected to answer simple but critical questions: what did this agent access, when did it happen, and who authorized it?
Identity-based logging is the only reliable way to get that level of accountability. It also plays a key role in incident response. Without clear identity context, it’s almost impossible to know whether a problem came from a misbehaving agent, a compromised identity, or simply a bad prompt.
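As one possible shape for such logging, the sketch below writes a structured audit record keyed to the agent’s identity claims. The field names, and the use of the “act” and “azp” claims for delegation context, are assumptions for illustration rather than a standard schema.

```python
import datetime
import json
import logging

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("audit")

def record_agent_action(claims: dict, action: str, resource: str) -> None:
    # Illustrative audit record: ties every agent action to an identity so that
    # "what was accessed, when, and who authorized it" can be answered later.
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "agent_id": claims["sub"],           # which agent acted
        "acting_for": claims.get("act"),     # delegation chain, if present
        "authorized_by": claims.get("azp"),  # which client obtained the token
        "scopes": claims.get("scope"),
        "action": action,                    # e.g. "read"
        "resource": resource,                # e.g. "customers/42"
    }
    audit_log.info(json.dumps(entry))
```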
What real-world risks do you see emerging when organizations deploy over-privileged or poorly monitored AI agents in production?
One common risk is silent data aggregation. An over-privileged agent can pull sensitive information from multiple systems (customer records, internal documents, logs) and then expose that data through prompts, summaries or external integrations.
Another risk is agents with administrative access making major changes at machine speed, causing far more damage than a human ever could in a short period of time. This can include modifying cloud resources, disabling security controls or triggering automated workflows without oversight.
These incidents may be malicious, but they don’t have to be. An over-privileged or poorly monitored agent could simply be operating on stale or incorrect assumptions, amplifying mistakes across multiple systems before anyone notices.
From an attacker’s perspective, meanwhile, a compromised agent identity is extremely valuable. It enables lateral movement across APIs and services, often with a level of access no human user would ever be granted. Without strong identity controls and monitoring, organizations often only discover these failures after real damage has been done.
For companies moving from pilots to real agentic deployments, what identity and access decisions should be made early to avoid costly redesigns later?
Organizations should decide early on how agents are issued identities, how permissions are approved and how access is reviewed over time, defining identity boundaries upfront.
Bringing in identity controls retroactively is almost always problematic. Agents are often embedded deep into workflows using shared credentials or broad roles, so tightening access after the fact breaks assumptions the system relies on. This ultimately causes workflows to fail and undermines trust in the technology. It’s far cheaper, not to mention far safer, to design proper identities, scopes and access boundaries from the start.
Where does identity integration most often become a bottleneck when rolling out agentic AI, and what best practices help reduce friction?
Identity management can become a bottleneck, but only when it’s treated as an afterthought. Teams focus on building impressive agent capabilities first, only to realize later that those agents need to be integrated with IAM systems, API gateways and logging platforms to be truly secure.
The best approach is to start with a clear understanding and proper implementation of identity platforms and then to design agents to fit within them. Organizations should reuse existing standards and infrastructure rather than bypassing them; cutting this corner will inevitably cause problems down the line. When identity is built in from the beginning, it accelerates deployment instead of slowing it down.
For security and engineering leaders who want to embrace agentic AI but are concerned about governance and risk, what advice would you give as they plan their roadmap?
Slow down just enough to get the foundations right. AI agents must be treated as identities, so apply the same governance you expect for humans and insist on visibility from the outset. If an organization does that, then scaling agentic AI becomes an exercise in security, not a blind and risky leap of faith.
Thank you for the great interview. Readers who wish to learn more should visit Curity.