Sean Blanchfield, Co-Founder and CEO of Jentic – Interview Series

Sean Blanchfield, Co-Founder and CEO of Jentic, is a serial technology entrepreneur with decades of experience building large-scale software and infrastructure companies. Based in Dublin, he currently leads Jentic while also serving on Ireland’s AI Advisory Council, advising the government on artificial intelligence policy. Earlier in his career he co-founded DemonWare, a high-scale online services platform for major video game publishers that was later acquired by Activision Blizzard, and PageFair, a venture-backed startup focused on ad-blocking analytics that was acquired by Blockthrough. He has also founded or led multiple startups and continues to support Ireland’s startup ecosystem through initiatives such as Techpreneurs.
Jentic is developing a universal integration layer designed to help AI agents securely interact with enterprise systems and APIs. The platform enables organizations to connect AI models with internal tools, external services, and operational workflows while maintaining governance, authentication, and oversight. By transforming fragmented APIs into structured interfaces that AI agents can reliably use, Jentic aims to help enterprises deploy AI-driven automation at scale across complex software environments.
You’ve founded and led multiple technology companies, from DemonWare (acquired by Activision Blizzard) to PageFair and now Jentic, and you also serve on Ireland’s AI Advisory Council. What drew you back into building at the infrastructure layer again with Jentic, and what gap did you see in the emerging AI agent ecosystem that others were missing?
By the third time you notice a pattern, you have to take it seriously. At DemonWare, everyone talked about online multiplayer – but the hard problem was the network infrastructure underneath it. The same thing is happening with AI agents. The models are remarkable. The bottleneck is the integration layer – always has been. AI agents run on APIs, and those APIs were built for humans: documented for humans, secured for humans, and structured for humans. Point an autonomous agent at that infrastructure, and it falls apart fast. Enterprise AI pilots don’t fail because the model misunderstood the task; they fail because the agent couldn’t reliably connect to the systems it needed. Generative AI offers a new way to solve this – by treating integration as a knowledge problem, not a coding problem. That insight drew me in.
When you started Jentic in 2024, was agent security the primary thesis from day one, or did the focus sharpen as you observed how organizations were actually deploying autonomous agents in production?
The very first thread I pulled was credentials. I imagined agents proliferating, each needing credentials for dozens of systems, all those secrets flowing into LLM context windows, getting exfiltrated – a hot mess. The answer is the same as it would have been twenty years ago: centralise authentication and authorisation. But pulling that thread led straight to the next problem: if you centralise using traditional integration tooling, you’re back in the land of static connectors, and agents aren’t static. What cemented the vision was realising that capability discovery should be tightly coupled to access control – that an agent should only be offered a capability if it’s actually authorised to use it, and that the system providing discovery can also be the single point of enforcement and observability.
The recent exposure of large numbers of internet-facing agent instances has highlighted how orchestration and credentials often share the same trust boundary. From your perspective, what is the core architectural flaw in that model?
The flaw is simple: the agent – a system running prompts from an LLM – is also the system holding the credentials and making the API calls. Compromise the agent and you get everything it could ever do. It’s the same mistake we made in the early web era – application servers with superuser database access because it was convenient. Jentic sits as a layer between the agent and the APIs it calls. The agent never holds credentials. It issues requests through our managed execution layer, which injects credentials server-side, enforces policy, and logs every call. And when something goes wrong, there’s a single kill switch – one action stops that agent’s access across every connected system simultaneously.
You’ve spoken about separating orchestration from execution to contain blast radius. Can you explain in practical terms how that separation changes the risk profile when an instance is compromised?
In the flat model, the LLM reasons about what to do and directly calls APIs using the credentials it holds. Compromise the reasoning layer, and you control the execution layer. With separation, the LLM emits an intent – “call the Stripe billing API with these parameters” – a managed execution layer validates that request against policy, injects the credential server-side, and makes the call. The LLM never touches the credential. In practice: lateral movement gets much harder, the blast radius is bounded by what the execution layer permits for that specific agent identity, and you get a kill switch. One toggle and the agent’s access stops across every connected system. The agent can still be manipulated – but manipulation no longer automatically means full credential compromise.
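The separation described above can be sketched in a few lines of Python. This is an illustrative toy, not Jentic's actual API: the policy table, vault, and function names are all hypothetical, and the real HTTP call is stubbed out.

```python
import time

# Hypothetical policy, vault, and kill-switch state (illustrative only).
POLICY = {"billing-agent": {"allowed_apis": {"stripe.billing"}}}
VAULT = {"stripe.billing": "sk_live_redacted"}  # never enters the LLM context
REVOKED = set()  # kill switch: one entry stops an agent everywhere
AUDIT_LOG = []

def call_api(api, params, credential):
    # Placeholder for the real outbound HTTP call made by the execution layer.
    return {"ok": True, "api": api}

def execute_intent(agent_id, intent):
    """Validate an agent's intent against policy, inject the credential
    server-side, log the call, and execute it. The LLM only ever emits
    the `intent` dict; it never sees the credential."""
    if agent_id in REVOKED:
        raise PermissionError(f"{agent_id} is revoked")
    policy = POLICY.get(agent_id)
    if policy is None or intent["api"] not in policy["allowed_apis"]:
        raise PermissionError(f"{agent_id} may not call {intent['api']}")
    credential = VAULT[intent["api"]]  # injected here, server-side
    AUDIT_LOG.append({"ts": time.time(), "agent": agent_id, "api": intent["api"]})
    return call_api(intent["api"], intent["params"], credential)
```

A compromised reasoning layer can still emit intents, but every intent passes through the policy check, and adding the agent to `REVOKED` cuts off all of its access in one step.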
In real-world enterprise deployments, what does centralized credential management and instant revocation actually look like, and how does it differ from how most teams are currently handling API keys and tokens for agents?
Today, most teams have a developer provisioning API keys, storing them in a .env file, and loading them at agent startup – often directly into the LLM’s context window. Nobody has a complete picture of which agents hold which credentials. When someone leaves, the keys they provisioned don’t get rotated. When an agent behaves oddly, there’s no audit trail to reconstruct what happened. With Jentic, the developer never handles raw credentials. They declare what access an agent needs, the platform provisions scoped access, and the agent calls through our execution layer without ever seeing the underlying key. That means you get instant per-agent revocation, the ability to pause access while you investigate, and a timestamped audit trail of every API call. The difference between that and “API key in a .env file” is substantial.
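The contrast with the `.env` pattern can be made concrete with a minimal broker sketch. The class and method names here are invented for illustration; the point is the shape of the interaction: the developer declares scopes and receives an opaque handle, never a raw key, and revocation is a single per-agent operation.

```python
import secrets
import time

class CredentialBroker:
    """Toy model of scoped, revocable agent access (not Jentic's real API)."""

    def __init__(self):
        self._grants = {}  # opaque handle -> grant record
        self.audit = []

    def provision(self, agent_id, scopes):
        # The developer declares needed access and gets back a handle,
        # never the underlying API key.
        handle = secrets.token_urlsafe(16)
        self._grants[handle] = {"agent": agent_id, "scopes": set(scopes), "active": True}
        return handle

    def call(self, handle, api, params):
        grant = self._grants.get(handle)
        if grant is None or not grant["active"]:
            raise PermissionError("access revoked or unknown")
        if api not in grant["scopes"]:
            raise PermissionError(f"{api} not in granted scopes")
        # The real key would be fetched and used server-side at this point.
        self.audit.append({"ts": time.time(), "agent": grant["agent"], "api": api})
        return {"ok": True}

    def revoke_agent(self, agent_id):
        # Instant per-agent revocation across every grant it holds.
        for grant in self._grants.values():
            if grant["agent"] == agent_id:
                grant["active"] = False
```

Every call appends a timestamped audit record, so reconstructing what an agent did is a query over `audit` rather than forensic guesswork.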
Many teams are experimenting with agent frameworks across sales, engineering, and data science. What are the most common security missteps you’re seeing as organizations move from experimentation to production?
The same patterns recur: overprivileged agents still running on the admin credentials they were prototyped with; credentials passed in prompts or context windows where they end up in logs, telemetry, and potentially training data; shared credentials across multiple agent instances so you can’t isolate a single bad actor; no kill switch to stop an agent without taking down the broader system; no audit trail worth the name; and prompt injection not taken seriously – even though any agent that reads emails, processes documents, or browses the web will encounter adversarially crafted content. The common thread is that these teams built for the happy path and are now discovering that production is mostly unhappy paths.
Jentic positions itself as a managed execution layer sitting between agent frameworks and external systems. How does that intermediary layer enforce governance without slowing down developers or reducing agent flexibility?
Instead of wiring an agent to fifty different APIs – each with its own authentication scheme, rate limits, and quirks – the developer connects to one endpoint. That endpoint exposes tools to search our entire catalog of API capabilities, load details, and execute any call. This maximises flexibility through a single unified interface to unlimited APIs, while enabling governance – which agents access which APIs, under what conditions, with what limits – all managed in the platform, not in client code. The execution layer is a pass-through; agents can still compose multi-step workflows, chain calls, and handle errors dynamically. Governance without friction is hard. The shortcut is to push the burden onto developers. Infrastructure should do the opposite – absorb that complexity so developers don’t have to.
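The search / load / execute pattern described above can be sketched as three small functions over one catalog. The capability names and catalog structure below are hypothetical placeholders, not Jentic's actual tool schema.

```python
# Illustrative catalog of API capabilities (names are invented for this sketch).
CATALOG = {
    "stripe.create_invoice": {
        "summary": "Create a billing invoice",
        "params": ["customer", "amount"],
    },
    "hubspot.create_contact": {
        "summary": "Create a CRM contact",
        "params": ["email"],
    },
}

def search(query):
    """Discovery: return capabilities matching the agent's query.
    In the governed version, only capabilities the agent is authorised
    to use would be returned at all."""
    return [name for name, meta in CATALOG.items()
            if query.lower() in meta["summary"].lower()]

def load(name):
    """Return full parameter details for one capability on demand,
    keeping the agent's context window small until it commits to a call."""
    return CATALOG[name]

def execute(name, **params):
    """Run the call through the managed layer, which is where policy,
    credential injection, and logging would live."""
    missing = [p for p in CATALOG[name]["params"] if p not in params]
    if missing:
        raise ValueError(f"missing parameters: {missing}")
    return {"ok": True, "capability": name}
```

Because the agent only ever sees these three tools, adding a new API to the catalog requires no change to client code, and policy decisions stay on the platform side of the boundary.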
With infostealer malware now actively targeting agent configuration files and stored credentials, do you see attackers shifting their focus toward AI infrastructure as a new high-value surface area?
Absolutely – and the logic is obvious. An agent configuration file is effectively a multi-service superkey: credentials for email systems, CRMs, billing platforms, internal APIs, and GitHub accounts. A single successful infostealer run yields months of access across an entire company’s external systems. That’s a dramatically higher return than targeting any one service in isolation. The other dimension is that agents running continuously in production are persistent, credentialed presences – not a user logging in and out. A compromised agent can serve as a long-term foothold, operating below detection thresholds. The uncomfortable reality is that the attack surface is evolving faster than the defensive tooling. Jentic can significantly reduce the credential attack surface, but we can’t prevent an agent from misusing the scopes it’s been granted. That harder problem needs to be solved at the model level, with guardrails and prompt injection detection.
Beyond any single framework, what broader security principles should organizations adopt if they want to deploy agentic AI safely at scale?
Most well-managed organisations cannot deploy non-deterministic systems into their most valuable business processes. A bank or insurer can’t point an autonomous agent at their billing system and say “go figure it out.” So how do you innovate without your risk posture becoming a handbrake? The answer is sandboxing. Create a digital twin of your API estate with the same structure and workflows, but without production credentials or consequences. Deploy agents there, let them explore, watch what happens. The successful paths get captured as structured, deterministic workflow automations using Arazzo, the open workflow specification developed within the OpenAPI Initiative – auditable, repeatable, and reviewable by any compliance team. This means you can move at AI speed in the sandbox and at enterprise speed in production, and those two modes coexist. The other principles still apply – least privilege, audit trails, kill switches, separation of orchestration from execution. But the sandbox is the structural answer to the question enterprise teams actually get stuck on: how do we experiment with non-deterministic AI without betting our compliance posture on it? You don’t deploy the non-determinism. You extract value from it under controlled conditions, and deploy only the deterministic outputs.
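The sandbox-to-production handoff described above can be sketched as trace capture and deterministic replay. This is a conceptual toy: the step names and trace format are invented, and a real capture would be expressed as an Arazzo workflow document rather than Python dictionaries.

```python
# A recorded trace of an agent's successful exploration in the sandbox.
# The APIs and parameters here are hypothetical examples.
sandbox_trace = [
    {"step": "lookup_customer", "api": "crm.get_customer",
     "params": {"email": "a@example.com"}},
    {"step": "create_invoice", "api": "billing.create_invoice",
     "params": {"amount": 120}},
]

def extract_workflow(trace):
    """Freeze the successful exploration into an ordered step list that a
    compliance team can review before it ever touches production."""
    return [{"id": t["step"], "operation": t["api"], "inputs": t["params"]}
            for t in trace]

def run_workflow(workflow, call):
    """Replay the frozen workflow deterministically; `call` stands in for
    the governed production executor. No LLM is involved at this stage."""
    return [call(step["operation"], step["inputs"]) for step in workflow]

workflow = extract_workflow(sandbox_trace)
results = run_workflow(workflow, lambda op, inputs: {"ok": True, "op": op})
```

The non-determinism stays in the sandbox where the trace was generated; production only ever runs the extracted, reviewable step list.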
Thank you for the great interview; readers who wish to learn more should visit Jentic.