Hanah-Marie Darley, Chief AI Officer at Geordie AI – Interview Series

Hanah-Marie Darley, Chief AI Officer at Geordie AI, is a seasoned AI and security leader who co-founded the company to help enterprise IT, risk, and security teams adopt agentic AI with clarity and control. With nearly a decade supporting intelligence operations within the U.S. Federal Government and subsequent senior leadership roles at Darktrace, she combines deep expertise in threat intelligence, geopolitical analysis, and applied psychology with hands-on experience in AI strategy and product development. Her work centers on aligning autonomous systems with human intent, enabling enterprises to operationalize AI agents in a way that balances innovation, oversight, and real-world constraints.
Geordie AI is a London-based enterprise software company focused on securing and governing AI agents as they become embedded across corporate environments. Its platform provides visibility into agent activity, continuous monitoring of risk posture, and structured governance controls that allow organizations to deploy and scale AI systems confidently. By delivering observability, compliance support, and operational oversight tailored to the agentic era, Geordie aims to give enterprises the transparency and control required to integrate increasingly autonomous technologies without compromising security or accountability.
You’ve spent nearly a decade in U.S. federal threat intelligence and geopolitical analysis, later led threat research and AI strategy roles at Darktrace, and now lead AI and product strategy at Geordie AI. What experiences across government and enterprise security most influenced your decision to build Geordie, and what core problem were you determined to solve?
Across both government and enterprise security, I kept encountering the same structural tension. Organizations were investing heavily in AI, yet confidence in how those systems behaved lagged behind expectations for return on investment. The challenge was not capability. It was trust.
As AI moved from experimental tooling into operational workflows, that gap became more visible. Agents introduce autonomy, decision-making, and persistence across systems in ways traditional software never did. Businesses needed a way to understand how agents function, where they operate, and how risk emerges through their behavior. Geordie was built to close that clarity gap so organizations can adopt autonomy with confidence rather than hesitation.
In your view, what is the most misunderstood risk that autonomous or agentic AI systems present to enterprises, and how does the “chain effect” of contextual decision-making differ from traditional cybersecurity exposure models?
Silent failures remain the least understood. An agent can operate within approved permissions and legitimate access boundaries while still producing outcomes that diverge from intent.
This reflects the nature of agentic systems. They interpret context and make decisions in real time. Unlike deterministic software, behavior is shaped dynamically across sequences of actions. That shifts the security model. Exposure no longer hinges solely on access violations. It emerges through how decisions, tools, and context interact over time.
Geordie’s approach emphasizes behavioral observability, contextual risk assessment, and dynamic control over agent activities. How should organizations balance the need for that kind of real-time visibility with concerns around operational complexity or system performance?
Enterprises should not have to trade performance for oversight. Our architecture deliberately avoids inline proxies and gateways, allowing organizations to build and operate agents where it makes operational sense.
Visibility and control must scale with autonomy without introducing friction or latency. If governance mechanisms impede workflows, adoption stalls. Effective security enables ecosystems to expand safely rather than constraining innovation.
From your work with enterprise customers and risk leaders, which types of workflows or use cases are most susceptible to agentic drift into higher-risk activities, and how can early indicators be detected before they escalate into material incidents?
Risk tends to increase with complexity. The more choices an agent makes independently, the greater the potential for behavioral divergence.
Drift frequently appears through tool chaining, context reuse, and emergent workflows. Early indicators include unexpected tool invocation, unusual sequencing patterns, and shifts in data movement. Detecting these signals requires behavioral analysis rather than isolated event monitoring.
When AI agents reuse context and tools across tasks, what are the most subtle or underestimated failure modes that security teams should be paying closer attention to?
Context reuse remains underestimated. Exposure often arises not from excessive permissions but from how information persists and propagates across tasks.
Agents can legitimately access data within one context and inadvertently carry that state into another. Combined with tool chaining, this can produce unintended disclosure or transformation of sensitive information.
Many organizations still rely on traditional enterprise security tools such as Endpoint Detection and Response and Extended Detection and Response platforms. Where do these approaches fall short when managing autonomous AI systems that take multi-step actions?
EDR and XDR platforms remain essential, yet they were designed around human-centric threat models. Agents operate across decision layers that extend beyond endpoint and identity telemetry.
Understanding agent behavior requires insight into reasoning patterns, tool selection, and contextual decision flows. Without that layer, large portions of agent activity remain opaque.
For IT, risk, and security leaders who want to enable innovation but avoid runaway autonomy, what does “governed autonomy” actually look like in practice?
Governed autonomy begins with visibility. Organizations must understand where agents operate, how much decision authority they hold, and which risks their capabilities introduce.
Governance is most effective when embedded early, allowing experimentation within defined boundaries. This supports innovation while maintaining confidence in outcomes.
Explainability is often discussed at the model output level. How should enterprises think about explainability and auditability when the real risk may lie in the sequence of actions an agent takes over time?
Model explainability is only part of the equation. Enterprise risk increasingly resides in behavioral sequences rather than isolated outputs.
Auditability requires understanding how guardrails shaped interpretation, which tools were invoked, and how context influenced decisions. Behavioral observability becomes the foundation for accountability.
For organizations that recognize the need for stronger oversight of autonomous AI systems, what concrete steps should they take today to reduce agentic risk without slowing innovation, and how does Geordie AI specifically help enterprises operationalize that balance between control and capability?
Organizations benefit from starting early rather than waiting for perfect frameworks. Initial focus should center on inventory, behavioral visibility, and understanding how agents interact with systems.
Governance models that introduce latency often impede scalability. Oversight must align with operational speed. Geordie provides visibility into agent configuration, behavior, and risk dynamics while enabling corrective controls designed for autonomous systems.
Looking ahead, what will separate organizations that successfully scale agentic AI across workflows from those that experience setbacks due to unmanaged risk, and how should leadership prepare now?
Early differentiators will be clarity and measurement. Teams that understand agent capability, impact, and behavioral patterns will scale more confidently.
Longer term, competitive advantage will favor organizations that develop specialized, context-aware agent ecosystems. Precision, rather than generalization, becomes the driver of performance and resilience.
Thank you for the great interview. Readers who wish to learn more should visit Geordie AI.