Sohrab Hosseini, Co-Founder of orq.ai – Interview Series

Sohrab Hosseini, Co-Founder of orq.ai, is a technology leader and entrepreneur based in the Amsterdam area with deep experience across SaaS, large-scale systems, and applied AI. Since founding orq.ai in 2022, he has focused on building practical infrastructure that helps teams move large language models from experimentation into reliable production use. His background includes senior leadership roles as COO and CTO at Neocles, CTO of Future Technology at Transdev, where he worked on autonomous routing and fleet management, and COO at TradeYourTrip. In parallel, he is active as an advisor and angel investor, supporting early-stage AI companies with product direction, technical judgment, and execution strategy.
orq.ai is a generative AI collaboration and LLMOps platform built to help organizations design, operate, and scale AI-powered products and agents in real-world environments. The platform combines prompt management, experimentation, feedback collection, and real-time visibility into performance and costs in a single workspace, while remaining compatible with all major large language model providers. By enabling close collaboration between technical and non-technical teams, orq.ai helps companies shorten release cycles, improve governance and transparency, and reduce the complexity and cost of running AI systems in production.
You’ve held senior technical and operational roles across autonomous systems, fleet-management tech, and SaaS platforms before founding Orq.ai — how did that career path shape your decision to build an enterprise-grade control layer for AI agents in 2022?
Our backgrounds have always been about leading engineering teams and focusing on enablement platforms: things like cloud, DevOps, and data enablement, especially during our time as technology consultants. When the generative AI boom started, my co-founder and I asked ourselves: what kind of enablement will enterprises need not just to build AI, but to govern and control it properly?
We saw that the real need was for an enterprise-grade control layer for AI agents. This led us to build Orq.ai in the first place.
When you first launched Orq.ai, what did you see in the market that convinced you the real bottleneck wasn’t model quality but the inability to take agentic systems from demos into reliable production?
We’ve always believed that when you’re building innovative software, you have to build for the future. From the start, we assumed that large language models would just keep getting better and smarter over time. So, the real challenge we saw wasn’t the model quality itself, but all the control, governance, and lifecycle management issues that come up when you try to move from a demo to a real production environment.
In other words, even as models improve, the true value for our clients (and for us) is making sure these systems actually run reliably in production. And that’s really what we set out to solve.
Most teams can build impressive prototypes but struggle with runtime orchestration, governance, and monitoring. In your view, what’s the single biggest breakage point when engineering teams try to scale from a proof-of-concept environment into a live production agent?
The biggest breakage point is that teams often think it’s just a straight, linear path from starting to build an agent to having it finished. In reality, it’s a very iterative process.
You’re constantly adjusting your assumptions, testing them, moving things into production, and then monitoring what happens in the real world. You find edge cases, and then you start that cycle all over again.
The challenge is that it’s not just a one-and-done effort; it’s a continuous loop of refinement. And to build on that, the problem isn’t only that the process is iterative; it’s that there often isn’t enough tooling or scaffolding in place to support that process smoothly.
You need a way for domain experts, product managers, and engineers to collaborate without creating silos or expensive handovers that waste a lot of time. So that’s another big piece of the puzzle: making sure that all these stakeholders can iterate together efficiently. And that’s something we’ve really tried to solve as well.
Orq.ai positions itself as a unified control layer that spans experimentation, evaluation, observability, and runtime. Why did you believe an end-to-end architecture was essential, rather than offering isolated tools like many point solutions?
When you start out, it’s natural to pick a single tool that solves your biggest pain point at that moment; often that’s observability. But as your team evolves, you hit the next bottleneck and add another tool, for example an AI gateway. Before you know it, you’ve got five to seven different tools in your landscape. Data gets fragmented, people lose visibility, and you waste resources just maintaining all these integrations. You lose that unified view across your lifecycle.
We believed that as agent-driven enterprises emerge, you really need that end-to-end architecture. You need a unified view of what all your agents are doing across the organization, not just fragmented point solutions. That’s why we didn’t see any other way than to encompass those big parts of the workflow in a unified platform.
With the new Agent Studio and redesigned runtime, what major pain points were you trying to solve based on feedback from early customers across Europe and the US?
What we saw was that teams were using all sorts of open-source libraries to build their agents, even though the actual architecture of an agent can be quite clean and simple. They ended up with bloated libraries, a lot of overhead, and a big learning curve just to get even simple agents out there. With Orq, we wanted to offload that burden.
Instead of worrying about the architecture, the compute, the autoscaling, all that infrastructure, teams can just focus on configuring their agents and giving them the right tools and APIs. We handle the heavy lifting so they can concentrate on building their actual use cases. And on top of that, because we support the entire lifecycle, we’ve built specialized workbenches that let you really test your agents at scale.
That means you can find edge cases faster and harden your agents more effectively. It’s all about giving teams the tools not just to build agents easily, but to refine and toughen them up in real-world scenarios, without all the extra hassle.
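As a rough illustration of what that configuration-first approach can look like, here is a hypothetical agent definition in Python. The field names and values are invented for this example and are not orq.ai’s actual schema; the point is simply that the team describes the agent and its tools, and the platform takes care of the rest.

```python
# Hypothetical agent definition: the team declares model, instructions, tools,
# and limits, while the platform owns compute, autoscaling, and orchestration.
# These field names are invented for illustration, not orq.ai's actual schema.
agent_config = {
    "name": "order-support-agent",
    "model": "gpt-4o-mini",
    "instructions": "Answer order questions using the tools below; escalate refunds over 500 EUR.",
    "tools": [
        {"type": "http_api", "name": "lookup_order", "url": "https://internal.example.com/orders/{id}"},
        {"type": "http_api", "name": "create_ticket", "url": "https://internal.example.com/tickets"},
    ],
    "limits": {"max_steps": 8, "timeout_seconds": 30},
}
```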
As GDPR and the EU AI Act tighten requirements, how are these regulations influencing the ways enterprises design, monitor, and deploy agents — and how is Orq.ai adapting?
It’s not so much that these requirements are suddenly tightening; they’re just part of the law, and our clients have to adhere to them. What we’re doing is making sure that throughout the entire lifecycle, we give teams the right tools, evaluators, and guardrails so they can build compliance in from day one.
We make sure data residency, data privacy, all of that is baked in from the start. And with the geopolitical stresses and the push for tech and AI sovereignty in Europe, we’ve seen a big demand for that. Since we can run fully on-premises and help enterprises reduce dependency, we’re in a good position to help them stay in control of their own destiny.
Enterprises are increasingly asking for sovereignty-ready architectures and hybrid/on-prem deployments. What does this shift tell you about where enterprise AI infrastructure is heading?
Every enterprise, and even every individual use case, involves trade-offs. It’s a question of how ready-made versus how secure and on-prem something needs to be. We support every flavor along that spectrum. But what we’re seeing is a strong focus on sovereignty and data residency at the model layer.
Clients want clarity on where their data lives and the ability to reduce dependency on big cloud providers. Thanks to our AI gateway, which runs across all the major cloud platforms and on-prem, teams can easily make those trade-offs on a use-case basis. They get the flexibility to stay in control and move seamlessly between environments.
We’re seeing a huge surge in demand from larger enterprises and public-sector institutions.
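As a small, purely hypothetical sketch of how those per-use-case trade-offs might be expressed in code, the example below routes each use case to an on-prem or regional cloud endpoint based on a declared policy. The models, regions, and function names are assumptions for the sake of illustration and are not part of orq.ai’s gateway API.

```python
# Hypothetical per-use-case routing: each use case declares where its data may
# go, and the router resolves a deployment target from that policy.
from dataclasses import dataclass

@dataclass(frozen=True)
class RoutingPolicy:
    model: str              # which model family to use
    residency: str          # required data region, e.g. "eu" or "us"
    allow_public_cloud: bool

POLICIES = {
    "internal-hr-assistant": RoutingPolicy("llama-3-70b", "eu", allow_public_cloud=False),
    "marketing-copy-drafts": RoutingPolicy("gpt-4o", "eu", allow_public_cloud=True),
}

def resolve_endpoint(use_case: str) -> str:
    """Pick a deployment target based on the use case's declared policy."""
    policy = POLICIES[use_case]
    if not policy.allow_public_cloud:
        return f"on-prem/{policy.model}"                    # stays inside the company's own infra
    return f"cloud-{policy.residency}/{policy.model}"       # managed endpoint in the required region

print(resolve_endpoint("internal-hr-assistant"))   # -> on-prem/llama-3-70b
print(resolve_endpoint("marketing-copy-drafts"))   # -> cloud-eu/gpt-4o
```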
How do you see multi-agent workflows, safety guardrails, and more advanced reasoning systems evolving as enterprises move from experimentation to true agent industrialization in 2026?
As the use of agents really industrializes, we’re seeing new types of problems emerge, especially with multi-agent setups. You might have dozens or even hundreds of agents running around your organization at any time, just like employees.
The question is: how do you govern them all when you have this multi-dimensional set of issues, such as costs, data quality, data residency, correctness, hallucination metrics, and so on? You need a new governance layer to handle that, and you need safety guardrails that can be deployed top-down.
You also need top-down visibility and new aggregation layers so that your CFO, COO, and CISO can see what’s going on and intervene with actionable insights. We really think that in 2026, this whole “agent department” concept and the tech to support it will become a much hotter topic.
Agent drift, quality regression, and unclear data flows are recurring issues in production AI. How does Orq.ai’s control layer tackle these long-standing gaps in versioning, evaluation, and ongoing monitoring?
Every agent really needs its own harness of evaluations. These evals basically define what’s right and wrong for that particular scenario. By spending time upfront setting these evaluation sets up properly, teams can do better offline experimentation to see how things behave before going live.
And then by monitoring these same evals online, you can spot when models drift or when agent behavior starts to change over time. That way, you have a consistent set of quality metrics during offline tests, online monitoring, and guardrailing.
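As a concrete sketch of that pattern, the snippet below shows one way a shared evaluation set could score an agent both before release and during ongoing monitoring, so drift shows up as a metric shift. The scoring logic and names are hypothetical and not drawn from the orq.ai platform.

```python
# Minimal sketch of a reusable evaluation harness: the same eval set scores
# offline experiments and live behavior, so drift shows up as a metric shift.
# All names and scoring logic here are hypothetical, not the orq.ai SDK.
from dataclasses import dataclass
from statistics import mean
from typing import Callable

@dataclass
class EvalCase:
    input: str
    expected_keywords: list[str]   # what a "correct" answer must mention

def keyword_coverage(output: str, case: EvalCase) -> float:
    """Score 0..1: fraction of expected keywords present in the output."""
    hits = sum(kw.lower() in output.lower() for kw in case.expected_keywords)
    return hits / len(case.expected_keywords)

def run_evals(agent: Callable[[str], str], cases: list[EvalCase]) -> float:
    """Run the agent over the eval set and return the mean score."""
    return mean(keyword_coverage(agent(c.input), c) for c in cases)

EVAL_SET = [
    EvalCase("What is our refund window?", ["30 days", "receipt"]),
    EvalCase("How do I reset my password?", ["reset link", "email"]),
]

# Offline: gate a release on a minimum score. Online: re-run the same set on a
# schedule and alert when the score drops below the recorded baseline.
def has_drifted(agent: Callable[[str], str], baseline: float, tolerance: float = 0.05) -> bool:
    """Return True if the current eval score has fallen below the baseline."""
    return run_evals(agent, EVAL_SET) < baseline - tolerance
```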
Looking forward, what do you think will define the next generation of enterprise-grade AI agents — and how is Orq.ai positioning itself to become the default operational platform for that world?
Looking ahead, I think what’s going to define the next generation of enterprise AI agents is that every vendor will be providing their own agents. In bigger enterprises, it’s going to be this broad landscape of first-party and third-party agents all working together and calling on each other.
It’s not going to be just one type of agent or one vendor; it’s a whole ecosystem that needs governance and compliance. And that’s where Orq comes in. We’re positioning ourselves as the agent control tower that gives different layers in the organization the right aggregated view and actionable insights to intervene at any stage.
Whether it’s building, scaling, operating, or even offboarding agents, different functions will need different views of that landscape. And we’re going to be the go-to provider for that capability.
Thank you for the great interview. Readers who wish to learn more should visit orq.ai.