Interviews
Onur Alp Soner, CEO and Co-Founder of Countly – Interview Series

Onur Alp Soner is the co-founder and CEO of Countly, a digital analytics and in-app engagement platform. A technologist and self-starter, he bootstrapped Countly from the ground up to give companies more control over how they understand and interact with their users. Under his leadership, Countly has grown into a trusted platform for enterprises worldwide that want to innovate quickly while keeping user privacy at the centre of their growth strategies.
Take us back to the moment that led you to found Countly — what were you personally running into with existing analytics tools that convinced you the data ownership model was fundamentally broken?
Around 13 years ago, when mobile apps were starting to take off, the analytics tools available followed a very particular model. Many of them were free or fairly inexpensive, but the trade-off was that the platform collected and monetized your data, often feeding it into advertising ecosystems. At the time, that was widely accepted as the normal way things worked.
That, however, didn’t sit right with us. Even as a small company, the idea of handing over all our user data just to understand how our product was performing didn’t make sense.
Countly started as a response to that. We wanted to build analytics that companies could fully own and control, which is why we launched it as an open-source, self-hosted platform. The idea was simple: organizations should be able to understand and act on their data without giving it away. That principle is still at the core of Countly today.
Since founding Countly, AI has pushed data ownership from a niche concern into a strategic requirement. When did it become clear to you that this principle would matter far beyond analytics?
In the early years, most conversations around data ownership were framed through privacy or compliance. It was mainly banks, healthcare providers, and governments that cared deeply about where their data lived and who controlled it. For many others, analytics was still seen as a simple reporting tool, so the ownership question didn’t feel urgent.
That perspective started to shift as companies began relying more heavily on data to run their products, not just measure them. Once analytics moved from reporting into decision-making, powering personalization, product changes, and customer engagement, the importance of controlling that data became much clearer. Every digital-first company, from mobility to hospitality, effectively started competing on data, not just on their front-end experience.
AI has accelerated that realization dramatically. You can license or build an AI model, but you can’t buy the behavioral data that reflects how your own customers interact with your product. That data is unique to every organization.
Many organizations believe they are “AI-ready” because they have large volumes of data. From what you see inside real companies, what is usually missing beneath the surface?
Lack of data is usually not the problem. The real issue is the lack of usable data. Many organizations have huge volumes of information, but it is fragmented across different tools, teams, and systems. For example, marketing may have one dataset, product another, and engineering its own telemetry, often stored in different formats with little shared structure.
For AI to be useful, the data underneath needs to be clean, consistent, and contextual. It’s not enough to collect events or logs; you need to understand what those signals actually represent. Without that semantic layer, AI systems are essentially guessing.
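The idea of a semantic layer can be sketched in code. As a hypothetical illustration (the event names and schema below are invented for the example, not Countly's actual data model), a raw event only becomes useful once it is validated against a shared registry that records what the signal means and what it must contain:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical semantic registry: each event name maps to what it means
# and which properties it must carry. Without this layer, "checkout_done"
# is just a string; with it, downstream AI knows what the signal represents.
EVENT_SCHEMA = {
    "checkout_done": {
        "meaning": "User completed a purchase",
        "required": {"order_id", "amount"},
    },
    "session_start": {
        "meaning": "User opened the app",
        "required": {"platform"},
    },
}

@dataclass
class SemanticEvent:
    name: str
    properties: dict
    meaning: str
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def enrich(name: str, properties: dict) -> SemanticEvent:
    """Validate a raw event against the registry and attach its meaning."""
    schema = EVENT_SCHEMA.get(name)
    if schema is None:
        raise ValueError(f"Unknown event: {name}")
    missing = schema["required"] - properties.keys()
    if missing:
        raise ValueError(f"{name} is missing required properties: {missing}")
    return SemanticEvent(name=name, properties=properties, meaning=schema["meaning"])

event = enrich("checkout_done", {"order_id": "A-1001", "amount": 49.90})
print(event.meaning)  # User completed a purchase
```

The point of the sketch is that rejecting malformed events at ingestion, and carrying the meaning along with the data, is what keeps a model from "essentially guessing" at what its inputs represent.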
Another issue is ownership. A surprising number of companies don’t actually control their own data because it sits inside third-party platforms. That makes it difficult to combine datasets, govern how they’re used, or safely apply AI models to them.
So when companies say they’re AI-ready because they have a lot of data, the real question is whether they have a coherent data foundation.
Why does first-party data create durable competitive advantage in AI systems while models themselves are becoming increasingly interchangeable?
What creates durable advantage is not the model itself, but the understanding of users that comes from first-party data. That data reflects how people actually interact with your product, and it is unique to each organization. Models, on the other hand, are increasingly becoming commodities. You can license them, fine-tune them, or switch between providers relatively easily. What you cannot replicate is the behavioral data generated by your own users interacting with your products over time.
That data captures patterns, context, and signals that reflect how customers actually behave. When it is structured and understood properly, it allows companies to build systems that learn continuously from real usage rather than generic datasets.
Where do modern analytics stacks quietly break down when they are repurposed for AI systems rather than reporting, dashboards, and KPIs?
They tend to break down at the point where data needs to move from observation to action. Traditional analytics stacks were designed primarily for reporting. They collect and aggregate data, then present it in dashboards that help teams understand what happened yesterday or last week.
AI systems, however, operate very differently. They require data that is structured, contextual, and available in real time so it can directly influence how a system behaves. When analytics pipelines are built around batch processing and delayed reporting, they struggle to support systems that need to react instantly.
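To make the contrast concrete, here is a minimal sketch (with invented function and event names, not any specific product's API) of the difference between a batch-style pipeline that summarises after the fact and an event-driven one whose state is updated the moment data arrives:

```python
from collections import defaultdict

# Batch style: aggregate a finished window of events, then report.
# Fine for dashboards; too slow for systems that must react instantly.
def batch_report(events):
    counts = defaultdict(int)
    for e in events:
        counts[e["name"]] += 1
    return dict(counts)

# Event-driven style: each event immediately updates live state that a
# downstream system (personalisation, a model's feature store) can act on.
class LiveFeatureStore:
    def __init__(self):
        self.session_count = defaultdict(int)

    def on_event(self, event):
        if event["name"] == "session_start":
            self.session_count[event["user"]] += 1
            # A real system could trigger an action here, e.g. push the
            # fresh feature value to a personalisation model.

events = [
    {"name": "session_start", "user": "u1"},
    {"name": "session_start", "user": "u1"},
    {"name": "purchase", "user": "u1"},
]

print(batch_report(events))  # {'session_start': 2, 'purchase': 1}

store = LiveFeatureStore()
for e in events:
    store.on_event(e)
print(store.session_count["u1"])  # 2
```

Both paths see the same events; the difference is that the second one exposes usable state at every step instead of only after the batch closes.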
How does lack of true data ownership show up operationally when teams attempt to move AI from experimentation into production?
It usually appears as a control problem. Ultimately, if you don’t have control over your data, you don’t have control over your AI. This becomes especially clear when teams move from experimentation to production. During experimentation, teams can often work with small datasets or temporary pipelines, but production systems require consistent access to reliable data across the organization.
In many companies, however, the underlying data resides across different third-party platforms, such as analytics tools, marketing systems, or cloud services. That makes it difficult to combine datasets, apply governance rules, or move data between systems in a controlled way. This is one reason many AI projects remain stuck in pilot phases. Without structured, organization-wide data, it becomes difficult to deploy AI reliably in production.
It also makes it harder to trace how a model reached a decision or to reconstruct the exact data state behind it. Without that level of control, correcting errors or rolling back decisions becomes extremely difficult.
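One common technique for this kind of traceability is to log a fingerprint of the exact input data alongside every model decision. The sketch below is a hypothetical illustration of that idea (the log structure and function names are invented, not a description of Countly's internals):

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical decision log: alongside every model output, record a stable
# hash of the exact input snapshot, so the decision can later be traced,
# audited, or replayed against the same data state.
DECISION_LOG = []

def fingerprint(data: dict) -> str:
    """Deterministic hash of the input snapshot behind a decision."""
    blob = json.dumps(data, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

def predict_and_log(model, features: dict):
    decision = model(features)  # any callable model
    DECISION_LOG.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "input_hash": fingerprint(features),
        "decision": decision,
    })
    return decision

# Toy rule standing in for a model: flag users with more than 3 failed logins.
flag = predict_and_log(lambda f: f["failed_logins"] > 3, {"failed_logins": 5})
print(flag)  # True
```

When the data itself lives in third-party platforms, even this simple pattern breaks down, because the snapshot behind the hash may no longer be retrievable.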
Why do poor data structure, semantics, and context undermine even the most capable AI models?
Even the most capable AI models are only as good as the data they receive. If the underlying data is poorly structured or lacks context, the model has very little understanding of what those signals actually represent.
In many systems, data is collected as isolated events or logs without a clear meaning attached to them. A model may see thousands of interactions, but without proper structure and semantics, it cannot distinguish between what is important and what is simply noise.
Context is equally important. AI systems need to understand how different pieces of data relate to each other over time. Without that context, models may still produce outputs, but they are often unreliable because the system is working with incomplete information.
What warning signs indicate a company is heading toward generic AI outcomes long before those experiences feel generic to customers?
The most basic warning sign is when companies rely on the same external AI models and tools but do very little to develop their own data foundations. If organizations are using the same models but not feeding them their own user and contextual data, the systems are essentially working from the same generic inputs. In that situation, the AI can only produce high-level or generic results. Over time, this leads to products that feel increasingly similar because the intelligence behind them is built on the same limited information.
Another warning sign is when organizations focus heavily on adopting AI models but pay little attention to the structure and quality of their data. AI amplifies what it receives. If the underlying data is messy, fragmented, or poorly structured, the system will simply produce a more sophisticated version of the same problem.
For organizations trying to build AI on top of their own data, what does Countly actually enable that traditional analytics and data platforms do not?
The key difference is how control is built into the platform. In many analytics products, data ownership is something that appears as an option or feature. With Countly, it sits at the core of the system. The platform was designed so organizations do not have to trade control of their data for advanced functionality.
In practice, this means companies can run Countly in their own environment, maintain full control over their data stack, and still access analytics, engagement, and automation capabilities at scale. This becomes especially important when organizations want to build AI on top of their own data. Many traditional analytics tools are built primarily for reporting, which means the data they collect often stays inside third-party dashboards instead of becoming a usable foundation for other systems. Countly takes a different approach by treating analytics as part of the underlying data infrastructure.
As AI systems become embedded in daily decision-making, how should the definition of ethical AI evolve when data ownership is treated as a core design principle rather than a policy checkbox?
Once data ownership becomes a design principle, ethical AI is no longer about auditing models after the fact—it’s about engineering systems where users retain agency over the data that trains them. Ethics becomes infrastructure.
Thank you for the great interview. Readers who wish to learn more should visit Countly.