Ramutė Varnelytė, CEO of IPXO – Interview Series

Ramutė Varnelytė, CEO of IPXO, brings over 15 years of commercial leadership experience and was appointed to lead the company in early 2025. In her role, she drives strategic growth, operational excellence, and market positioning, ensuring IPXO remains at the forefront of innovation in IP address leasing and management.
IPXO, short for Internet Protocol Exchange Organization, was founded in 2021 to address the global IPv4 shortage by creating a dynamic leasing and monetization platform. It has grown into the world's largest fully automated marketplace for IP address leasing. The platform supports millions of IP addresses globally, backed by strong relationships with Regional Internet Registries and a mission to build a unified, efficient IP ecosystem.
We'll be discussing how IPXO uses behavioral-based risk profiling and machine learning to distinguish between benign AI automation and malicious bot activity. The conversation will explore the rise of AI agents in internet traffic, ISP cybersecurity challenges, and strategies for balancing security with operational efficiency.
What trends led to your conclusion that AI agents, rather than just scraping bots, now dominate traffic patterns in many organizations?
Bots are everywhere now, greatly surpassing human traffic. More than a third of bot traffic is made up of the so-called "bad bot" share, which drives a lot of malicious activity. The trend is clear in various incident reports: when companies investigate unusual or harmful traffic, strange spikes in API usage, or other irregularities, the evidence shows that many of these automated systems behave more like AI agents. They pretend to be different browsers or devices (user-agent spoofing), change their IPs quickly (rapid IP rotation), and act inconsistently compared to typical scraping bot activity. Basically, the sheer volume of traffic, as well as the increased complexity of their actions, points to AI agents rather than just simple bots.
What behavioral signals are used to build IP risk profiles, such as request frequency, headers, or geographic patterns?
It's possible to evaluate an IP's "reputation score" without even looking at the actual content of the requests, just by observing how it behaves in the network. These irregular patterns are "content-agnostic", which means you don't need the actual payload data; metadata and specific behavior are enough to evaluate an IP's risk profile. This also works at scale, since suspicious behavior is not hidden by modern encryption like TLS 1.3 or ECH. There is still quite a lot of observable data that simply doesn't get encrypted.
How does your system distinguish between benign AI automation and malicious activity like scraping, botnets, or credential stuffing, especially without inspecting payload content?
It is important to consider how the automation behaves over a certain period of time, not only what content it is requesting. The "good" bots are usually very predictable and follow a certain structure and sequence of steps. Malicious bots, on the other hand, often jump around unpredictably, use throwaway identifiers, keep a rigid "machine-like" pace, and spread requests across many servers to avoid detection. It boils down to extended behavior modeling: evaluating how the automation moves around the site over time.
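One of those signals, the rigid "machine-like" pace, can be sketched with a simple timing statistic: legitimate traffic shows jitter between requests, while scripted automation often does not. The threshold below is an illustrative assumption, not a published detection rule:

```python
import statistics

def looks_machine_paced(timestamps: list[float], cv_threshold: float = 0.1) -> bool:
    """Flag a rigid request cadence: a near-zero coefficient of
    variation in the gaps between requests suggests scripting."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False                  # too little history to judge
    mean_gap = statistics.mean(gaps)
    if mean_gap <= 0:
        return True                   # bursts with no spacing at all
    return statistics.stdev(gaps) / mean_gap < cv_threshold
```

A request arriving exactly every 60 seconds trips the check; naturally jittered timing does not. In practice this would be one signal among many, never a verdict on its own.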
Could you explain how existing ISPs struggle to handle growing proxy traffic while maintaining strong defenses against cyberattacks?
Modern protocols hide more connection details for increased privacy, so it's getting harder and more expensive for ISPs to evaluate and correctly classify traffic. Since it's harder to see what's inside, ISPs rely more on the behavior of traffic, not on its content. The struggle lies in the forced shift away from relying mostly on traffic inspection, which has raised operational costs, and in integrating new threat intelligence tools to fit the new approach.
What are the cybersecurity trade-offs ISPs face if they loosen restrictions to accommodate AI agent traffic?
Loosening filters may reduce false positives and friction, keeping agents running tasks smoothly. But it also potentially increases the number of threats and attacks, as the "bad bot" share already makes up a significant amount of traffic. Looser filters equal higher risks. One way to compromise between the two is a risk-weighted gating approach: for example, if there's suspicious traffic, the first step could be slowing it down rather than blocking it completely. More nuanced controls need to be set up.
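A risk-weighted gate of that kind can be sketched as a simple escalation ladder. The thresholds and tier names here are illustrative assumptions, not a production policy:

```python
def gate(risk_score: float) -> str:
    """Map a 0..1 risk score to a graduated response instead of a
    binary allow/block decision. Thresholds are illustrative."""
    if risk_score < 0.3:
        return "allow"
    if risk_score < 0.6:
        return "throttle"    # slow suspicious traffic down first
    if risk_score < 0.85:
        return "challenge"   # soft challenge before any hard block
    return "block"
```

The intermediate tiers are the point: a benign agent that gets throttled or challenged keeps working, while an attacker loses most of its throughput.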
What core technologies power your real-time risk scoring: are you using machine learning, threat intelligence feeds, or behavioral anomaly detection?
We use a healthy mix of live data monitoring and evaluation, machine learning, and various threat intelligence tools. The goal is to quickly spot any irregularities in behavior, understand the traffic's intent and whether there is reason to believe it's harmful, and, if so, flag high-risk cases and deal with them accordingly.
How does your system integrate with existing ISP or enterprise SOC workflows? Can it trigger alerts or mitigation automatically?
Yes, our system both sends alerts to existing workflows and blocks certain threats at the network level. It can run in various modes, depending on risk tolerance.
Has this profiling system uncovered any surprising attack patterns or misuse cases during early testing or deployments?
Yes, there have been quite a few. One example across the industry is "hallucination crawls", when LLMs invent non-existent URLs and then try to fetch them. Another is sudden surges of requests repeating at exactly the same time intervals, like every 15 minutes or every hour. All of these patterns help us further refine behavior tracking and reduce false positives.
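Surges repeating on a fixed clock are easy to spot once burst start times are collected. Here is a minimal sketch, assuming `surge_times` holds burst timestamps in seconds; the 80% majority rule and 5% tolerance are invented for the example:

```python
def repeats_on_fixed_interval(surge_times: list[float], tolerance: float = 0.05) -> bool:
    """Return True when surges recur at a near-constant interval
    (e.g. every 15 minutes): at least 80% of the gaps between surges
    fall within ±tolerance of the median gap."""
    gaps = sorted(b - a for a, b in zip(surge_times, surge_times[1:]))
    if len(gaps) < 3:
        return False                  # need a few repeats to call it periodic
    median_gap = gaps[len(gaps) // 2]
    near = sum(1 for g in gaps if abs(g - median_gap) <= tolerance * median_gap)
    return near / len(gaps) >= 0.8
```

Surges every 900 seconds (15 minutes) match; irregular bursts do not.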
What are the privacy and security implications of building behavioral-based profiles without inspecting encrypted content, especially under global data protection laws?
Behavioral metadata can be considered personal data, but there are exceptions, for instance when the interest is legitimate, can be justified, and is necessary for security monitoring. That said, the scope of access and the amount of data retained must be proportionate to the risks being evaluated.
How do you prevent false positives from flagging benign automation, like research crawlers or whitelisted AI agents, as cyber threats?
We test new safeguards and rules before deploying them, evaluate multiple signals before taking any action, and use soft challenges first to test the traffic's legitimacy rather than outright blocking it. It's a multi-layered checklist that is constantly being fine-tuned.
Thank you for the great interview; readers who wish to learn more should visit IPXO.