
Ramutė Varnelytė, CEO of IPXO – Interview Series


Ramutė Varnelytė, CEO of IPXO, brings over 15 years of commercial leadership experience and was appointed to lead the company in early 2025. In her role, she drives strategic growth, operational excellence, and market positioning, ensuring IPXO remains at the forefront of innovation in IP address leasing and management.

IPXO, short for Internet Protocol Exchange Organization, was founded in 2021 to address the global IPv4 shortage by creating a dynamic leasing and monetization platform. It has grown into the world’s largest fully automated marketplace for IP address leasing. The platform supports millions of IP addresses globally, backed by strong relationships with Regional Internet Registries and a mission to build a unified, efficient IP ecosystem.

We’ll be discussing how IPXO uses behavioral-based risk profiling and machine learning to distinguish between benign AI automation and malicious bot activity. The conversation will explore the rise of AI agents in internet traffic, ISP cybersecurity challenges, and strategies for balancing security with operational efficiency.

What trends led to your conclusion that AI agents—rather than just scraping bots—now dominate traffic patterns in many organizations?

Bots are everywhere now, greatly surpassing human traffic. More than a third of bot traffic falls into the “bad bot” category, which accounts for a lot of malicious activity. The trend is clear in various incident reports: when companies investigate unusual or harmful traffic, strange spikes in API usage, or other irregularities, the evidence shows that many of these automated systems behave more like AI agents. They pretend to be different browsers or devices (user-agent spoofing), change their IPs quickly (rapid IP rotation), and act inconsistently compared with typical scraping-bot activity. Basically, the sheer volume of traffic, as well as the increased complexity of the behavior, points to AI agents rather than just simple bots.
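To make those signals concrete, here is a minimal sketch of how user-agent churn and rapid IP rotation can be surfaced from ordinary connection logs. The record layout, thresholds, and window size are illustrative assumptions, not IPXO's implementation.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical log record: (client_id, timestamp, ip, user_agent).
# client_id could be a session token or TLS fingerprint; all names are illustrative.
WINDOW = timedelta(minutes=10)
MAX_IPS = 5          # more distinct IPs than this in one window suggests rotation
MAX_USER_AGENTS = 3  # more distinct UAs than this suggests user-agent spoofing

def flag_agent_like_clients(events):
    """Return client_ids whose IP/UA churn exceeds the thresholds within a window."""
    by_client = defaultdict(list)
    for client_id, ts, ip, ua in events:
        by_client[client_id].append((ts, ip, ua))

    flagged = set()
    for client_id, recs in by_client.items():
        recs.sort()  # order by timestamp
        for i, (ts, _, _) in enumerate(recs):
            window = [r for r in recs[i:] if r[0] - ts <= WINDOW]
            ips = {ip for _, ip, _ in window}
            uas = {ua for _, _, ua in window}
            if len(ips) > MAX_IPS or len(uas) > MAX_USER_AGENTS:
                flagged.add(client_id)
                break
    return flagged
```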

What behavioral signals are used to build IP risk profiles—such as request frequency, headers, or geographic patterns?

It’s possible to evaluate an IP’s “reputation score” without even looking at the actual content of the requests, just by observing how it behaves on the network. These irregular patterns are “content-agnostic”, which means you don’t need the actual data; metadata and specific behavior are enough to evaluate an IP’s risk profile. This also works at scale, as suspicious behavior is not hidden by modern encryption like TLS 1.3 or ECH. There is still quite a lot of observable data that simply doesn’t get encrypted.
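As a rough illustration of content-agnostic scoring, the sketch below combines request rate, header churn, and geographic spread into a single score without touching payloads. The feature names and weights are assumptions for illustration only, not IPXO's scoring model.

```python
from dataclasses import dataclass

@dataclass
class IPWindowStats:
    """Metadata aggregated per source IP over a fixed observation window."""
    requests_per_minute: float
    distinct_user_agents: int
    distinct_countries: int     # derived from routing/geo data, not payload content
    nighttime_share: float      # fraction of requests outside local business hours

def risk_score(stats: IPWindowStats) -> float:
    """Combine content-agnostic signals into a 0-1 risk score (illustrative weights)."""
    score = 0.0
    score += min(stats.requests_per_minute / 600.0, 1.0) * 0.4   # sustained high rate
    score += min(stats.distinct_user_agents / 10.0, 1.0) * 0.2   # user-agent churn
    score += min(stats.distinct_countries / 5.0, 1.0) * 0.2      # improbable geo spread
    score += stats.nighttime_share * 0.2                         # odd-hours activity
    return round(score, 3)

print(risk_score(IPWindowStats(900, 12, 6, 0.8)))  # high score, worth a closer look
```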

How does your system distinguish between benign AI automation and malicious activity like scraping, botnets, or credential stuffing—especially without inspecting payload content?

It is important to consider how the automation behaves over a period of time, not only what content it is requesting. The “good” bots are usually very predictable and follow a clear structure and sequence of steps. Malicious bots, on the other hand, often jump around unpredictably, use throwaway identifiers, keep a rigid “machine-like” pace, and spread requests across many servers to avoid detection. It boils down to extended behavior modeling: evaluating how the automation moves around the site over time.
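One hedged way to express "rigid machine-like pace" and "jumping around unpredictably" as numbers is shown below: the coefficient of variation of inter-request gaps captures pacing rigidity, and the entropy of visited paths captures how erratically a client moves around. This is a simplified sketch, not the behavior model described in the interview.

```python
import math
from statistics import mean, pstdev

def timing_regularity(timestamps):
    """Coefficient of variation of inter-request gaps; values near 0 indicate rigid,
    machine-like pacing, while human-driven traffic tends to be much noisier."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None
    return pstdev(gaps) / mean(gaps)

def path_entropy(paths):
    """Shannon entropy of visited paths; structured crawlers tend to produce a
    lower-entropy walk than automation that jumps around unpredictably."""
    counts = {}
    for p in paths:
        counts[p] = counts.get(p, 0) + 1
    total = len(paths)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```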

Could you explain how existing ISPs struggle to handle growing proxy traffic while maintaining strong defenses against cyberattacks?

Modern protocols hide more connection details for increased privacy, so it’s getting harder and more expensive for ISPs to evaluate and correctly classify traffic. Since it’s harder to see what’s inside, ISPs rely more on the behavior of traffic than on its content. The struggle lies in this forced shift: moving away from relying mostly on traffic inspection has raised operational costs and requires integrating new threat intelligence tools to fit the new approach.

What are the cybersecurity trade-offs ISPs face if they loosen restrictions to accommodate AI agent traffic?

Loosening filters may reduce false positives and friction, keeping agents running their tasks smoothly. But it might also increase the number of threats and attacks, as the “bad bot” share already makes up a significant amount of traffic. Looser filters equal higher risks. One way to compromise between the two is a risk-weighted gating approach: if there is suspicious traffic, the first step could be slowing it down rather than blocking it completely. More nuanced controls need to be set up.
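A minimal sketch of what risk-weighted gating might look like, assuming a precomputed 0-1 risk score: suspicious traffic is slowed or challenged before anything is blocked outright. The thresholds and actions are hypothetical.

```python
import time

def gate(risk: float) -> str:
    """Map a risk score to a graduated response instead of a binary allow/block."""
    if risk < 0.3:
        return "allow"
    if risk < 0.6:
        return "throttle"      # slow the client down, e.g. add delay or lower rate limits
    if risk < 0.85:
        return "challenge"     # soft challenge such as a lightweight proof-of-work
    return "block"

def handle_request(risk: float, process):
    """Apply the gate before running the request handler `process`."""
    action = gate(risk)
    if action == "throttle":
        time.sleep(2)          # slow suspicious traffic rather than dropping it
    if action in ("allow", "throttle"):
        return process()
    return action              # hand off to challenge or block logic
```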

What core technologies power your real-time risk scoring—are you using machine learning, threat intelligence feeds, or behavioral anomaly detection?

We use a healthy mix of live data monitoring and evaluation, machine learning, and various threat intelligence tools. The goal is to quickly spot any irregularities in behavior, understand the traffic’s intent and whether there is reason to believe it is harmful, and, if so, flag high-risk cases and deal with them accordingly.
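For readers curious what behavioral anomaly detection can look like in practice, here is a small sketch using scikit-learn's IsolationForest on per-IP behavioral features. The feature choice and data are invented for illustration and do not reflect IPXO's models or pipelines.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row is one IP's behavior over a window:
# [requests_per_min, distinct_user_agents, distinct_paths, mean_inter_request_gap_s]
X = np.array([
    [12,  1,   40, 4.8],
    [15,  1,   55, 4.1],
    [900, 14, 3000, 0.05],   # bursty, high-churn outlier
    [10,  2,   30, 5.5],
])

model = IsolationForest(contamination=0.25, random_state=0).fit(X)
labels = model.predict(X)             # -1 = anomalous, 1 = normal
scores = -model.decision_function(X)  # higher = more anomalous
for row, label, score in zip(X, labels, scores):
    print(row, "anomaly" if label == -1 else "normal", round(float(score), 3))
```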

How does your system integrate with existing ISP or enterprise SOC workflows—can it trigger alerts or mitigation automatically?

Yes, our system both sends alerts to existing workflows and blocks certain threats at the network level. It can run in various modes, depending on risk tolerance.

Has this profiling system uncovered any surprising attack patterns or misuse cases during early testing or deployments?

Yes, there have been quite a few. One example seen across the industry is “hallucination crawls”, when LLMs invent non-existent URLs and then try to fetch them. Another is sudden surges of requests repeating at exactly the same time intervals, such as every 15 minutes or every hour. All of these patterns help us further refine behavior tracking and reduce false positives.
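The fixed-interval pattern mentioned above is straightforward to surface: if one inter-request gap dominates a client's history, the traffic is almost certainly scheduled automation. The sketch below is a simplified illustration with hypothetical inputs, not a production detector.

```python
from collections import Counter

def dominant_interval(timestamps, tolerance=2):
    """Return (interval_seconds, share) for the most common inter-request gap,
    bucketed to `tolerance` seconds; exact 15-minute or hourly repetition shows
    up as a single bucket holding most of the gaps."""
    gaps = [round((b - a) / tolerance) * tolerance for a, b in zip(timestamps, timestamps[1:])]
    if not gaps:
        return None
    interval, count = Counter(gaps).most_common(1)[0]
    return interval, count / len(gaps)

# A client that hits the API every 900 seconds, almost on the dot:
ts = [0, 900, 1801, 2700, 3600, 4501]
print(dominant_interval(ts))   # -> (900, 1.0), a strong periodicity signal
```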

What are the privacy and security implications of building behavioral-based profiles without inspecting encrypted content—especially under global data protection laws?

Behavioral metadata can be considered personal data, but there are exceptions, for instance, if the interest is legitimate, can be justified, and is necessary for security monitoring. That said, the scope of access and the amount of data retained must be proportionate to the risks being evaluated.

How do you prevent false positives from flagging benign automation, like research crawlers or whitelisted AI agents, as cyber threats?

We test new safeguards and rules before deploying them, evaluate multiple signals before taking any action, and use soft challenges first to test the traffic's legitimacy rather than outright blocking it. It’s a multi-layered checklist that is constantly being fine-tuned.
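One simple way to picture "multiple signals before acting" is a voting scheme in which a single signal only triggers monitoring and a soft challenge always precedes a block. The signal names and thresholds below are hypothetical, offered only to make the layered approach tangible.

```python
def decide(signals: dict) -> str:
    """Require several independent signals to agree before escalating, and issue a
    soft challenge before ever blocking, so benign automation gets a chance to pass."""
    hits = sum(1 for fired in signals.values() if fired)
    if hits == 0:
        return "allow"
    if hits == 1:
        return "monitor"            # one signal alone is not enough to act on
    if hits == 2:
        return "soft_challenge"     # e.g. a transparent, low-friction check
    return "block"

print(decide({"ua_churn": True, "ip_rotation": False, "rigid_pacing": True, "geo_spread": False}))
# -> "soft_challenge"
```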

Thank you for the great interview; readers who wish to learn more should visit IPXO.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.