
Chaim Mazal, Chief AI & Security Officer of Gigamon – Interview Series

Chaim Mazal is Chief AI and Security Officer at Gigamon, responsible for global security, information technology, network operations, governance, risk, compliance, internal business systems, and product security. Chaim also leads the company’s strategic AI program, driving governance, cross-functional adoption, and secure, responsible use of AI. Recognized by Security Magazine as one of The Most Influential People in Security 2025, he is a lifetime member of the OWASP Foundation and serves on advisory boards including Cloudflare, GitLab, and Rapid7. He previously held senior leadership roles at several industry leaders, most recently as SVP of Technology and CISO at Kandji.

Gigamon is a cybersecurity and observability technology company that focuses on providing deep visibility into network traffic across hybrid and multi-cloud environments. Its platform captures and analyzes data in motion, including packets, flows, and application metadata, to deliver actionable insights to security, cloud, and IT monitoring tools. This allows organizations to detect hidden threats, improve performance, maintain compliance, and reduce complexity by eliminating blind spots in increasingly distributed and encrypted systems. Trusted by large enterprises and government organizations, Gigamon helps secure and manage modern digital infrastructure at scale.

You’ve had a unique journey from spending time in hacking forums as a teenager to becoming Chief AI and Security Officer at Gigamon. How did those early experiences shape the way you think about modern AI-driven cyber threats?

I got my first computer at eight and learned by experimenting, namely figuring out DOS, reading manuals, and eventually teaching myself Visual Basic. As I got deeper into internet communities, I was fascinated with how software could be manipulated and where systems were fragile. That curiosity evolved into web application penetration testing and securing SaaS development lifecycles.

What’s interesting about modern threats is that AI doesn’t invent new weaknesses, but scales the discovery and exploitation of existing ones. Because of that early perspective, I approach AI security assuming adversarial use from day one, which helps me reverse engineer defenses for our customers at Gigamon.

You’ve observed that AI-generated phishing, ransomware, and malware campaigns are shrinking time-to-impact from weeks to hours. What concrete changes are you seeing in how these attacks are designed and deployed?

AI has fundamentally lowered the barrier to entry for cybercrime. Writing malware, crafting convincing phishing campaigns, and identifying vulnerabilities once required deep technical expertise. Now, these attacks can be accelerated and even fully automated with the help of AI tools. Hackers no longer need a strong technical background to launch sophisticated campaigns because AI can generate code, refine social engineering messages, and help operators troubleshoot in real time.

Because of this increased access, the entire threat landscape has shifted. Without governance frameworks, compliance requirements, or ethical constraints to slow them down, attackers can experiment, adapt, and deploy at speed and at minimal cost. As a result, time-to-impact has shrunk dramatically, and what once took weeks can now happen in hours. Meanwhile, many organizations are still in the early stages of adopting AI defensively, meaning some of the most effective AI use cases are currently being driven by threat actors.

What fundamentally separates an AI-powered cyberattack from traditional automated threats, and can you share an example that makes that difference clear?

What fundamentally separates an AI-powered cyberattack from traditional automated threats is autonomy and persistence. In the past, hackers would run a script and stop when it failed. The “time to live” was limited to how long it took to run the automation. With AI, agents are given an objective and, if they fail, they don’t stop. They continue iterating, looking for alternative paths to achieve the same goal. The time to live is effectively infinite.

For example, with a production build dependency, attackers can insert something as small as two lines of script while keeping the file size unchanged, so a simple size check reveals nothing. When the file is compiled and executed, it launches a terminal instance that installs an autonomous agent into the corporate environment, which then performs a series of malicious tasks that can iterate indefinitely. In the past, binaries were viewed as post-deployment threat vectors. Now, they’re being manipulated constantly, including throughout the build process itself, to execute AI-enabled actions inside corporate environments.

Many organizations still rely on legacy security controls and established playbooks. Why are these approaches failing against AI-enabled attacks, and where do you see the most dangerous blind spots?

We’ve reached a point where if you’re not innovating and thoughtfully incorporating AI into your security solutions and operations, you’re already behind, and that applies to large companies and startups alike. Organizations that fail to integrate AI into their security strategy risk being outpaced by attackers who are moving faster and operating at greater scale.

That said, defending against AI-powered threats doesn’t necessarily require a complete reinvention of the tech stack. Many of the tools that enterprises need are already in place. The real shift is in how effectively those tools are used. It’s about reinforcing the fundamentals, applying existing technologies more intelligently, and adapting defenses to account for untrained but AI-enabled attackers.

Threat actors are using AI creatively and strategically. If enterprises don’t take a similarly strategic approach, they risk creating massive blind spots. Leaders must think differently about how they deploy the tools they already have, integrate AI into their operations, and evolve their playbooks to defend against the speed and scale of today’s attacks.

AI appears to be lowering the barrier to entry for cybercrime. How has this shift changed the profile of today’s attackers, and what risks does that create for enterprises?

AI has made it possible for almost anyone to become a hacker. With AI tools at their disposal, inexperienced actors, even teenagers, can now launch sophisticated phishing campaigns, deploy rootkits, and execute ransomware attacks that once required significant technical expertise.

This shift adds a new level of unpredictability to the threat landscape. Instead of facing a smaller number of highly sophisticated groups, enterprises now face a broader spectrum of actors who are experimenting rapidly, learning in real time, and collaborating across online communities.

For organizations, this means not just more attacks, but greater variability in how they’re executed. Enterprises must prepare for a threat environment where capability is no longer directly tied to experience, and where anyone can carry out campaigns that rival those of historically advanced groups.

Based on what you see in attacker communities, what kinds of AI-driven tools are gaining traction, and how fast are those capabilities improving?

It’s less about singular AI tools helping hackers and more about the sharing of resources. Instead of a single tool running a linear process, attackers are using decentralized agents that cross-reference data and share information collectively across different tooling. The result functions more like an attack mesh, or a swarm, rather than a one-off capability.

What’s most notable about this is the speed of improvement. These capabilities are changing by the day, and many of the tools being utilized were proof of concepts just weeks ago. The pace of innovation and iteration inside attacker communities is accelerating rapidly, with new techniques and tooling emerging almost in real time.

From a defender’s point of view, what signals suggest an organization is facing an AI-driven campaign rather than a more conventional attack?

From a defender’s point of view, one major signal is that reconnaissance no longer looks sequential or compartmentalized. Historically, attackers would follow clear steps: gathering information in phases, spacing activity out, and targeting one surface at a time. Now, all of those activities are happening in tandem. Email gateways, externally available services, account activity, and detection and evasion techniques are all being exercised in unison rather than sequentially.

Another signal is the unification and coordination of activity. What used to be piecemeal is now condensed and shared collectively across tooling, acting more like a swarm than a one-off effort. These agents are continuously iterating, adapting, and expediting decision-making around threat vectors, and they aren’t taking no for an answer. That level of simultaneous activity, coordination, and persistence strongly suggests an AI-driven campaign rather than a conventional attack.

You’ve argued that many current AI security tools are missing these threats entirely. What are they getting wrong, and which capabilities are most urgently needed?

Many current AI security tools are still built around the flawed assumption that prevention is the primary objective. Vendors continue to position AI as a better way to block threats at the perimeter, but attackers are faster, more adaptive, and increasingly patient, using AI, deepfakes, and advanced malware that can evade controls and remain undetected for months.

What’s urgently needed is comprehensive, real-time visibility and stronger detection and response capabilities. Organizations must implement continuous risk assessment, maintain visibility into encrypted traffic where many threats hide, and leverage network-derived telemetry and APIs to understand what’s running across their systems and how data is moving. Resilience today isn’t about keeping every threat out, it’s about seeing, stopping, and learning from threats before they escalate.
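One concrete way network-derived telemetry exposes threats hiding in encrypted traffic is by looking at connection metadata rather than payloads. A hypothetical sketch (function names and thresholds are illustrative assumptions, not Gigamon’s implementation): flagging near-periodic connections between the same two hosts, a common signature of malware beaconing to a command-and-control server, visible without any decryption:

```python
import statistics

def flag_beacons(events, min_count=6, max_jitter=2.0):
    """Flag (src, dst) pairs whose connection timestamps are nearly
    periodic. Regular inter-arrival gaps suggest a timer, not a human,
    and can be computed from flow metadata alone.

    events: iterable of (timestamp_seconds, src_host, dst_host)
    """
    by_pair = {}
    for ts, src, dst in events:
        by_pair.setdefault((src, dst), []).append(ts)
    suspects = []
    for pair, times in by_pair.items():
        if len(times) < min_count:
            continue  # too few connections to establish a rhythm
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Low spread in the gaps means the connections fire on a schedule.
        if statistics.stdev(gaps) <= max_jitter:
            suspects.append(pair)
    return suspects
```

Real detections would add jitter tolerance, volume features, and allow-listing of legitimate periodic traffic (NTP, health checks), but the underlying idea is the same: behavior in the metadata betrays what the encrypted payload conceals.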

As AI accelerates both offense and defense, do you believe defensive AI can realistically keep pace, or are we entering a period where attackers will maintain a structural advantage?

AI is accelerating both sides of the equation, but in the near term, I think that attackers do have the advantage. They face no regulatory constraints, no compliance requirements, and no ethical guardrails. This means they can experiment freely and iterate at an elevated pace. On the other hand, enterprises have to balance innovation with governance, privacy, and operational risk, which naturally slows AI adoption and implementation.

That said, defensive AI can absolutely come out on top if organizations and leaders shift their approach. Success won’t come from using AI purely for prevention; it will require integrating AI into detection, investigation, and response workflows that are backed by real-time visibility. The advantage isn’t permanently on the attacker’s side, but it will remain there until organizations stop relying on prevention alone. The real damage occurs when an attacker breaches the network, often living off the land and waiting to exfiltrate data. Leaders must shift from “can we prevent a breach?” to “how quickly can we detect and remove intruders?”

Looking ahead six to twelve months, which AI-powered attack techniques do you expect to become widespread, and what should security teams be doing now to prepare?

Six to twelve months is an immense amount of time in this environment. Things are realistically changing every six to twelve weeks. The rapid progression of AI makes it difficult to predict specific techniques hackers will use moving forward, so the focus should be less on guessing what is next and more on how to prepare.

Defenders need to leverage the same AI-driven technology attackers are using, with a strong emphasis on defense in depth. That means using AI to cross-reference constant streams of data across endpoints in real time, relying on immutable network telemetry to identify transactional behavior across private cloud, public cloud, and on-prem environments, and feeding that telemetry into the appropriate tooling, so security teams are actively aware when their organization is being canvassed.
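The kind of cross-referencing described above can be made concrete with a small example. The sketch below (record shape, names, and thresholds are all hypothetical assumptions for illustration) reduces flow telemetry from any environment, cloud or on-prem, to simple tuples and flags hosts exhibiting reconnaissance-like fan-out, i.e., touching many distinct ports within a short window:

```python
from collections import defaultdict

def flag_port_scans(flows, window_seconds=60, port_threshold=20):
    """Flag source hosts that touch many distinct destination ports
    within one time window, a crude signature of active canvassing.

    flows: iterable of (timestamp_seconds, src_host, dst_host, dst_port)
    """
    # Bucket distinct ports by (source host, time window).
    buckets = defaultdict(set)
    for ts, src, _dst, port in flows:
        buckets[(src, int(ts // window_seconds))].add(port)
    # Any bucket exceeding the threshold marks its source as a scanner.
    return sorted({src for (src, _w), ports in buckets.items()
                   if len(ports) >= port_threshold})
```

The value comes less from any single heuristic than from feeding the same normalized telemetry into many such detectors at once, so that scanning, beaconing, and anomalous account activity are correlated rather than reviewed in isolation.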

At the same time, the perimeter-first defense mindset needs to be retired. It is not a question of if an AI-powered breach happens, but when. The priority now is early identification, reducing impact, and timely remediation. That includes having a solid incident response plan, applying zero trust principles, enforcing network segmentation, maintaining access management with continuous reviews, and making dynamic adjustments as needed to protect customer data and limit impact.

Thank you for the insightful interview. Readers interested in deep observability and AI-driven network security can visit Gigamon.

Antoine is a visionary leader and founding partner of Unite.AI, driven by an unwavering passion for shaping and promoting the future of AI and robotics. A serial entrepreneur, he believes that AI will be as disruptive to society as electricity, and is often caught raving about the potential of disruptive technologies and AGI.

As a futurist, he is dedicated to exploring how these innovations will shape our world. In addition, he is the founder of Securities.io, a platform focused on investing in cutting-edge technologies that are redefining the future and reshaping entire sectors.