Thought Leaders

AI vs AI: The New Cybersecurity Reality

AI-powered cyberattacks have arrived, confirming a risk that was once theoretical but long anticipated, and they’re heralding a new era in the threat landscape. With AI agents now able to launch complex, end-to-end campaigns in minutes, cybercriminals can pinpoint and exploit enterprise vulnerabilities at machine speed. The rules of engagement have changed: yesterday’s cyber defense can no longer keep pace with today’s AI-driven offense.

As AI supercharges cyber threats, enterprises need to deploy their own AI-powered defensive capabilities to keep pace in terms of speed, scale, and precision. For now, cyber adversaries still follow familiar tactics, techniques, and procedures (TTPs), but AI is accelerating and upgrading their playbook. Moreover, enterprise AI models themselves are also a new target via model poisoning and language-driven social engineering, requiring cyber teams to properly secure their AI capabilities. Therefore, security should be a core part of an organization’s larger AI strategy.

What Differentiates AI-powered Cyberattacks?

In a digital world already saturated by virtually non-stop cyber intrusion attempts, the barrier to entry for a malicious actor has been lowered, increasing not only the volume but also the complexity of attacks. Attackers can now leverage AI or machine learning (ML) to automate, accelerate, or enhance every phase of the attack lifecycle, from reconnaissance and information gathering through exploitation and exfiltration of sensitive data.

Relying exclusively on existing security processes and technologies, without deploying the same advanced technology that adversaries are leveraging, hands the adversary an advantage that is nearly impossible to overcome. Organizations are already overwhelmed monitoring their enterprises against traditional threat actors, and AI-powered attacks will only exacerbate alert fatigue. This necessitates an evolution from human-managed processes to a hybrid human/digital cyber workforce.

Map Every Digital Asset

In today’s world, understanding the potential entry points across an enterprise environment is table stakes, yet still out of reach for many large enterprises. Beyond digital systems and assets is the need for a comprehensive mapping of all identities and access within an organization. Digital identity, which ties physical and behavioral characteristics back to an individual, is well understood, and the controls needed to manage that access already exist.

However, understanding the enterprise-wide landscape of non-human identities (NHIs) is essential to securing digital assets. Realizing the potential value of AI within an organization means giving AI agents access and permission to autonomously complete a business process. Much like cloud environments of the prior security era, today’s agentic-powered workflows spin up and down NHIs at scale, necessitating an advanced ability to manage and track where in the environment they operate and how they are achieving intended missions or roles.

Enterprises should extend identity governance across the agent identity lifecycle and monitor agent actions, just as they monitor human user access today for insider risk behaviors or account compromise from threat actors. As agentic capabilities take on more autonomy and business critical work, understanding what their identity footprint and access pattern looks like is critical to implementing the access and monitoring controls required to protect them and the enterprise from misuse.
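As a hedged illustration of the governance idea above, the sketch below models a minimal registry of non-human identities that flags expired credentials, idle agents, and over-broad scopes. The record fields, names, and thresholds (`NHIRecord`, `audit_nhis`, `max_idle_days`) are illustrative assumptions, not any product’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class NHIRecord:
    """One non-human identity (e.g., an AI agent's service credential)."""
    agent_id: str
    owner: str              # accountable human or team
    scopes: set[str]        # permissions granted to the agent
    created: datetime
    last_seen: datetime     # last observed activity
    expires: datetime       # credential expiry

def audit_nhis(registry: list[NHIRecord], now: datetime,
               max_idle_days: int = 30) -> list[str]:
    """Return findings for identities that should be reviewed or revoked."""
    findings = []
    for rec in registry:
        if now > rec.expires:
            findings.append(f"{rec.agent_id}: expired credential, revoke")
        elif (now - rec.last_seen) > timedelta(days=max_idle_days):
            findings.append(f"{rec.agent_id}: idle {max_idle_days}+ days, review")
        if "admin" in rec.scopes:
            findings.append(f"{rec.agent_id}: broad 'admin' scope, least-privilege check")
    return findings
```

A real deployment would feed this from an identity provider and secrets vault rather than an in-memory list, but the lifecycle questions (who owns it, what can it do, is it still active) are the same.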

Lock Down AI Models

AI capabilities have the promise of unlocking efficiency and productivity but also the potential to be turned against organizations. Introducing AI into every aspect of business operations is no longer a “nice to have” – it’s a requirement for success in today’s competitive landscape. Therefore, securely deploying AI is a necessary component to safely realize the full business potential and outcomes.

The shift to AI-empowered cyber defense is ongoing, while AI models themselves have become targets. Adversaries may attempt to poison the data feeding these models and manipulate them into taking unintended actions or even disclosing sensitive information.

Adversarial attacks against AI can come in different forms, such as poisoning attacks, prompt injection attacks, and others. To appropriately protect AI from potential manipulation, organizations should apply the Cybersecurity and Infrastructure Security Agency (CISA) philosophy of Secure by Design. This starts with the data that feeds the model’s training. Locking down inputs early in the development process sets the foundation for reliable outputs in deployment.

Understanding the inputs that go into developing the models and capabilities is possible through intentional data governance along with control of access to the models themselves. Critical controls for validating desired outputs include data loss prevention (DLP); policy and safety enforcement; grounding via verifiable sources; strict approval controls for high-impact actions; auditability; and continuous testing, including penetration testing of models to battle-harden them.
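A minimal sketch of how a few of these controls might compose, under simplified assumptions: an action allowlist that fails closed, a toy DLP pattern scan, and a human-approval route for high-impact actions. The action names and the `gate` function are hypothetical, not a vendor API.

```python
import re

# Illustrative policy sets; a real deployment would load these from policy config.
HIGH_IMPACT = {"delete_records", "transfer_funds", "change_permissions"}
SAFE_ACTIONS = {"summarize", "search_docs", "draft_email"} | HIGH_IMPACT

# Toy DLP rule: block outputs containing a US SSN-shaped string.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def gate(action: str, output_text: str) -> str:
    """Return 'allow', 'needs_approval', or 'block' for a proposed agent action."""
    if action not in SAFE_ACTIONS:
        return "block"            # unknown action: fail closed
    if SSN_PATTERN.search(output_text):
        return "block"            # DLP hit: possible sensitive data leaving
    if action in HIGH_IMPACT:
        return "needs_approval"   # human-in-the-loop for high-impact moves
    return "allow"
```

The point of the sketch is the ordering: fail closed first, scan outputs second, and reserve human approval for the actions that can do real damage.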

Embed AI Into Security Operations

Until now, applying AI to cyber defense has been tactical, with many organizations bolting AI onto legacy, human-centric processes rather than strategically rethinking how to build AI-centric ones. This is like bolting a V8 engine onto a bicycle. The next evolution of AI enablement will be designing processes from the ground up, with agentic AI and automation native to the design. Tried-and-true security operations disciplines, such as threat detection, threat hunting, and detection engineering, remain imperative to organizational security in the AI era. Transforming current security processes to enable AI augmentation lays a strong foundation for the future AI-empowered security operations center (SOC).

Additionally, automation is not a concept unique to the emergence of AI; security workflows and processes have been automated for years, and mature organizations already operate sophisticated orchestration and automation capabilities. AI, however, enhances rule-based automation and evolves it further, enabling dynamic, adaptable, context-rich automated workflows that can address the speed and agility these emerging AI risks demand.
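One way to picture this hybrid, under stated assumptions: a legacy static rule is kept intact, while a model-supplied risk score (a stand-in here for a real ML model’s output) drives dynamic response tiers. The tier names and thresholds below are illustrative, not an actual SOC playbook.

```python
def triage(alert: dict, risk_score: float) -> str:
    """Combine a static rule with a model score to pick a response tier."""
    # Legacy rule-based automation: known-bad indicators always escalate.
    if alert.get("ioc_match"):
        return "escalate"
    # AI augmentation: a context-rich score drives dynamic handling.
    if risk_score >= 0.8:
        return "auto_contain"   # e.g., isolate host, revoke sessions
    if risk_score >= 0.5:
        return "enrich"         # gather more context before a human look
    return "close"              # likely benign; log and suppress
```

The design choice worth noting is that the deterministic rule runs first: the model score adds adaptability without being allowed to override hard-won detection logic.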

What’s Next?

Security needs to remain top of mind for business and IT leaders to keep critical assets and users safe from malicious exploitation. Secure-by-design AI is a necessity given the speed at which AI models can be deployed and asked to perform ever more critical activities. Cyber teams should think strategically about transforming processes from the ground up to implement new AI capabilities and keep pace in the cyber cat-and-mouse game.

Security leaders can take several concrete steps in the near term: conduct a comprehensive review of current security processes to pinpoint gaps, modernization opportunities, and areas ripe for transformation, and stay on top of emerging AI threats, trends, and technologies. Above all, remain anchored to the foundational security principle of defense in depth that has proven itself over time: layered protection and validation from identity, to endpoint, to network, to data, with robust monitoring and response capabilities that are continually tested.

The transformational potential of AI promises too great an upside to shun its adoption, and the genie is already out of the bottle for threat actors. The role of security leaders in this next age is therefore the same as it has always been: supporting their organizations in achieving business objectives with thoughtful risk mitigation, just now at machine speed with the use of AI.

Kevin Urbanowicz is a principal at Deloitte & Touche LLP and serves as Deloitte’s US Cyber Security Operations leader.

Mark Nicholson is a principal at Deloitte & Touche LLP and serves as Deloitte’s US Cyber AI leader.