
The Future of Cybersecurity: AI, Automation, and the Human Factor


In the past decade, alongside the explosive growth of information technology, cybersecurity threats have evolved dramatically. Cyberattacks, once driven primarily by mischievous hackers seeking notoriety or quick financial gain, have become far more sophisticated and targeted. From state-sponsored espionage to corporate and identity theft, the motives behind cybercrime are increasingly sinister. While monetary gain remains an important motive, it is increasingly overshadowed by the theft of critical data and assets. Attackers extensively leverage cutting-edge technologies, including artificial intelligence, to infiltrate systems and carry out malicious activities. In the US, the Federal Bureau of Investigation (FBI) reported more than 800,000 cybercrime-related complaints filed in 2022, with total losses exceeding $10 billion, shattering 2021’s total of $6.9 billion, according to the bureau’s Internet Crime Complaint Center.

With the threat landscape evolving rapidly, it’s time for organizations to adopt a multi-pronged approach to cybersecurity: one that addresses how attackers gain entry, prevents initial compromise, swiftly detects incursions, and enables rapid response and remediation. Protecting digital assets requires harnessing the power of AI and automation while ensuring skilled human analysts remain integral to the security posture.

Protecting an organization requires a multi-layered strategy that accounts for the diverse entry points and attack vectors employed by adversaries. Broadly, these fall into four categories: 1) Web and network attacks; 2) User behavior and identity-based attacks; 3) Entity attacks targeting cloud and hybrid environments; and 4) Malware, including ransomware, advanced persistent threats, and other malicious code.

Leveraging AI and Automation

Deploying AI and machine learning (ML) models tailored to each of these attack classes is critical for proactive threat detection and prevention. For web and network attacks, models must identify threats such as phishing, browser exploitation, and Distributed Denial-of-Service (DDoS) attacks in real time. User and entity behavior analytics leveraging AI can spot anomalous activities indicative of account compromise or misuse of system resources and data. Finally, AI-driven malware analysis can rapidly triage new strains, pinpoint malicious behavior, and mitigate the impact of file-based threats. By implementing AI and ML models across this spectrum of attack surfaces, organizations can significantly enhance their capability to autonomously identify attacks at the earliest stages before they escalate into full-blown incidents.
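As a concrete illustration of the user and entity behavior analytics described above, the sketch below trains an unsupervised anomaly detector (scikit-learn's `IsolationForest`) on normal session behavior and flags a session whose profile departs sharply from it. The feature names, thresholds, and data are illustrative assumptions, not any specific product's implementation.

```python
# Hypothetical sketch: flagging anomalous user behavior with an
# unsupervised anomaly detector. Features and values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Simulated per-session features: [logins_per_hour, MB_downloaded, failed_auths]
normal = rng.normal(loc=[2, 50, 0.5], scale=[1, 20, 0.5], size=(500, 3))
# A compromised account: burst of logins, mass download, many failed auths
suspicious = np.array([[40, 5000, 12]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

def classify(session: np.ndarray) -> str:
    # predict() returns -1 for anomalies, +1 for inliers
    return "anomalous" if model.predict(session)[0] == -1 else "normal"

print(classify(suspicious))                       # outlier session
print(classify(np.array([[2.0, 50.0, 0.5]])))     # typical session
```

In practice, each attack class would get its own model and feature set; the point here is that the detector learns "normal" from history rather than relying on hand-written signatures.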

Once AI/ML models have identified potential threat activity across various attack vectors, organizations face another key challenge—making sense of the flood of alerts and separating critical incidents from the noise. With so many data points and detections generated, applying another layer of AI/ML to correlate and prioritize the most serious alerts that warrant further investigation and response becomes crucial. Alert fatigue is an increasingly critical problem that must be solved.

AI can play a pivotal role in this alert triage process by ingesting and analyzing high volumes of security telemetry, fusing insights from multiple detection sources including threat intelligence, and surfacing only the highest fidelity incidents for response. This reduces the burden on human analysts, who would otherwise be inundated with widespread false positives and low-fidelity alerts lacking adequate context to determine the severity and next steps.
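The triage step above—fusing model confidence, asset context, and threat-intelligence corroboration into a single priority, then surfacing only high-fidelity incidents—can be sketched roughly as follows. The field names, weights, and threshold are assumptions for illustration only.

```python
# Illustrative alert-triage sketch: score alerts from multiple detection
# sources and surface only the highest-fidelity incidents for analysts.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str              # e.g. "network", "identity", "malware"
    model_confidence: float  # 0..1 score from the detection model
    asset_criticality: int   # 1 (low) .. 5 (crown-jewel system)
    intel_match: bool        # did an indicator match threat intelligence?

def priority(alert: Alert) -> float:
    score = alert.model_confidence * alert.asset_criticality
    if alert.intel_match:
        score *= 1.5   # corroboration from threat intel raises fidelity
    return score

def triage(alerts, threshold=2.0):
    # Keep only high-fidelity alerts, most urgent first
    return sorted((a for a in alerts if priority(a) >= threshold),
                  key=priority, reverse=True)

alerts = [
    Alert("network", 0.30, 1, False),   # low-fidelity noise, filtered out
    Alert("identity", 0.90, 5, True),   # likely account compromise
    Alert("malware", 0.70, 3, False),
]
for a in triage(alerts):
    print(a.source, round(priority(a), 2))
```

A production system would learn these weights from analyst feedback rather than hard-coding them, but the shape of the pipeline—score, correlate, suppress, escalate—is the same.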

Although threat actors have been actively deploying AI to power attacks like DDoS, targeted phishing, and ransomware, the defensive side has lagged in AI adoption. However, this is rapidly changing as security vendors race to develop advanced AI/ML models capable of detecting and blocking these AI-powered threats.

The future for defensive AI lies in deploying specialized small language models tailored to specific attack types and use cases rather than relying on large, generative AI models alone. Large language models, in contrast, show more promise for cybersecurity operations such as automating help desk functions, retrieving standard operating procedures, and assisting human analysts. The heavy lifting of precise threat detection and prevention will be best handled by the highly specialized small AI/ML models.

The Role of Human Expertise

It is crucial to utilize AI/ML alongside process automation to enable rapid remediation and containment of verified threats. At this stage, provisioned with high-confidence incidents, AI systems can kick off automated playbook responses tailored to each specific attack type—blocking malicious IP addresses, isolating compromised hosts, enforcing adaptive policies, and more. However, human expertise remains integral: validating the AI outputs, applying critical thinking, and overseeing the autonomous response actions to ensure protection without business disruption.
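A minimal sketch of that human-in-the-loop pattern: low-risk playbook actions run automatically, while disruptive ones are queued for analyst approval. The attack types, actions, and approval rules are hypothetical examples.

```python
# Hypothetical playbook dispatcher with a human-in-the-loop gate.
# Action names and the approval policy are illustrative assumptions.
from typing import Callable

PLAYBOOKS: dict[str, tuple[Callable[[str], str], bool]] = {
    # attack type -> (automated response action, requires human approval?)
    "phishing":   (lambda t: f"blocked sender {t}", False),
    "ddos":       (lambda t: f"rate-limited {t}", False),
    "ransomware": (lambda t: f"isolated host {t}", True),  # disruptive step
}

approval_queue: list[str] = []

def respond(attack_type: str, target: str) -> str:
    action, needs_human = PLAYBOOKS[attack_type]
    if needs_human:
        # Business-disrupting actions wait for an analyst's sign-off
        approval_queue.append(f"{attack_type}:{target}")
        return f"pending analyst approval for {target}"
    return action(target)

print(respond("phishing", "mailer@bad.example"))  # runs automatically
print(respond("ransomware", "host-42"))           # queued for a human
print(approval_queue)
```

The design choice worth noting is that the approval gate lives in the dispatcher, not in each playbook, so the risk policy can be tuned centrally as confidence in the automation grows.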

Humans bring nuanced understanding to the table, and analyzing new and complex malware threats requires creativity and problem-solving skills that may be beyond machines’ reach.

Human expertise is essential in several key areas:

  • Validation and Contextualization: AI systems, despite their sophistication, can sometimes generate false positives or misinterpret data. Human analysts are needed to validate AI outputs and provide the necessary context that AI might overlook. This ensures that responses are appropriate and proportionate to the actual threat.
  • Complex Threat Investigation: Some threats are too complex for AI to handle alone. Human experts can delve deeper into these incidents, utilizing their experience and intuition to uncover hidden aspects of the threat that AI might miss. This human insight is critical for understanding the full scope of sophisticated attacks and devising effective countermeasures.
  • Strategic Decision Making: While AI can handle routine tasks and data processing, strategic decisions about overall security posture and long-term defense strategies require human judgment. Experts can interpret AI-generated insights to make informed decisions about resource allocation, policy changes, and strategic initiatives.
  • Continuous Improvement: Human analysts contribute to the continuous improvement of AI systems by providing feedback and training data. Their insights help refine AI algorithms, making them more accurate and effective over time. This symbiotic relationship between human expertise and AI ensures that both evolve together to address emerging threats.

Optimized Human-Machine Teaming

Underlying this transition is the need for AI systems that can both learn from historical data (supervised learning) and continuously adapt to detect novel attacks through unsupervised or reinforcement learning approaches. Combining these methods will be key to staying ahead of attackers’ evolving AI capabilities.
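The combination described above can be sketched as two complementary detectors: a supervised classifier that recognizes known attack patterns from labeled history, and an unsupervised model that flags traffic unlike anything in the baseline. All data and features here are synthetic, chosen only to show the shape of the approach.

```python
# Sketch: supervised detection of known attacks plus unsupervised
# detection of novel ones. Synthetic data; not a production design.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
benign = rng.normal(0, 1, size=(300, 4))        # baseline traffic
known_attack = rng.normal(3, 1, size=(300, 4))  # labeled attack history

X = np.vstack([benign, known_attack])
y = np.array([0] * 300 + [1] * 300)

clf = LogisticRegression().fit(X, y)                    # learns known attacks
novelty = IsolationForest(random_state=0).fit(benign)   # learns "normal"

def is_threat(sample: np.ndarray) -> bool:
    known = clf.predict(sample)[0] == 1
    novel = novelty.predict(sample)[0] == -1   # -1 => unlike the baseline
    return known or novel

print(is_threat(np.array([[3.0, 3.0, 3.0, 3.0]])))    # resembles known attacks
print(is_threat(np.array([[-6.0, 6.0, -6.0, 6.0]])))  # novel outlier
```

The second sample fools the classifier (it was never in the training labels) but not the novelty detector—which is exactly the gap the unsupervised layer exists to cover.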

Overall, AI will be crucial for defenders to scale their detection and response capabilities. Human expertise must remain tightly integrated to investigate complex threats, audit AI system outputs, and guide long-term defensive strategy. An optimized human-machine teaming model is ideal for the future.

As massive volumes of security data accumulate over time, organizations can apply AI analytics to this trove of telemetry to derive insights for proactive threat hunting and the hardening of defenses. Continuously learning from previous incidents allows predictive modeling of new attack patterns. As AI capabilities advance, the role of small and specialized language models tailored to specific security use cases will grow. These models can help further reduce ‘alert fatigue’ by precisely triaging the most essential alerts for human analysis. Autonomous response, powered by AI, can also expand to handle more Tier 1 security tasks.

However, human judgment and critical thinking will remain indispensable, especially for high-severity incidents. Undoubtedly, the future is one of optimized human-machine teaming, where AI handles voluminous data processing and routine tasks, enabling human experts to focus on investigating complex threats and high-level security strategy.

Anand Naik, Co-founder and CEO of Sequretek, has worked in the corporate world for over 25 years, including at Symantec, where he was Managing Director for South Asia, and earlier in technology roles at IBM and Sun Microsystems.

Anand is a subject matter expert in cybersecurity. He has worked with several global giants, helping them define their IT security strategy, architecture, and execution models. He is among the top thought leaders in cybersecurity and has participated in various policy programs with the Government of India and other industry bodies. He is responsible for product vision and operations at Sequretek.