
Cybersecurity Experts Defend Against AI Cyberattacks


Not everybody set to use the advantages of artificial intelligence has good intentions. Cybersecurity is certainly one of those fields where both those trying to defend a given cyber system and those trying to attack it are using the most advanced technologies available.

In its analysis of the subject, the World Economic Forum (WEF) cites an example from March 2019, when “the CEO of a large energy firm sanctioned the urgent transfer of €220,000 to what he believed to be the account of a new Eastern European supplier after a call he believed to be with the CEO of his parent company. Within hours, the money had passed through a network of accounts in Latin America to suspected criminals who had used artificial intelligence (AI) to convincingly mimic the voice of the CEO.” For its part, Forbes cites a case in which “two hospitals in Ohio and West Virginia turned patients away due to a ransomware attack that led to a system failure. The hospitals could not process any emergency patient requests. Hence, they sent incoming patients to nearby hospitals.”

This cybersecurity threat is certainly the reason why Equifax and the World Economic Forum convened the inaugural Future Series: Cybercrime 2025. Global cybersecurity experts from academia, government, law enforcement, and the private sector are set to meet in Atlanta, Georgia, to review the capabilities AI can give them in the field of cybersecurity. The Capgemini Research Institute has likewise published a report concluding that building up cybersecurity defenses with AI is imperative for practically all organizations.

In its analysis, the WEF identified four challenges in preventing the use of AI in cybercrime. The first is the increasing sophistication of attackers: the volume of attacks is set to rise, and “AI-enabled technology may also enhance attackers’ abilities to preserve both their anonymity and distance from their victims in an environment where attributing and investigating crimes is already challenging.”

The second is the asymmetry in goals: while defenders must have a 100% success rate, attackers need to succeed only once. “While AI and automation are reducing variability and cost, improving scale and limiting errors, attackers may also use AI to tip the balance.”

The third is the fact that as “organizations continue to grow, so do the size and complexity of their technology and data estates, meaning attackers have more surfaces to explore and exploit. To stay ahead of attackers, organizations can deploy advanced technologies such as AI and automation to help create defensible ‘choke points’ rather than spreading efforts equally across the entire environment.”
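To make the idea of an AI-assisted “choke point” concrete, here is a minimal, purely illustrative sketch of unsupervised anomaly detection concentrated on a single high-value surface such as authentication logs. The feature values, thresholds, and model choice (scikit-learn's IsolationForest) are assumptions for demonstration, not a description of any WEF or vendor system.

```python
# Illustrative sketch: flagging anomalous login events with an unsupervised model,
# one way an organization might concentrate AI-driven monitoring at a "choke point"
# such as its authentication logs. All feature values are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per login: hour of day, failed attempts before success,
# and megabytes downloaded in the following hour.
normal_logins = np.array([
    [9, 0, 12], [10, 1, 8], [14, 0, 20], [11, 0, 15], [16, 1, 10],
    [9, 0, 9],  [13, 0, 18], [15, 1, 11], [10, 0, 14], [12, 0, 16],
])

# Train on traffic assumed to be mostly benign; contamination is a tuning guess.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_logins)

# A 3 a.m. login with many failed attempts and a large download should stand out.
suspicious = np.array([[3, 7, 900]])
print(detector.predict(suspicious))  # -1 means the event is flagged as anomalous
```

In practice the features, training data, and alerting thresholds would come from the organization's own telemetry; the point is simply that effort is concentrated where attacks must pass, rather than spread evenly across the environment.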

The fourth is to achieve the right balance between possible risks and the actual “operational enablement” of defenders. The WEF is of the opinion that “security teams can use a risk-based approach, by establishing governance processes and materiality thresholds, informing operational leaders of their cybersecurity posture, and identifying initiatives to continuously improve it.” Through its Future Series: Cybercrime 2025 program, the WEF and its partners are seeking “to identify the effective actions needed to mitigate and overcome these risks.”

For its part, Forbes has identified four steps for the direct use of AI in cybersecurity, prepared by its contributor Naveen Joshi and presented in an accompanying graphic.
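As a small illustration of what such direct use can look like in code, the sketch below trains a toy phishing-message classifier. It is not drawn from the Forbes graphic; the tiny dataset, labels, and model (TF-IDF features with logistic regression) are all assumptions made purely for demonstration.

```python
# Illustrative sketch of one widely cited direct use of AI in cybersecurity:
# classifying messages as phishing or legitimate. The training set is invented
# for demonstration only and is far too small for real-world use.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, verify your password at this link immediately",
    "Urgent wire transfer required, reply with banking details",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft ready for your review",
]
labels = [1, 1, 0, 0]  # 1 = phishing, 0 = legitimate

# TF-IDF text features feeding a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(messages, labels)

print(model.predict(["Please confirm your password to avoid account suspension"]))
```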

In any case, it is certain that both defenders and attackers in the field of cybersecurity will continue to develop their use of artificial intelligence as the technology itself reaches new stages of complexity.


Former diplomat and translator for the UN, currently freelance journalist/writer/researcher, focusing on modern technology, artificial intelligence, and modern culture.