AI making cyberattacks more effective


In a recent statement, Marijus Briedis, Chief Technology Officer of NordVPN, warned that rapid progress in AI, coupled with the accessibility of tools like OpenAI's ChatGPT, has made hackers' cyberattacks more effective. Briedis pointed out that generative AI can forge authentic-seeming forms, documents, and emails in a company's house style, making it harder to distinguish malicious content from genuine communications.

Since OpenAI unveiled ChatGPT in November 2022, both corporations and individuals have rushed to harness the potential of this versatile AI. Tech giants such as Microsoft, Google, and Salesforce have integrated it into their products, and the cybersecurity sector is using ChatGPT to monitor network activity and build early detection systems.

A comprehensive report from IBM found that security tools incorporating AI significantly reduce the financial and operational consequences of data breaches: a breach at an organisation using AI-assisted security cost an average of £1.8 million, compared with £3.4 million at organisations without such tools.

However, Briedis cautioned that hackers have access to the same AI tools and models. He noted that detected cyberattacks have doubled since ChatGPT's launch in November 2022, attributing the surge to AI lending a new layer of sophistication to hacking operations.

Briedis emphasised: "Hackers have adeptly harnessed AI to bolster their capabilities, streamlining their operations and amplifying their efficacy". He added that AI tools have automated a significant fraction of phishing attacks, and predicted that breaches will grow in both frequency and severity as the use of AI escalates.

AI is playing a pivotal role in hackers' strategies, chiefly in two ways: generating highly personalised phishing content and producing malware code fine-tuned to specific target systems. Hackers are also using large language models (LLMs) to produce documents and emails that incorporate genuine company data in real time, making these materials considerably harder to identify as fraudulent.

Nevertheless, feeding sensitive company data into public AI systems such as ChatGPT can have unintended consequences. Briedis stated: "The proliferation of AI systems heightens the risk of mishandling or misappropriation of confidential information". If an employee uses a public AI tool to generate a report from confidential data, that data could end up training the model itself, potentially helping hackers craft more sophisticated cyberattacks.

A study conducted by Cyberhaven earlier this year found that 11% of the data employees pasted into ChatGPT contained confidential corporate information, posing a future threat if it reaches malicious actors. Briedis warned that employees are increasingly likely to be targeted by phishing emails built around such leaked confidential data.

To counter these threats, OpenAI and other AI laboratories offer enterprise plans that do not use input data for model training. In addition, several enterprise-focused solutions, including those from Databricks and IBM, are trained on company-specific data and restrict access to employees alone. This approach addresses concerns about public exposure of confidential data, yet hackers are exploring other AI applications.

Beyond text, images, and reports, hackers have also used AI, particularly large language models, to refine their code for ransomware attacks or system shutdowns. This enables rapid adaptation and personalisation of malware, automated reconnaissance, and attacks scaled up through automated botnets.

In response, NordVPN's CTO underscores the importance of employee vigilance. Companies are urged to train employees to verify senders, scrutinise URLs, and exercise caution when opening files or clicking links. Keeping software up to date and maintaining robust security measures, including a reliable VPN service, is paramount in the face of these escalating AI-driven threats.
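Some of that URL scrutiny can be partly automated. The sketch below is a minimal illustration in Python, not a tool from NordVPN or any named security product; the helper name and heuristics are this article's own, and real phishing filters are far more thorough. It flags links whose visible text claims one domain while the actual target points elsewhere, a classic phishing tell.

```python
from urllib.parse import urlparse

def suspicious_link(display_text: str, href: str) -> bool:
    """Illustrative heuristic check on an email link.

    Flags two common phishing tells: a raw-IP or punycode target host,
    and a mismatch between the domain shown to the reader and the
    domain the link actually points at. Not exhaustive.
    """
    target = urlparse(href).hostname or ""
    # Raw IP addresses and punycode (xn--) hosts are rarely legitimate
    # in corporate email.
    if target.replace(".", "").isdigit() or target.startswith("xn--"):
        return True
    # Only compare domains when the visible text itself looks like a URL.
    if "." in display_text and " " not in display_text:
        candidate = display_text if "//" in display_text else "http://" + display_text
        shown = urlparse(candidate).hostname
        # The real target should be the shown domain or a subdomain of it.
        if shown and not (target == shown or target.endswith("." + shown)):
            return True
    return False
```

For example, `suspicious_link("paypal.com", "http://paypa1-login.example.net/verify")` is flagged because the visible and actual domains differ, while a link whose text is plain prose ("Click here") passes this particular check and must be judged by its target alone.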