AI Now a Staple in Phishing Kits Sold to Hackers

A growing number of phishing toolkits available on the dark web are offering access to AI technologies, part of a larger trend in the cybercrime world of making it easier for even lesser-skilled bad actors to launch sophisticated attacks, according to Egress Software.

About 82% of phishing toolkits that Egress researchers found being advertised on forums and other parts of the dark web marketplace mentioned deepfakes, and 74.8% referenced AI, according to a recent report by the software maker on the state of phishing.

One such toolkit illustrated in the Phishing Threat Trends Report was being offered for $300 and promised that the buyer could generate a deepfake in five minutes. It’s an example of how cheaply hackers can get their hands on such powerful technologies.

“While the use of AI by cybercriminals has dominated the headlines since November 2021, in reality so far, access to these tools has required financial resources,” the authors of the report wrote. “However, this is changing fast.”

Egress’ report also cited a 28% year-over-year surge in phishing emails sent in the first half of the year, the overwhelming volume of commodity attacks, and the continuing threat of impersonation tactics. Hovering over all of it is AI, the new and powerful weapon being wielded by attackers. It not only makes phishing emails harder to detect but also adds the elements of deepfake videos and audio.

The introduction of generative AI has escalated what already was an ongoing struggle between cybersecurity pros and threat groups, including with phishing and similar social engineering scams. The telltale signs of a phishing email – such as awkward grammar, spelling mistakes, and incorrect email addresses and domain names – can be smoothed over with generative AI chatbots like OpenAI’s ChatGPT and Google’s Gemini.
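
As a hypothetical sketch (not drawn from the Egress report), the snippet below illustrates the kind of simple, rule-based signals defenders have long relied on to spot those telltale signs, such as a mismatched sender domain, urgency phrases, or a bare IP address in a link. AI-polished messages increasingly avoid exactly these giveaways, which is why language-level checks alone are no longer enough:

```python
# Illustrative only: naive rule-based phishing checks of the kind that
# generative AI increasingly defeats. Names and thresholds are hypothetical.
import re
from email.utils import parseaddr

SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "click here immediately",
]

def naive_phishing_signals(from_header: str, body: str, claimed_brand_domain: str) -> list[str]:
    """Return a list of simple red flags found in a message."""
    signals = []
    _display_name, address = parseaddr(from_header)
    sender_domain = address.split("@")[-1].lower() if "@" in address else ""

    # 1. Sender domain does not match the brand the message claims to represent.
    if claimed_brand_domain and not sender_domain.endswith(claimed_brand_domain):
        signals.append(f"sender domain {sender_domain!r} does not match {claimed_brand_domain!r}")

    # 2. Classic urgency / credential-harvesting phrases in the body text.
    lowered = body.lower()
    signals.extend(phrase for phrase in SUSPICIOUS_PHRASES if phrase in lowered)

    # 3. A link that points at a raw IP address instead of a hostname.
    if re.search(r"https?://\d{1,3}(\.\d{1,3}){3}", body):
        signals.append("link points to a bare IP address")

    return signals

if __name__ == "__main__":
    flags = naive_phishing_signals(
        "PayPal Support <support@paypa1-security.example>",
        "Urgent action required: click here immediately http://203.0.113.5/login to verify your account.",
        "paypal.com",
    )
    print(flags)
```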

It’s become such a threat that the FBI issued a warning about it in May during the RSA security conference, and myriad other cybersecurity vendors are banging the drum about the dangers of converging AI with these phishing-as-a-service kits.

Jack Chapman, Egress’ senior vice president of threat intelligence, said in a statement that “one of the most troubling findings is the rapid commoditization of AI in phishing toolkits, which is putting advanced threats into the hands of less sophisticated cybercriminals.”

The As-a-Service Trend

As has happened in IT, threat groups are offering most cyberthreats – from ransomware and distributed denial-of-service (DDoS) to malware and exploits – as a service, letting others buy or lease their technologies and getting a cut of any financial return. Phishing is no different, with the introduction of generative AI making it more difficult to detect and more dangerous.

The phishing toolkits include everything an affiliate needs to run a campaign, including website development software, email templates, sample scripts, and fake login pages, all aimed at tricking victims into handing over sensitive information like passwords and credit card numbers or giving up their money.

As cybercriminals adopt AI to sharpen their attacks and make them more difficult to discover, organizations and MSSPs need to embrace it to counteract the dangers. A study in March by Microsoft and Goldsmiths, University of London found that 87% of UK organizations are vulnerable to attack but that only 27% are using AI in their defenses.

Paul Kelly, director of the Security Business Group at Microsoft UK, noted that well-funded threat groups are leveraging AI to increase the sophistication and intensity of their attacks, adding that “the same AI technologies can help leaders better secure their organization and tip the balance back in their favor.”

Fighting AI With AI

Nicole Carignan, vice president of strategic cyber AI at cybersecurity firm Darktrace, told MSSP Alert that organizations and MSSPs, which serve as the cybersecurity arm of many small and midsize enterprises, need to embrace AI in their operations, adding that it can “act as a force multiplier, augmenting human teams by performing autonomous investigations to lower triage time and accelerate detection of an incident.”

“It is critical that MSSPs and other organizations focus on implementing AI techniques that drive accuracies of detection and data analysis to help uplift teams, enabling security teams to prioritize higher-level strategic efforts, like improving cyber resilience,” Carignan said.

Amit Zimerman, co-founder and chief product officer at Oasis Security, told MSSP Alert that MSSPs can bring AI into their services through platforms that use machine learning techniques for threat detection, automated response, and predictive analytics.

“They can offer AI-powered email security solutions that provide real-time protection against phishing, business email compromise, and other advanced threats,” Zimerman said. “Additionally, MSSPs can use AI to enhance their threat intelligence capabilities, allowing them to stay ahead of emerging attack vectors and provide more proactive security measures to their clients.”
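
As a rough, hypothetical example of the machine-learning approach Zimerman describes, the sketch below trains a small text classifier to score emails for phishing likelihood using scikit-learn. Production email security platforms combine far richer signals (headers, URLs, sender reputation, behavioral context), so this is illustrative only:

```python
# Hypothetical sketch of an ML text classifier for phishing detection.
# The corpus, labels, and model choice here are toy examples, not a product.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled corpus: 1 = phishing, 0 = legitimate.
emails = [
    "Your account has been suspended, verify your password here",
    "Wire transfer needed today, CEO requests gift cards urgently",
    "Agenda attached for Thursday's project sync",
    "Your invoice for last month's cloud usage is attached",
]
labels = [1, 1, 0, 0]

# TF-IDF features over word unigrams and bigrams feeding logistic regression.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score an incoming message; in practice the score would feed quarantine
# decisions or analyst triage rather than a simple print.
incoming = ["Please confirm your password to avoid account suspension"]
print(model.predict_proba(incoming)[0][1])  # estimated probability of phishing
```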
