Abnormal Security, provider of a behavioral artificial intelligence (AI)-based email security platform, has launched CheckGPT, a new tool capable of detecting AI-generated attacks.
The capability determines when email threats, including business email compromise (BEC) and other socially engineered attacks, are likely to have been created using generative AI tools.
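Abnormal has not published how CheckGPT makes this determination, but the general problem it targets, classifying whether a piece of text was machine-generated, is often illustrated with perplexity scoring against an open language model. The sketch below uses GPT-2 via Hugging Face transformers purely as a generic example; the threshold and the "low perplexity suggests AI authorship" heuristic are illustrative assumptions, not Abnormal's method.

```python
# Illustrative only: a generic perplexity-based check for machine-generated text,
# NOT Abnormal's CheckGPT implementation (which is not public).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Score how predictable the text is to GPT-2; lower means more model-like."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

email_body = (
    "I hope this message finds you well. I am writing to kindly request an "
    "urgent update to the payment details for our upcoming invoice."
)

score = perplexity(email_body)
print(f"perplexity: {score:.1f}")
# The threshold below is a made-up example value; real systems combine many signals.
if score < 30:
    print("Low perplexity -- text may be AI-generated; route for extra scrutiny.")
```

In practice a single perplexity score is a weak signal on its own, which is consistent with Reiser's point below that detection should rest on many behavioral signals rather than one test.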
Disrupting the Attack Path
Cybercriminals are using tools such as ChatGPT and its malicious cousin WormGPT to write emails that appear legitimate, scaling their attacks in both volume and sophistication, Abnormal said. In its latest research report, Abnormal documented a 55% increase in BEC attacks over the previous six months. Additional findings included:
Explaining AI’s use in email fraud, Abnormal Security CEO Evan Reiser said:
“As the adoption of generative AI tools rises, bad actors will increasingly use AI to launch attacks at higher volumes and with more sophistication. Security leaders need to combat the threat of AI by investing in AI-powered security solutions that ingest thousands of signals to learn their organization’s unique user behavior, apply advanced models to precisely detect anomalies, and then block attacks before they reach employees. While it’s important to understand whether an email was generated by a human or AI to understand and stay ahead of evolving threats, the right system will detect and block attacks no matter how they were created.”
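Reiser's description, ingest behavioral signals, learn what is normal for each organization, then flag and block deviations, maps onto standard anomaly detection. The toy sketch below uses scikit-learn's IsolationForest on hypothetical per-email features (send hour, recipient count, thread depth, payment-urgency keyword count); the features, data, and blocking decision are assumptions for illustration, not Abnormal's production models.

```python
# Toy sketch of behavior-based anomaly detection for email, using hypothetical
# features; Abnormal's actual signals and models are not public.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" sending behavior learned from an organization's history:
# [send hour, recipient count, reply-chain depth, payment-urgency keyword count]
baseline = np.column_stack([
    rng.normal(11, 2, 1000),   # mostly business-hours send times
    rng.poisson(2, 1000),      # a handful of recipients
    rng.poisson(3, 1000),      # messages usually sit in existing threads
    rng.poisson(0.1, 1000),    # urgent-payment language is rare
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# A suspicious message: 3 a.m., single recipient, no prior thread, heavy urgency language.
incoming = np.array([[3, 1, 0, 5]])
verdict = detector.predict(incoming)[0]      # -1 = anomaly, 1 = normal
score = detector.score_samples(incoming)[0]  # lower = more anomalous

if verdict == -1:
    print(f"Anomalous (score {score:.3f}): quarantine before it reaches the employee.")
else:
    print(f"Within normal behavior (score {score:.3f}): deliver.")
```

The point of the sketch is the shape of the pipeline, baseline first, then per-message scoring, then an automated block decision, which is what lets such a system act regardless of whether the email was written by a human or an AI.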
What’s Different About Abnormal’s Approach?
Abnormal’s approach to stopping advanced email attacks is different from traditional methods, the company said. Here’s how it works (per Abnormal):