Artificial intelligence-powered tools such as ChatGPT and Google’s Bard, and to a degree Microsoft’s Security Copilot, have enabled a new level of credential theft and phishing for sensitive information by hackers, according to Password Manager.
In a survey of 1,000 cybersecurity professionals, Password Manager sought to learn how much of a threat AI-powered tools pose to the “average American.”
AI Raises Hacking Concerns
Key findings from the report include:
Commenting on the findings, Marcin Gwizdala, chief technology officer at Tidio (via Password Manager), said:
“One of the threats that appeared by using AI, in general, is phishing scams. ChatGPT can be easily mistaken for an actual human being because it can converse seamlessly with users without spelling, grammatical, and verb tense mistakes. That’s precisely what makes it an excellent tool for phishing scams.”
Additionally, the study found that 52% of cybersecurity professionals say AI tools have made it "somewhat" or "much easier" for people to steal sensitive information.
“The threat of AI as a tool for cybercriminals is dire,” Steven J.J. Weisman, a leading authority on scams, identity theft and cybersecurity, told Password Manager.
In the report, Weisman explained that AI makes phishing scams far more convincing:
“In particular, many scams originate in foreign countries where English is not the primary language, and this is often reflected in the poor grammar and spelling found in many phishing and spear phishing emails and text messages coming from those countries. Now, however, through the use of AI, those phishing and spear phishing emails and text messages will appear more legitimate.”
Five Recommendations to Guard Against AI Tricks
Daniel Farber Huang, Password Manager’s subject matter expert, offered five recommendations in the blog to help individuals and businesses avoid being victimized by AI-powered ruses: