
Trend Micro Advances Deepfake Detection — Tools MSSPs Can Use


It’s election season in the U.S. and numerous other countries, and that means the threat of deepfake technology creeping into the public discourse around candidates and their campaign messaging is very real.

To combat deepfakes, in which digitally manipulated video is used to convincingly replace one person's likeness, Trend Micro, a cybersecurity company that partners with MSSPs and MSPs, is releasing advanced technology designed to protect all environments from the rapidly growing threat of AI-based attacks and fraud.

Recent Trend Micro research shows a dramatic increase in AI-based tools available on the criminal underground, enabling criminals to launch attacks more easily and at scale.

The new deepfake detection technology will be available soon in the Trend Vision One platform and is already available to consumers via Trend Micro’s new Deepfake Inspector.

Eric Skinner, Trend Micro’s vice president of market strategy, said the timing of the release is tied to observations related to major "opportunistic" events that threat actors tend to exploit.

"While not exclusively linked to elections, the general emergence and growing concern about deepfakes — whether used in consumer scams or targeting enterprises for fraudulent purposes — make this a critical issue to address now," Skinner told MSSP Alert. "Deepfakes can potentially be leveraged in various contexts, including elections, to manipulate public opinion and spread misinformation, which again highlights the importance of our research and solutions in this area."

How Trend Micro is Defeating Deepfakes

Trend Micro said its new technology is going beyond common techniques currently employed, such as image noise analysis and color detection. The Trend Vision One platform is now adding user behavioral elements “to provide a much stronger approach to detecting and stopping deepfakes.” Upon detection, Trend immediately alerts enterprise security teams, enabling them to learn, educate and take proactive measures to prevent future attacks.

Deepfake Inspector can help verify whether a party on a live video conversation is using deepfake technology, alerting users that the person(s) with whom they are conversing may not be who they appear to be, the company said. Analysis takes place in real time and locally, ensuring users' data and privacy are protected at all times.

Trend Micro said that AI technology is being abused to bypass not only human verification but also biometric security measures, such as facial recognition.

How Trend Vision One Helps MSSPs, MSPs

Trend Micro's solution is being used by its service partners, including MSSPs and MSPs, as the capability is built into Trend Vision One endpoints. This integration allows those partners to take full advantage of Trend Micro's advanced security measures against deepfake threats.

"Trend Vision One helps MSSPs by providing them with tools to detect and mitigate the risks posed by deepfakes," said Shannon Murphy, global security and risk strategist at Trend Micro. "This is important for protecting both their infrastructure and their clients' data from sophisticated fraud schemes and misinformation campaigns, enhancing their overall security posture and service offerings."

Cheap AI Tools Generate Deepfakes, Misinformation

Trend Micro recently released research showing that cybercriminals are catching on to the explosion of enterprise AI use. This has resulted in a dramatic increase in AI-based tools available on the criminal underground. These AI tools are cheaper and more accessible than ever, enabling criminals at any skill level to launch attacks more easily at scale. Deepfakes can mislead victims for purposes of extortion, identity theft, fraud or misinformation, such as that associated with a political organization or individual candidate.

Trend Micro research has also revealed a growing preference for exploiting existing large language models (LLMs) through innovative jailbreaking techniques rather than developing bespoke criminal AI tools.

Trend Micro Chief Operating Officer Kevin Simzer said in a statement that several new deepfake tools make it easy for cybercriminals at all skill levels to launch damaging scams, social engineering and security bypass attempts.

Trend Micro cited Gartner analyst Dan Ayoub, who said that readily available, high-quality GenAI applications are now capable of creating photo-realistic video content that can deceive or mislead an audience.

“Given the low barriers to entry in using these tools and their increasing sophistication, developing a methodological approach to detecting GenAI deepfake content has become necessary,” Ayoub said in a statement.

The Damage Deepfakes Cause

Deepfakes pose a significant risk to both businesses and individuals, including financial impacts, job losses, legal challenges, reputation damage, identity theft and potential harm to mental or physical health.

In fact, Trend Micro researchers found that 36% of consumers reported experiencing a scam attempt using a deepfake. The FBI previously warned of deepfake technology being used in conjunction with video calls to carry out business email compromise attacks and to fraudulently apply for remote working positions, according to Trend.

Given the ease with which AI tools can generate deceptive but convincing narratives, adversaries will likely use such tools in 2024, according to CrowdStrike, which avails its Cybersecurity & Election Security Resource Center to voting districts.

These issues were already observed within the first few weeks of 2024, as Chinese actors used AI-generated content in social media influence campaigns to disseminate content critical of Taiwan presidential election candidates, CrowdStrike informed MSSP Alert for an April 2024 article on election security.

Underscoring the threat from AI, Yubico, a security authentication specialist, partnered with Defending Digital Campaigns (DDC) to conduct a joint study on the election security environment, surveying both Democrat and Republican party members in the U.S.

“Given the sudden advancement and uncertainty of AI technology, it’s not surprising that over 78% of respondents are concerned about AI-generated content being used to impersonate a political candidate or create inauthentic content, with Democrats at 79% and Republicans at 80%,” said David Treece, vice president of solutions architecture at Yubico.

The Election Security Risk Profile Tool, developed by the Cybersecurity and Infrastructure Security Agency (CISA) and the U.S. Election Assistance Commission, can help state and local election officials understand the range of risks they face and determine whether they should retain the security services of an MSSP or MSP. Securing Election Infrastructure Against the Tactics of Foreign Malign Influence Operations is another CISA election resource.

More Trend Vision One Features

In support of a zero trust strategy, Trend also recently released new features for Trend Vision One designed to:

  • Centralize management of employees' GenAI access and usage
  • Inspect prompts to prevent data leaks and malicious injections
  • Filter GenAI content to meet compliance requirements
  • Defend against LLM attacks
Jim Masters

Jim Masters is Managing Editor of MSSP Alert, and holds a B.A. degree in Journalism from Northern Illinois University. His career has spanned governmental and investigative reporting for daily newspapers in the Northwest Indiana Region and 16 years in a global internal communications role for a Fortune 500 professional services company. Additionally, he is co-owner of the Lake County Corn Dogs minor league baseball franchise, located in Crown Point, Indiana. In his spare time, he enjoys writing and recording his own music, oil painting, biking, volleyball, golf and cheering on the Corn Dogs.
