How Deepfake Tech Elevates Risk in Banking (and How MSSPs Can Help)

COMMENTARY: If there’s one industry that understands risk, it’s banking. A defining trait of any reputable financial institution is its ability to safeguard assets against threats. As banking has shifted into the digital realm, managing cyber risks has become a critical priority, and MSSPs are stepping up to the challenge.

For years, banks have successfully kept threat actors from compromising their digital infrastructure. While occasional breaches certainly happen, investments in advanced cybersecurity, managed security services, and employee training have significantly reduced risks for these institutions. However, the rise of deepfake technology has dramatically shifted the landscape.

Thanks to the widespread availability of artificial intelligence (AI) models and the rise of nefarious programs on the deep web, banks are now more vulnerable to phishing schemes using deepfake technologies. To maintain their expected level of cybersecurity, banking leaders must understand the tricks and techniques threat actors are using to exploit employees and gain access to secure systems.

MSSPs can play a critical role in helping banks and financial services organizations stay secure, handling complex cybersecurity needs and compliance requirements. Whether a bank needs a penetration test, documentation to show an examiner, or stronger cybersecurity measures (including defenses against deepfakes), an MSSP can help.

What is Deepfake Technology?

Deepfake technology uses AI to create highly realistic content, including videos, audio, and static images. While most commonly associated with video, the technology can replicate voices and faces with shockingly high accuracy. By feeding an AI model just a few samples, users can generate realistic simulations of real people that mimic their appearance, mannerisms, and speech.

The term “deepfake” has been around for some time now; it was coined by a Reddit user back in 2017 to describe using technology to swap celebrities’ faces into real videos. The technology has advanced dramatically since then: while early results were often crude, current deepfake videos can appear so realistic that they easily convince viewers the content is genuine.

There have been multiple examples of famous individuals whose likenesses were used in deepfake videos or images. During the 2024 presidential election cycle, both Donald Trump and Kamala Harris were impacted by AI-generated content used in misinformation campaigns. President Joe Biden was also a victim of deepfake technology when an AI-generated clone of his voice was used for illegal robocalls across New Hampshire.

While deepfakes are regularly used for malicious purposes, it’s worth noting that the technology isn’t inherently malicious. Deepfakes are also widely used across the internet for humorous videos and good-natured content, as seen on platforms like YouTube, TikTok, and Snapchat. The technology is also used regularly in movies and TV, often for de-aging actors or creating non-human characters. What matters is that the technology has become so sophisticated that it’s incredibly convincing, and it will only get better as the years go on.

How Deepfakes Impact Banks

The increasing realism of deepfakes has inspired countless threat actors to use the technology for criminal purposes, most commonly phishing attacks. By pairing phishing, a classic exploit in which threat actors impersonate real people, with cutting-edge deepfakes, attackers aim to trick individuals into sharing sensitive or confidential information: financial credentials, passwords, MFA codes, and personal data.

Phishing has long been a common tactic in digital communications. For instance, a threat actor might send an email to a bank employee asking for the login credentials for a company account. They might spoof an address on the company’s domain and copy its email signature, or compromise an actual employee’s account to impersonate them. The unknowing recipient might believe they’re corresponding with a legitimate colleague and hand over exactly what was requested without a second thought.

Deepfake phishing elevates this form of attack to a more dangerous level. Instead of relying on text-based communications to persuade others to give out information, threat actors can use the technology to impersonate a real person’s face and voice. For example, a threat actor might take public video of a CEO speaking and use it to build a deepfake. They can then contact the company’s employees directly, posing as that executive, and request whatever information they need, and an employee may comply without thinking twice.

This form of attack is extremely troubling for banks for multiple reasons. Banking employees can easily be manipulated into giving out sensitive company information. Consider another example: a customer service representative might take a request from a threat actor impersonating a real employee, armed with stolen credentials and able to defeat voice-based biometric safeguards. The attacker could then direct the representative to perform fraudulent transactions that steal money from legitimate customers.

A single breach from a successful deepfake attack can have profound consequences for banks, with financial theft being the most obvious. Attackers might have other goals in mind, such as crippling IT systems to create chaos, deploying ransomware, or stealing sensitive information. If threat actors succeed, banks land in the spotlight of regulators and valued customers alike. Governing bodies like the FTC will scrutinize a bank for failing to safeguard its operations, which can expose the institution to penalties and severely damage its reputation.

Shielding Banks From Deepfakes

While deepfakes pose a major risk to the financial sector, there are multiple steps MSSPs can take to help banks ensure they’re protected from threats:

Invest in detection tools - Deepfakes have become so realistic that employees may not be able to spot them manually. Automated tools can analyze pixelation, frame rates, audio artifacts, and other elements of audio and video to determine whether deepfake technology is in use (a toy version of one such signal is sketched after this list).

Use alternative authentication methods - Biometric authentication methods, such as face and voice recognition software, have become increasingly insecure for validating the authenticity of banking customers. Banks should consider other forms of authentication, such as behavioral biometrics or passwords backed by multi-factor authentication (MFA); a minimal MFA example follows this list.

Protect sensitive data - Banks should always ensure that sensitive data, especially data belonging to customers, remains safeguarded even in the event of a successful deepfake attack. They must encrypt and back up all sensitive data so that threat actors can’t leverage it for financial gain (see the encryption sketch below).

Regularly update systems - Even if threat actors don’t get all the information they want through a deepfake attack, they can still exploit vulnerabilities in outdated software. It is therefore essential to ensure all software across the company is promptly patched and kept up to date.

Train employees - Employees should have a firm understanding of the current threat landscape so they can do their part to keep operations safe. Ongoing company-wide training is a must to provide employees with the skills to spot suspicious activity and act quickly before situations escalate.
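
To make the detection recommendation concrete, below is a minimal, illustrative Python sketch of one signal an automated tool might compute: the rate of abrupt frame-to-frame pixel changes, a crude proxy for the temporal artifacts that low-quality face swaps can leave behind. The function name, threshold, and file name are all illustrative assumptions; production detection relies on trained models, not a single heuristic like this.

# Toy heuristic: score a video by how often its frame-to-frame change
# spikes far above the median change. A high score is only a hint that
# something is off, never proof of a deepfake.
import cv2          # pip install opencv-python
import numpy as np

def temporal_spike_ratio(video_path: str, spike_factor: float = 3.0) -> float:
    """Fraction of frames whose change vs. the previous frame exceeds
    spike_factor times the video's median frame-to-frame change."""
    cap = cv2.VideoCapture(video_path)
    prev, diffs = None, []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            # Mean absolute pixel difference between consecutive frames.
            diffs.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    if not diffs:
        return 0.0
    median = max(float(np.median(diffs)), 1e-6)
    spikes = sum(d > spike_factor * median for d in diffs)
    return spikes / len(diffs)

# "suspect_call.mp4" is a placeholder; compare the score against a
# baseline measured on known-genuine footage before drawing conclusions.
print(f"spike ratio: {temporal_spike_ratio('suspect_call.mp4'):.3f}")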
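
For the authentication recommendation above, here is a minimal sketch of pairing a salted password check with a time-based one-time password (TOTP) using the pyotp library. The in-memory user record and helper names are illustrative assumptions; a real bank would enroll users through a hardened identity provider.

# Minimal password + TOTP multi-factor check (pip install pyotp).
import hashlib
import hmac
import os
import pyotp

def hash_password(password: str, salt: bytes) -> bytes:
    # Salted, slow hash so a stolen credential database can't be trivially cracked.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

# Enrollment (illustrative in-memory record): store a salted hash plus a
# per-user TOTP secret that is provisioned to the user's authenticator app.
salt = os.urandom(16)
user = {
    "pw_hash": hash_password("correct horse battery staple", salt),
    "salt": salt,
    "totp_secret": pyotp.random_base32(),
}

def login(password: str, otp_code: str) -> bool:
    """Both factors must pass: something you know AND something you have."""
    pw_ok = hmac.compare_digest(hash_password(password, user["salt"]), user["pw_hash"])
    otp_ok = pyotp.TOTP(user["totp_secret"]).verify(otp_code)
    return pw_ok and otp_ok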
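
And for the data protection recommendation, a minimal sketch of encrypting a customer record at rest with Fernet from Python's cryptography package. The key handling shown is deliberately simplified; in production the key would live in an HSM or a managed key vault, never alongside the data it protects.

# Encrypt sensitive records so a successful intrusion yields only ciphertext.
from cryptography.fernet import Fernet  # pip install cryptography

key = Fernet.generate_key()   # in practice: fetched from a key-management service
vault = Fernet(key)

record = b'{"customer_id": "12345", "ssn": "XXX-XX-XXXX"}'  # illustrative record
token = vault.encrypt(record)           # safe to store or back up
assert vault.decrypt(token) == record   # recoverable only with the key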

The growing sophistication of deepfake technology poses major challenges for the banking industry. However, with a proactive approach to cybersecurity and regular training, both crucial services MSSPs can provide, banks can do their part to mitigate deepfake risks and work confidently with digital technology.

MSSP Alert Perspectives columns are written by trusted members of the managed security services, value-added reseller and solution provider channels or MSSP Alert's staff. Do you have a unique perspective you want to share? Check out our guidelines here and send a pitch to [email protected].

Andy Syrewicze

Andy Syrewicze is a 20+ year IT pro specializing in M365, cloud technologies, security, and infrastructure. By day, he’s a security evangelist for Hornetsecurity, leading technical content. By night, he shares his IT knowledge online or over a cold beer. He holds the Microsoft MVP award in Cloud and Datacenter Management.
