COMMENTARY: As we move forward in an era of evolving cybersecurity threats, the time has come to evaluate the security implications of using GenAI tools like ChatGPT and the market’s newest entrant, DeepSeek. It’s bad enough that large language models hand bad actors the tools to create more destructive malware and more convincing imposter phishing schemes. Now we have to face the fact that merely using some of these tools can itself create a frightening security risk.
DeepSeek quickly displaced ChatGPT as the most-downloaded free application in Apple’s App Store, but it has been established just as quickly that the platform epitomizes this risk by sending user data back to its servers in China. The DeepSeek developers are quite up-front about it; their privacy policy states that DeepSeek “…may collect your text or audio input, prompt, uploaded files, feedback, chat history, or other content that you provide to our model and services.” (Forbes)
Businesses that use DeepSeek unwittingly grant unfettered access to their sensitive data, presenting a nightmare for the MSSPs, technology providers, and IT directors working to keep networks protected. Public sector agencies have already begun to curb its use: the U.S. Congress, NASA, and the Pentagon have mandated that their employees refrain from using DeepSeek, and the states of New York, Texas, and Virginia have banned their employees from using it as well. Here in Florida, State CFO Jimmy Patronis recently issued a similar directive forbidding staff in Florida’s Department of Financial Services from using the GenAI engine.
Rather than wait for their data to find its way to China, MSSPs should direct their business customers to proactively discourage employees from using DeepSeek. Not only does DeepSeek present the obvious risk of transmitting sensitive data to a hostile nation, it also integrates with a Wild West of unvetted external data sources, opening the door to malware and other malicious code. That risk of intrusive malware extends to OpenAI-based software like ChatGPT as well.
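In practice, proactively discouraging use often starts with visibility: flagging which internal hosts are resolving a blocked service’s domains. The sketch below is a minimal, illustrative example of matching DNS query names against an organization’s chosen blocklist; the specific domain names in the blocklist are assumptions for demonstration, not an authoritative or complete list.

```python
# Minimal sketch: flag DNS queries that match domains an organization
# has chosen to block. Blocklist entries here are illustrative
# assumptions, not a vetted or complete list of DeepSeek domains.
BLOCKLIST = {"deepseek.com", "chat.deepseek.com"}


def is_blocked(hostname, blocklist=BLOCKLIST):
    """Return True if hostname equals a blocked domain or is a subdomain of one."""
    host = hostname.lower().rstrip(".")
    return any(host == d or host.endswith("." + d) for d in blocklist)


# Example: scan a simple list of queried hostnames and report the hits.
queries = ["api.deepseek.com", "example.org", "chat.deepseek.com"]
flagged = [q for q in queries if is_blocked(q)]
```

An MSSP would typically enforce this at the DNS resolver or firewall rather than in application code, but the matching logic (exact domain plus all subdomains) is the same idea.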
MSSPs serving vertical sectors like healthcare and financial services have even more at stake in the event of a cyberattack or sensitive data breach. Organizations in these sectors have a responsibility under FINRA and HIPAA regulations to protect sensitive data, and any breach can result in exorbitant fines and strict penalties. SMBs are especially vulnerable when faced with a breach, since organizations of that scale typically have fewer internal and financial resources to withstand the downtime and cost of mitigation.
DeepSeek exacerbates the threats associated with securing sensitive data. Even in the guise of a more cost-effective and capable AI engine, it introduces unprecedented risk that could be cataclysmic for a business of any size and for the MSSPs that serve it.
As AI matures and more automation is incorporated into business operations, companies must remain vigilant. They need to ensure that any threats these applications introduce to companies and government agencies are aggressively mitigated, even as the tools promise to boost productivity or alleviate the burden of composing content. Our organizations should use GenAI tools and enjoy the profound benefits these technologies offer. But DeepSeek, frankly, must be closely monitored, or banned outright, to ensure that mission-critical data remains in safe hands.
MSSP Alert Perspectives columns are written by trusted members of the managed security services, value-added reseller and solution provider channels or MSSP Alert's staff. Do you have a unique perspective you want to share? Check out our guidelines here and send a pitch to MSSPAlert.perspectives@cyberriskalliance.com.