Zscaler: As AI Use Increases, So Do AI Risks

Threat researchers for Zscaler’s ThreatLabz unit are seeing what everyone else is: the rapid adoption of AI by enterprises and small companies alike for their business operations. Executives are using emerging AI technology to create greater efficiencies, inform decision-making, and improve customer service.

That may be true, but it also comes with an expanded cyberthreat landscape, the researchers wrote in Zscaler’s 2025 AI Security Report, in which ThreatLabz reported seeing a 3,000% year-over-year jump in 2024 in the use of AI and machine learning tools.

“AI adoption brings serious security risks, from unsanctioned usage (‘shadow AI’) to data exposure,” the researchers wrote in the report. “Even more concerning, threat actors seem to have the upper hand as they weaponize these same tools to amplify attacks. What once required skill now takes minimal effort. What once took hours now takes seconds.”

They also noted the dual role AI now takes in cybersecurity, writing that “AI isn’t just enabling attacks – it’s also now a critical line of defense, powering the fight against these attacks.”

Zero-Trust and MSSPs

Given that, the San Jose, California-based company is urging organizations to adopt approaches like zero-trust, which assumes that any user or device trying to connect to the network is untrustworthy until verified.

The shift to zero-trust architectures and the integration of AI into operations are things that MSSPs and MSPs will have to help companies with, according to Zscaler Chief Security Officer Deepen Desai.

“As customers are embracing a zero-trust architecture, they are eliminating many point products and simplifying the network,” Desai told MSSP Alert. “This shift in cybersecurity architecture creates a tremendous opportunity for [channel partners] to build service offerings that support customers on this transformation journey.”

Organizations will need such services to help simplify their networks and improve security, data protection, and AI-related initiatives, he said.

ChatGPT is the Most Used AI App

ThreatLabz researchers analyzed 536.5 billion transactions across the Zscaler Zero Trust Exchange cloud security platform between February and December 2024 and saw a rapid rise in the use of AI tools. OpenAI’s ChatGPT chatbot was both the most popular application – accounting for 45.2% of the AI and machine learning transactions observed – and the most blocked AI application. Grammarly and Microsoft Copilot rounded out the top three most blocked apps.

Organizations also are sending a lot of data – 3,624 TB – to AI tools, risking its exposure and loss.

“AI tools process and store vast amounts of data, raising concerns about where that data goes,” Desai said. “Some AI providers retain inputs for training, share data with third parties, or use it for advertising, for example, leading to privacy concerns and compliance issues under regulations like GDPR.”

He added that not all AI vendors have strong security controls or standards, “leaving AI tools more vulnerable to data leaks or unauthorized access. This risk is amplified by advancements like open source AI vendor DeepSeek that lack adequate security guardrails.”

Other risks arise from issues such as the quality of the data fed into AI models and the exposure of intellectual property and non-public information.

Open Source AI a Benefit and Risk

ThreatLabz also got a good look at how bad actors are using AI in their malicious campaigns, from AI-powered phishing attacks and fake AI platforms to tools like deepfakes, open-source AI models, and autonomous attack automation. Open source democratizes the use of AI for businesses but also does the same for cybercriminals.

“Open source AI like DeepSeek can easily be exploited for malicious purposes, such as creating a phishing login page in five prompts,” he said. “AI enables adversaries to scale and personalize their attacks with new precision and speed, making it more difficult for traditional security tools to detect and counter threats.”

DeepSeek is the Chinese AI-powered chatbot released in January that claimed to have the same functionality as ChatGPT but at a significantly lower cost. Open-source models like DeepSeek and xAI’s Grok 2 allow more organizations to develop and leverage AI for their businesses but also make it easier for threat groups to use the technology.

Agentic AI's Influence in 2025

DeepSeek is one of several emerging developments that will influence AI in 2025 and beyond, joining agentic AI and the ever-changing regulatory environment. AI agents are software that can work autonomously with little to no human interaction, performing complex tasks, making decisions, pulling in needed data from the internet or other sources, adapting their behavior, and collaborating with other AI agents.

Right behind it is reasoning AI, which can use deduction and induction to reach thoughtful conclusions on its own.

“The growing autonomy of AI systems suggests that security teams will face numerous challenges and risks, emerging in both enterprise adoption of AI agents and their use by attackers,” the researchers wrote.
