
Addressing AI Security Blindspots: Critical Insights for MSSPs


COMMENTARY: We stand at the threshold of a revolution driven by AI. A test conducted at UC San Diego in March 2025 showed an AI model passing the Turing test 73% of the time, being judged human more often than the actual human participants. Innovation is now happening at a pace that even experts struggle to keep up with. And this is only the beginning; AI is not even close to reaching its full potential across verticals.

The generative AI market hit $36.06 billion in 2024 and is projected to grow at an enormous rate, potentially reaching $356 billion by 2030. As these numbers push businesses to rush into implementing AI, security considerations are often overlooked.

This rapid adoption has opened a door to critical vulnerabilities that threat actors are increasingly exploiting. For MSSPs, understanding these AI security blind spots represents both a significant challenge and a strategic opportunity to deliver enhanced value to clients. Organizations that applied AI and automation to security prevention reduced breach costs by an average of $2.22 million compared to those that didn't – a compelling statistic that forward-thinking MSSPs can leverage to differentiate their service offerings.

Shadow AI Vulnerabilities

Shadow vulnerabilities represent a new type of security flaw in open-source AI libraries and models. Unlike traditional vulnerabilities, they often lack CVE identifiers, thereby going undetected by conventional scanning tools. This invisibility makes them particularly dangerous for MSSP clients.

When organizations prioritize rapid deployment over security, they unknowingly create blind spots that sophisticated attackers can exploit. For MSSPs, this presents a critical opportunity to provide specialized monitoring and detection capabilities that conventional security tools miss.

Shadow vulnerabilities have already led to significant security incidents across widely used AI frameworks and supporting libraries such as PyTorch, Keras, Ray, and Jinja2.

Top AI Security Risks

The AI security landscape presents several high-impact risks that MSSPs must address in their service portfolios. These include data breaches and information exposure, resource hijacking, social engineering, and supply chain risks.

Data Breaches and Information Exposure

AI systems are particularly vulnerable to adversarial inputs and API manipulation. Attackers can exploit these weaknesses to extract sensitive data from exposed AI endpoints. One such exposure came to light in March 2025, when security researchers discovered that numerous AI chatbots, particularly those designed for fantasy and role-playing, were leaking user prompts due to misconfigured systems. Of the 400 systems discovered, 117 were found to be leaking data, including conversations that detailed sensitive content.

Data breach scenarios linked to AI systems include data scraping, unsecured training data, and model inversion attacks.

Recommended Action: MSSPs should regularly test and validate exposed AI endpoints for misconfigurations, data leakage, and unintended output exposure, and extend existing DLP capabilities to monitor interactions with LLMs and AI services.
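As a concrete illustration of that recommendation, the minimal Python sketch below probes a list of candidate AI endpoints for unauthenticated access and scans any response body for DLP-style patterns. The endpoint URLs and the patterns are placeholders invented for this example, not a definitive testing tool.

import re
import requests

# Hypothetical endpoints an MSSP might inventory for a client
CANDIDATE_ENDPOINTS = [
    "https://ai.example-client.com/v1/chat/completions",
    "https://bots.example-client.com/api/prompts",
]

# Simple DLP-style patterns for data that should never appear in responses
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "api_key": re.compile(r"(?i)(api[_-]?key|secret)\s*[:=]\s*\S+"),
}

def probe(url):
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        print(f"[unreachable] {url}: {exc}")
        return
    if resp.status_code == 200:
        # Answering without credentials is itself a misconfiguration signal
        print(f"[open] {url} answered without authentication")
        for name, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(resp.text):
                print(f"  -> possible {name} leakage in response body")
    else:
        print(f"[ok] {url} returned {resp.status_code}")

for endpoint in CANDIDATE_ENDPOINTS:
    probe(endpoint)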

Resource Hijacking (LLMjacking)

LLMjacking occurs when attackers subvert AI infrastructure for unintended purposes, driving up operational costs, degrading system performance, or even creating security blind spots. Examples include cryptomining operations piggybacking on AI compute, model hijacking to misuse inference capabilities, and exploitation of zero-day vulnerabilities in orchestration tools or model servers.

Recommended Action: MSSPs should monitor for unusual API call volumes and unexpected usage spikes. Building a baseline for normal AI model behavior and integrating telemetry from GPU/TPU usage can also help.
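To make the baselining idea concrete, here is a minimal sketch that flags an hour of API traffic against a learned baseline using a simple z-score; the hourly counts are synthetic, and a real deployment would feed gateway and GPU/TPU telemetry into the same logic.

from statistics import mean, stdev

# Synthetic hourly inference-request counts for one model endpoint
baseline_hours = [220, 240, 210, 235, 225, 230, 245, 215, 228, 233, 241, 219]
observed = 960  # the hour currently being evaluated

mu = mean(baseline_hours)
sigma = stdev(baseline_hours)
z = (observed - mu) / sigma

THRESHOLD = 3.0  # alert when usage sits more than 3 standard deviations above normal
if z > THRESHOLD:
    print(f"ALERT: {observed} calls/hour (z={z:.1f}) - possible LLMjacking or runaway workload")
else:
    print(f"Within baseline (z={z:.1f})")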

AI-Enabled Social Engineering

Between 2023 and 2024, AI-powered phishing attacks increased by 1,200%. Long gone are the days when a phishing email could be spotted by its awkward grammar or syntax. Deepfake technology marks another leap forward, letting attackers create convincing video impersonations. In one documented case, a finance worker authorized a $25 million fraudulent transfer after participating in what appeared to be a legitimate video call with company executives.

Recommended Action: MSSPs can introduce security awareness training modules that educate employees on deepfake video and audio, and run AI-crafted phishing simulations to test readiness.

Supply Chain Risks in AI

Dependency on third-party AI components introduces risk, particularly if those components contain undetected flaws or malicious backdoors. Attackers often target widely adopted frameworks, datasets, and plugin ecosystems, since a single compromise can ripple across multiple organizations and industries. In 2023, researchers found that some models uploaded publicly to Hugging Face included backdoors for data exfiltration. Supply chain attacks affecting AI reliability include compromised datasets, framework vulnerabilities (in libraries such as TensorFlow, PyTorch, or Ray), and insecure third-party AI plugins.

Recommended Action: MSSPs can curate and monitor software bills of materials (SBOMs) for AI pipelines, covering models, plugins, and libraries.
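One lightweight way to operationalize this, sketched below, is to diff the packages actually installed in an AI pipeline against an approved component list. The approved_ai_sbom.json file and its schema are assumptions made for illustration; a production program would consume a CycloneDX or SPDX document from an SBOM tool.

import json
from importlib.metadata import distributions

# Hypothetical approved component list exported from an SBOM tool
with open("approved_ai_sbom.json") as fh:
    approved = {c["name"].lower(): c["version"] for c in json.load(fh)["components"]}

# What is actually installed in the pipeline's Python environment
installed = {dist.metadata["Name"].lower(): dist.version for dist in distributions()}

for name, version in sorted(installed.items()):
    if name not in approved:
        print(f"UNLISTED component: {name}=={version}")
    elif approved[name] != version:
        print(f"VERSION DRIFT: {name} installed {version}, SBOM lists {approved[name]}")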

Strategic Recommendations for MSSPs

  1. AI-Specific Runtime Monitoring with Telemetry: Deploy solutions that continuously monitor AI systems for anomalous behavior to detect and respond to attacks, even those without a CVE identifier. MSSPs need to train detection logic on AI-specific telemetry.
     Bonus: Consider offering LLM firewalls and prompt behavior analytics as a managed service.

  2. Supply Chain Security Assessment: Develop capabilities to evaluate the security of third-party AI components, libraries, and frameworks in client environments. MSSPs should implement sandboxing for open-source libraries to limit potential damage.
     Bonus: Provide “trust scores” for third-party models and plugins using a mix of threat intel, static analysis, and usage telemetry.

  3. AI Incident Response: Create specialized incident response playbooks for AI security incidents. MSSPs are uniquely positioned to leverage their cross-client visibility to identify emerging AI attack patterns before they become widespread.
     Bonus: Offer clients AI-specific tabletop exercises for executive and technical teams.

  4. Continuous Threat Intelligence: Establish dedicated AI security research teams to track emerging threats and vulnerabilities in popular AI frameworks. Monitor open-source AI platforms (e.g., Hugging Face, GitHub) for rogue or backdoored models; a minimal sketch of this monitoring follows the list. This intelligence should inform both proactive hunting and detection engineering.

  5. Regulatory Navigation: Position as trusted advisors helping clients navigate the regulatory landscape for AI security, including the EU AI Act and NIST AI RMF. Help clients align AI model documentation with these frameworks.
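As an illustration of the open-source model monitoring mentioned in item 4, the sketch below uses the huggingface_hub client to flag public models that ship only pickle-deserialized weight files, a common carrier for embedded payloads. The watchlist search term and the flagging rule are assumptions for this example, not a complete detection pipeline.

from huggingface_hub import HfApi

RISKY_SUFFIXES = (".bin", ".pt", ".pkl", ".ckpt")  # formats loaded via pickle/torch.load

api = HfApi()
# Sample a handful of public models matching a hypothetical watchlist term
for model in api.list_models(search="finance", limit=10):
    files = api.list_repo_files(model.id)
    risky = [f for f in files if f.endswith(RISKY_SUFFIXES)]
    has_safetensors = any(f.endswith(".safetensors") for f in files)
    if risky and not has_safetensors:
        # Weights can only be loaded through pickle -> queue for analyst review
        print(f"REVIEW: {model.id} ships only pickle-based weights: {risky}")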
The MSSP Advantage

AI’s complexity presents a clear opportunity for MSSPs to step in where most organizations lack in-house expertise. While nearly 80% of IT leaders believe they are prepared for AI risks, only about half of practitioners share this confidence.

Forward-thinking MSSPs should position AI security as a premium service offering, using their unique cross-client visibility to detect emerging threats, address shadow vulnerabilities, and guide secure AI adoption.

The global AI cybersecurity market is projected to grow from $22.4 billion in 2023 to over $133.8 billion by 2030. Those who invest now in specialized AI capabilities will not only protect their clients — they'll define the next generation of cybersecurity leadership.


Gökçen Tapkan

Gökçen Tapkan is the director of data research at Black Kite. She leads cutting-edge AI and ML projects, combining her experience in cybersecurity, compliance, and risk. Her background is in cryptography, cybersecurity, and compliance, and she brings 10+ years of expertise in the government sector, particularly in military-level information security. Since 2012, she has been an Expert Evaluator for European Projects. She holds a Master’s degree in computer engineering from Bosphorus University. Gökçen is dedicated to her professional role and is actively engaged in a charitable AI project focused on supporting children on the autism spectrum. Her commitment to leveraging technology for social good is a testament to her passion for making a positive impact.
