AI/ML, MSSP

Leveraging AI in Security: What MSSPs Need to Know Before They Commit


COMMENTARY: MSSPs have long sought to balance automation with human expertise, and with AI-driven security tools gaining popularity, we understand more than ever that convenience can come at a cost. As we find strategies to harness AI, we must be simultaneously focused on upholding and strengthening security.  

Despite the hype, AI isn’t a standalone solution. It has limitations in accuracy, reliability, and cost and isn’t yet capable of enhancing workflows beyond surface-level decision-making.

Also, while the emergence of lower-cost AI models allows for greater adoption, it has increased concerns about security and intellectual property implications. To effectively leverage AI without overly relying on it, MSSPs must address the current gaps in AI-driven security, why human expertise remains irreplaceable, and the business case for adopting AI in a way that delivers real security value.

The Benefits and Limitations of AI in Cybersecurity

Today, AI excels in enriching data, reducing noise, and improving client communications. However, where it falls short is in making real decisions and creating automated workflows. While AI can suggest workflows, human validation is still required to ensure these workflows function correctly before they can be fully automated. 

For instance, AI can automatically kill low-risk threats, such as adware. However, in more severe cases like ransomware, AI interpretations aren’t nuanced enough to independently decide whether to isolate an infected workstation or escalate the issue before isolating an infected server. It still requires human intervention for business-impacting decision-making.
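That split between automated remediation and human escalation can be pictured as a simple policy gate. The sketch below is purely illustrative; the threat categories, asset roles, and function names are hypothetical, and any real SOAR playbook would be far more involved:

```python
# Hypothetical sketch of an automation gate: auto-remediate only
# low-risk detections; anything business-impacting goes to a human.
from dataclasses import dataclass

AUTO_REMEDIATE = {"adware", "pup"}   # low risk: safe to kill automatically
ESCALATE = {"ransomware", "wiper"}   # severe: always page an analyst

@dataclass
class Detection:
    threat_type: str
    asset_role: str  # e.g., "workstation" or "server"

def decide(d: Detection) -> str:
    if d.threat_type in AUTO_REMEDIATE:
        return "auto-kill"
    if d.threat_type in ESCALATE or d.asset_role == "server":
        # Isolating a server is a business-impacting decision, so it
        # requires human approval before any containment action.
        return "escalate-to-analyst"
    return "queue-for-triage"

print(decide(Detection("adware", "workstation")))    # auto-kill
print(decide(Detection("ransomware", "server")))     # escalate-to-analyst
```

The point of the gate is that severity and asset criticality, not the AI's confidence score alone, determine whether a human stays in the loop.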

For large MSSPs that support multiple SIEM platforms, AI plays a role in streamlining investigations. It helps consolidate alerts from various products into a cohesive set of investigation types, maps them to common triage methods, and even assists in initial triage.

Inconsistencies in how different vendors label the same security issue are a significant challenge in this industry. Each vendor has its own terminology: what one calls a threat, another may call a risk. AI helps bridge these gaps by consolidating disparate detections into a uniform, categorized methodology. Again, though, AI is not making decisions here; it is simply recognizing patterns across alerts and standardizing processes to support human analysis.
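At its simplest, this kind of consolidation is a normalization layer that maps vendor-specific labels onto one internal taxonomy. The vendor names, labels, and categories below are invented for illustration only; a production mapping would be far larger and maintained continuously:

```python
# Hypothetical sketch: map vendor-specific alert labels onto a single
# internal taxonomy so analysts triage one investigation type.
VENDOR_LABEL_MAP = {
    ("vendor_a", "Threat.CredentialTheft"): "credential-access",
    ("vendor_b", "Risk: Stolen Credentials"): "credential-access",
    ("vendor_a", "Threat.Adware"): "pup-adware",
}

def normalize(vendor: str, label: str) -> str:
    # Unknown labels fall through to a human-review bucket rather
    # than being silently dropped or guessed at.
    return VENDOR_LABEL_MAP.get((vendor, label), "needs-human-review")

print(normalize("vendor_b", "Risk: Stolen Credentials"))  # credential-access
print(normalize("vendor_c", "Something New"))             # needs-human-review
```

Note the default: anything the mapping has never seen goes to a person, which mirrors the article's point that AI standardizes inputs for human analysis rather than replacing it.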

AI Misidentification and Decision-Making Errors

AI tools have also been known to misidentify threats due to incomplete training data, leading to both false positives and false negatives. For example, if a phishing email does not match all the criteria on which an AI model was trained, it may be incorrectly classified as benign, leaving employees responsible for identifying the threat manually. Users must then retrain the model by manually flagging misclassified emails, a process that still requires human oversight.
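The feedback loop described above, where analysts flag misclassified emails so they can feed the next retraining cycle, can be sketched in a few lines. Everything here (function names, record shape) is hypothetical:

```python
# Hypothetical sketch of the human feedback loop: analysts flag
# misclassified emails, and only disagreements become labeled
# training data for the next retrain.
retrain_queue = []

def flag_misclassification(email_id: str, predicted: str, actual: str):
    if predicted != actual:
        retrain_queue.append({"id": email_id, "label": actual})

# A phishing email the model marked benign gets queued for retraining;
# a correctly classified one does not.
flag_misclassification("msg-001", predicted="benign", actual="phishing")
flag_misclassification("msg-002", predicted="phishing", actual="phishing")
print(len(retrain_queue))  # 1
```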

One of the primary areas in which AI is leveraged today is in early warning systems for compromises, particularly in solutions like Microsoft Defender for Identity and other tools like CrowdStrike Falcon for Identity. While these tools provide highly enriched data, they still make errors because the datasets they rely on may be incomplete. As a result, AI often infers conclusions without having complete and accurate data.

I have witnessed organizations allowing AI to baseline their on-premises user base while failing to connect it to their Azure Active Directory. As a result, accounts were disabled based on incomplete data, leading to productivity issues. The AI model made decisions without requiring all data sources to be available to create a full picture, ultimately causing disruptions in operations. Without proper training and validation, AI can create bigger problems than it solves.

Balancing Automation and Human Expertise

Implementing proprietary AI security solutions is incredibly costly. Large MSSPs with significant investment are developing proprietary AI and LLM models to enhance service delivery, while smaller providers rely on vendor-supplied AI solutions. However, as lower-cost AI models emerge, more providers are considering private AI infrastructure to maintain data ownership and control. This approach keeps sensitive information within the organization's environment, reducing exposure to external AI providers.

AI tools such as Microsoft Copilot for Security introduce convenience but also raise concerns about data privacy. Many AI solutions explicitly state in their user agreements that they collect and reuse harvested data, which presents a significant risk for sensitive environments. Organizations must carefully evaluate whether the benefits of AI outweigh the risks, particularly regarding data exposure and potential misuse.

Is your organization ready to leverage AI? Here’s how to determine if AI is the right fit and key considerations for implementation.

  1. Define your objectives and how you intend to use AI. Ensure your AI adoption aligns with your business and security goals rather than pursuing it simply for the sake of being innovative.
  2. Assess your existing datasets. Make sure the data AI is analyzing is comprehensive, high quality, and as complete as possible. Incomplete data can create gaps in your analysis.
  3. Evaluate your risk and compliance concerns. If you are in an industry with strict data privacy regulations, understand that some AI solutions collect and reuse data, which could pose compliance and security risks.
  4. Evaluate the cost versus ROI. Proprietary solutions require significant investment, while vendor solutions offer a cost-effective alternative. Also, AI's effectiveness in security operations depends on continuous monitoring and retraining. Train staff to understand this methodology, and review models periodically to identify past errors and improve accuracy.

AI can help streamline and simplify security monitoring and management but requires human oversight to ensure it is executed properly. While AI models save time, improper implementation can cause problems, having the opposite effect on your company’s operations.

MSSP Alert Perspectives columns are written by trusted members of the managed security services, value-added reseller and solution provider channels or MSSP Alert's staff. Do you have a unique perspective you want to share? Check out our guidelines here and send a pitch to MSSPAlert.perspectives@cyberriskalliance.com.

Jim Broome

Jim Broome is a seasoned IT/IS veteran with more than 20 years of information security experience in both consultative and operational roles. Jim leads DirectDefense, where he is responsible for the day-to-day management of the company, as well as providing guidance and direction for its service offerings.

Previously, Jim was a Director with AccuvantLABS where he managed, developed, and performed information security assessments for organizations across multiple industries. Prior to AccuvantLABS, Jim was a Principal Security Consultant with Internet Security Systems (ISS) and their X-Force penetration testing team.

