COMMENTARY: Artificial intelligence (AI) is everywhere. Companies of all sizes and in every market, including MSSPs, are deploying or experimenting with how the technology can improve everything from call center operations to threat intelligence to marketing to quality control on the manufacturing floor. McKinsey estimates that generative AI could add up to $4.4 trillion to the global economy.
However, that opportunity comes with risk. Cybercriminals are already using AI to improve the effectiveness of their attacks, and AI also opens companies up to other types of vulnerabilities – some of which we are just beginning to recognize and understand.
How AI is Reshaping Cyberattacks
McKinsey found that while the majority of companies are putting a high priority on AI implementations, more than 90% don't feel they are adequately prepared to do so. That means, in many cases, speed may be prioritized over security. However, if companies apply risk management approaches to their AI deployments, they can reduce the likelihood of their AI solutions being used against them by bad actors or inadvertently creating new security vulnerabilities.
New AI deployments may expose companies in several ways. First, new applications may not be sufficiently secure, which could open backdoor entry to your network via third-party providers. Second, AI-based inward- or outward-facing applications could be tricked into exposing or sharing sensitive data. In addition, inherent flaws in AI (like the potential for model collapse or hallucination) could cause AI-based solutions or automated workflows to act in unpredictable ways, leaving networks or apps exposed to breaches. Finally, data sent to generative AI models must be handled securely and in ways that ensure privacy.
Companies must also guard against untrusted AI models or model-sharing that could introduce malware or result in data breaches. Access keys used for communication among different AI applications should also be managed securely.
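One concrete way to manage access keys more securely is to keep them out of source code entirely and load them from the environment (or a secrets manager) at runtime. The sketch below is a minimal illustration of that practice; the variable name and error handling are assumptions, not a prescribed standard.

```python
import os

def get_api_key(name: str) -> str:
    """Read an API key from the environment instead of hardcoding it.

    The variable name is whatever your deployment pipeline or secret
    store injects; failing loudly on a missing key avoids silently
    falling back to an insecure default.
    """
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"Missing credential: set the {name} environment variable")
    return key
```

The same pattern applies whether the key authenticates one AI service to another or a client to a model API: the credential lives in the runtime environment, can be rotated without a code change, and never lands in version control.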
AI also creates risks outside of cybersecurity. For example, AI chatbots might develop biases that could offend customers or damage your brand. AI algorithms can also generate unreliable outputs, creating downstream design, production, or workflow problems.
A Proactive Approach: Best Practices for AI Security
Before implementing AI technology, how can you ensure your networks and applications are sufficiently protected? There are some recommended best practices MSSPs can implement for greater AI security, including:
- Create a comprehensive view of the potential AI-related risks across use cases and map out options for managing those risks (both technical and non-technical). A cross-functional team should be established for this task to review and validate risk assessments.
- Implement a governance structure that can include requiring references and fact-checking for AI responses, keeping humans in the loop, and protecting against problematic third-party data usage.
- Embed the governance structure in an operating model and provide training for end users. An AI steering group should meet regularly to evaluate risks and mitigation strategies.
- Automate data governance and information management (including archiving and deletion) to help avoid having employees overshare or expose sensitive information. Role-based access and elimination of manual intervention can reduce the risk of human error.
- Reassess your data backup and recovery capabilities. AI tools like Microsoft Copilot and others will exponentially increase the volume of data generated across every company. Ensure you have sufficient storage in multiple locations and regular backups to help mitigate against system failures, cyberattacks, and other disasters. This will be critical for managing AI-generated data and ensuring you have sufficient data to train new AI applications.
- Conduct customer training on cybersecurity awareness and AI risk management. Establish acceptable use policies for AI and regularly train staff about proper usage and the potential risks of AI-based workflows and solutions. There should also be rules around using public AI tools like ChatGPT.
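The role-based access control recommended above can be reduced to a simple policy check before an AI tool retrieves or surfaces a document. The roles and sensitivity labels below are illustrative assumptions; in practice they would come from your identity provider and data classification scheme.

```python
# Minimal sketch of role-based access for AI-indexed content.
# Roles and labels here are hypothetical examples, not a standard.
ROLE_PERMISSIONS = {
    "analyst": {"public", "internal"},
    "admin": {"public", "internal", "restricted"},
}

def can_access(role: str, document_label: str) -> bool:
    """Return True only if the role's permitted labels include the document's label.

    Unknown roles get an empty permission set, so access is denied by
    default rather than granted by accident.
    """
    return document_label in ROLE_PERMISSIONS.get(role, set())
```

Enforcing a check like this in the retrieval layer, rather than relying on employees to decide what to share with an AI assistant, is one way to reduce the human-error risk the bullet describes.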
AI has the potential to unlock new levels of innovation and productivity. But if companies do not fully understand the risks around AI and follow best practices to ensure the secure use of the technology, they will not be able to use AI to its full potential – and they could leave themselves open to new and difficult-to-detect vulnerabilities.
MSSP Alert Perspectives columns are written by trusted members of the managed security services, value-added reseller and solution provider channels or MSSP Alert's staff. Do you have a unique perspective you want to share? Check out our guidelines here and send a pitch to [email protected].