
How to Responsibly Bring AI Into Cybersecurity


Guest blog courtesy of D3 Security.

As artificial intelligence (AI) has quickly become a critical component of many security tools, the topic of AI has become inescapable for MSSPs. Bringing powerful AI tools into the security operations center (SOC) offers major opportunities, but it also carries serious risks.

Anthony Green is an expert at the intersection of AI and cybersecurity. He is a member of the AI Ethics Advisory Panel for the Digital Governance Council and a former president of the ISACA Vancouver chapter. D3 recently hosted Anthony on our podcast, Let’s SOC About It, where the conversation covered many key considerations for MSSPs in the AI era.

While AI in cybersecurity isn’t new, the landscape has evolved dramatically with the emergence of generative AI. Green explains that organizations must approach AI implementation with the same rigor applied to any critical security infrastructure. He advocates for comprehensive vendor evaluation — examining everything from data processing locations to training methodologies. This due diligence becomes particularly crucial when dealing with AI systems that make automated decisions affecting security operations, where accuracy and reliability are paramount.

Green recommends approaching AI implementation through the familiar lens of cloud security principles while adding necessary considerations for ethics and bias. Organizations must scrutinize AI vendors with the same rigor applied to any cloud service provider by examining encryption standards, access controls, and vulnerability management practices. Additionally, security teams should work in concert with privacy teams to establish proper guardrails, especially when handling sensitive data through AI systems. You can watch the full interview with Anthony here.
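To make that kind of due diligence concrete, here is a minimal sketch (in Python) of how an MSSP might codify an AI vendor evaluation checklist. The criteria names, scoring, and class design are hypothetical illustrations only, not a framework endorsed by Green or the Digital Governance Council; a real review would follow a formal questionnaire.

```python
from dataclasses import dataclass, field


@dataclass
class AIVendorAssessment:
    """Hypothetical checklist mirroring the cloud third-party risk
    questions discussed above (encryption, access controls, scanning)."""
    vendor: str
    answers: dict = field(default_factory=dict)

    # Example criteria only -- not an official or exhaustive list.
    CRITERIA = [
        "Data encrypted in transit and at rest",
        "Documented access controls and role separation",
        "Regular vulnerability scanning and patching",
        "Disclosed data processing locations",
        "Customer data excluded from model training by default",
    ]

    def record(self, criterion: str, passed: bool, note: str = "") -> None:
        """Record the outcome of reviewing a single criterion."""
        self.answers[criterion] = {"passed": passed, "note": note}

    def unresolved(self) -> list:
        """Return criteria that failed or were never reviewed."""
        return [
            c for c in self.CRITERIA
            if not self.answers.get(c, {}).get("passed", False)
        ]


# Usage: flag any gaps before onboarding the vendor.
review = AIVendorAssessment(vendor="ExampleAI")
review.record("Data encrypted in transit and at rest", True)
review.record("Customer data excluded from model training by default", False,
              note="Opt-out only; requires enterprise tier")
print(review.unresolved())
```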

Episode Highlights

1) Evolution of AI in Cybersecurity (1:21) Anthony reveals that AI has been a part of cybersecurity for over a decade and gives a brief overview of its applications.

2) Risk Assessment Framework (3:11) Key discussion about approaching AI integration through the familiar lens of cloud software security. Anthony emphasizes evaluating AI systems using established third-party risk management principles, including encryption, access controls, and vulnerability scanning.

3) AI Bias and Ethics Considerations (6:03) A critical exploration of AI bias through real-world examples, including problematic cases from major tech companies. Anthony introduces the Digital Governance Council’s 40-question framework, which draws on the EU AI Act and the OWASP Top 10 guidelines.

4) Organizational AI Ethics Structure (18:12) A detailed breakdown of how companies should structure AI governance, comparing it to application security models and emphasizing the need for collaboration between security teams, privacy teams, and end users.

5) Data Protection Strategies (19:58) Insights into Microsoft's approach to AI implementation, which emphasizes proper data access controls to prevent unauthorized information exposure (a simple guardrail of this kind is sketched below).
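As a loose illustration of the data guardrails Green describes, the sketch below redacts obviously sensitive fields from text before it is sent to an external AI service. The regex patterns and the redact function are hypothetical and deliberately simplistic; a production deployment would rely on the provider's access controls plus a dedicated data loss prevention or classification layer rather than a few regexes.

```python
import re

# Hypothetical, intentionally simple patterns -- not an exhaustive DLP ruleset.
REDACTION_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ipv4": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}


def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label.upper()}]", text)
    return text


# Usage: scrub an analyst's note before it leaves the SOC.
note = "Alert from 10.20.30.40, reported by jane.doe@example.com, key sk-ABCDEF1234567890XYZ"
print(redact(note))
```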

