The U.S. House of Representatives has banned congressional staffers from using Microsoft's Copilot for Microsoft 365 AI chatbot.
Axios reported on March 29 that House Chief Administrative Officer Catherine Szpindor declared Microsoft Copilot “unauthorized for House use” and that the Office of Cybersecurity has deemed it a risk of leaking House data to non-House-approved cloud services.
Copilot has been removed from and blocked on all House Windows devices. In June 2023 the House also restricted staffers' use of ChatGPT, banning the free version and only allowing limited use of the paid subscription version.
Is Congress overreacting to the data leak threat of Copilot? MSSP Alert asked an MSSP about the security and data privacy offered by Microsoft in Copilot for 365.
Quorum Cyber’s Confidence in Copilot
While the House has its reservations about Copilot, Graham Hosking, solutions director for Data Security & AI at Quorum Cyber — an MSSP that participated in the Copilot for Security Partner Private Preview — gave assurances that the Copilot products are secure. Quorum Cyber is also a finalist for two Microsoft Security Excellence Awards: Security MSSP of the Year and Security Customer Champion.
“I have confidence in Copilot's capacity to transform the way in which data is found by end users, in line with Microsoft's dedication to privacy and data security,” Hosking told MSSP Alert. “Microsoft stresses that Copilot is conceived with privacy and data security as its foundation, giving precedence to user privacy and ensuring that customer data is safeguarded and managed with the greatest diligence. However, there's work to be performed in readiness for these new features.”
Hosking explained that the apprehensions leading to the U.S. House ban on Copilot for Microsoft 365 usage highlight the significance of stringent privacy and data security protocols, which are central tenets of Microsoft's strategy.
“Microsoft's Copilot privacy documentation delineates the company's pledge to transparency, accountability and adherence to global privacy regulations,” he said. “Furthermore, data security considerations, such as comprehending current permissions, data classification, role-based access and the sensitivity of data, are pivotal in reducing the likelihood of sensitive information being inadvertently revealed through Copilot for M365.”
Hosking noted that employing built-in security solutions from Microsoft 365 ensures that data is secured by default. However, accurate configuration and application are crucial to fully capitalize on these advantages.
“So, from a compliance standpoint, utilizing AI permits end-users to locate information more efficiently than ever,” he said. “Yet it is vital that data security measures are incorporated seamlessly into the process.”
Hosking added that Quorum Cyber “recognizes the necessity for a prudent approach” to the adoption of Copilot within government and business sectors — considering the confidential nature of data and the regulatory environment. But he believes the appropriate safeguards are in place.
“Microsoft's Copilot privacy and data security information acts as an invaluable asset for organizations assessing the platform's appropriateness for their specific security and privacy needs,” he said. “By conforming to Microsoft's privacy principles and utilizing the forthcoming government-specific tools, organizations can bolster their security stance while adhering to privacy regulations and ensuring data security by default.”
Copilot Spurs Product Development
Varonis Systems, a data security specialist and MDR provider, is confident enough in Copilot’s security that it has brought to market Varonis for Microsoft 365 Copilot — billed as the “industry's first purpose-built solution to secure Microsoft's AI-powered productivity tool before and after deployment.”
Varonis said its new offering builds on its existing Microsoft 365 security suite, adding new capabilities to monitor Copilot data access in real time, detect abnormal Copilot interactions, and automatically limit sensitive data accessible by both humans and AI agents.
“Strong data security posture is necessary to roll out Copilot safely,” said David Bass, Varonis’ executive vice president of engineering and chief technology officer. “However, once you deploy, you need ongoing visibility into what Copilot is doing, what sensitive data it's accessing, and the ability to detect abnormal behavior and policy violations in real time. Varonis for Microsoft 365 Copilot does that.”
Copilot for Security Makes Market Debut
Microsoft Copilot for Security officially debuted on April 1. It provides a live, comprehensive view of a user’s security estate that lets them consistently evaluate and enhance their protection — to increase efficiency and cooperation with a simplified, natural language-based user experience.
Over the past six months, Microsoft has partnered with more than 100 MSSPs like Quorum Cyber and independent software vendors in a Copilot for Security Partner private preview. These organizations have shaped Copilot’s development by testing and refining new scenarios and providing feedback on product development and operations for future releases.
Federal agencies that use Microsoft’s Azure Government service now have access to its Azure OpenAI Service through the cloud platform, permitting use of Microsoft’s AI tools in a more regulated environment.
Microsoft Plans Tools for Higher Security Clearance
According to Candice Ling, senior vice president of Microsoft Federal, Azure OpenAI Service in Azure Government "enables agencies with stringent security and compliance requirements to utilize this industry-leading generative AI service at the unclassified level."
Microsoft is submitting Azure OpenAI Service for FedRAMP High authorization from the Joint Authorization Board (JAB), and will also submit the service for Department of Defense (DoD) Impact Level (IL) 4 and 5 authorization, Ling said in her blog.
Microsoft hopes that a suite of government-oriented tools it plans to roll out this summer will address Congress' concerns, according to a report from Axios.
In a blog post in October 2023, Ling provided a product roadmap for its AI in government environments, noting that it was developing tools to meet high security clearance needs.
"We began enabling government agencies to adopt new AI technologies for mission critical solutions with Azure OpenAI Service in the Azure commercial cloud. Azure OpenAI Service is included as a service within the FedRAMP High authorization for our commercial cloud, enabling many agencies to securely access new large language models (LLMs) like GPT 3.5 and GPT 4 for their unclassified (non-CUI) workloads," the post said.
"However, we recognize that many agencies handle sensitive data requiring higher levels of security and compliance. To enable these agencies to fully realize the potential of AI, over the coming months Microsoft will begin rolling out new AI capabilities and infrastructure solutions across both our Azure commercial and Azure Government environments to support critical government needs and further drive innovation for mission critical workloads."