Biden Urges AI Adoption for National Security

The national security memorandum for AI released by the White House this week lays out an ambitious roadmap, covering everything from developing the technology securely and in line with the country’s values to ensuring the United States remains the global leader in the fast-emerging market.

The guidelines from the Biden Administration aim to scale government adoption of AI for national security purposes, protect and expand the domestic supply chain for AI technology, lead the world in developing international standards, and keep China and other foreign adversaries that are developing AI technologies at bay.

Much of the innovation to date, particularly since the emergence of generative AI almost two years ago, has come from the private sector, a senior administration official said during a briefing with journalists before the memorandum was released. The government needs to continue supporting and pushing private development while ensuring that national security agencies are putting the technologies to use.

“A failure to do this, a failure to take advantage of this leadership and adopt this technology … could put us at risk of a strategic surprise by our rivals, such as China,” said the official, who was not identified in the transcript of the meeting. “And as you all know, there are very clear national security applications of artificial intelligence, including in areas like cybersecurity and counterintelligence, not to mention the broad array of logistics and other activities that support military operations.”

The official added that “because countries like China recognize similar opportunities to modernize and revolutionize their own military and intelligence capabilities using artificial intelligence, it’s particularly imperative that we accelerate our national security community’s adoption and use of cutting-edge AI capabilities to maintain our competitive edge.”

Government’s Expanding Role in AI

The federal government has worked to guide the safe and secure development of AI since last year, when Biden issued an executive order that, among other measures, directed the creation of a national security memorandum for AI, setting the stage for this week’s expansion of AI’s role in government.

The memorandum’s goals include ensuring that the country’s supply chain for AI chips is maintained, that the next generation of supercomputers is developed with AI in mind, and that government support is expanded to smaller companies rather than relying on a handful of large organizations.

NIST’s AI Safety Institute is now the IT industry’s government point of contact when it comes to AI, and a Framework to Advance AI Governance and Risk Management in National Security will be created to provide more guidance for implementing the memorandum’s directives.

Guidelines Are Needed

Melissa Ruzzi, director of AI at cybersecurity company AppOmni, applauded the memorandum, saying the actions listed are a strong starting point for getting the information needed to make decisions moving forward.

“AI already has implications for national security, as we know that more and more attackers are using AI to create higher volume and more complex attacks, especially in the social engineering and misinformation fronts,” Ruzzi said. “Cybersecurity of AI is crucial. We know that if AI is misconfigured, it can pose risks similar to misconfigurations in SaaS applications that cause confidential data to be exposed.”

Gabrielle Hempel, customer solutions engineer at cybersecurity company Exabeam, said the directive balances innovation with a healthy understanding of the need for guardrails around AI safety, security, and ethics, but that it lacks the structure for actual policy and regulation governing the technology’s use.

“With the rapid push for federal agencies to adopt AI, there are going to be gaps in oversight, vulnerabilities, and biases,” Hempel said. “With AI systems, it is hard to know and understand where data is going and how it is being used, especially with those that continually ingest data to continue learning. This can pose an extreme risk, especially when used in federal environments that are responsible for national security.”

A problem with AI-enabled tools is that their scope of use is constantly changing, which can threaten privacy and civil liberties if AI models are used for tasks such as surveillance and recognition or for biased decision-making. Clear ethical guidelines, regular audits of how data is collected and used, and transparency around that data will be needed, she said.

For Channel, Memo Means Order and Scale

The memorandum should be a boon for MSSPs and other channel partners that operate best with guidelines, according to Zeus Kerravala, principal analyst at ZK Research.

“The channel does well when there’s order,” Kerravala said. “When there’s order, they can grow their businesses and build best practices. AI has been the wild west – it still is the wild west, really. By bringing guidelines, the channel can focus more on [building services around] the development and usage of AI.”

Broad government focus on an emerging technology is also a sign of AI’s continued scaling, which usually means the technology is moving in the channel’s direction, with more vendors getting involved and more AI solutions being created.

“When dealing with a small set [of offerings], it’s 90% vendor and 10% channel,” he said. “The bleeding edge is always vendor-led. For the Fortune 1000, it’s vendor-driven. But for the next 1 million [organizations], it’s channel-led.”
