
Securing the Quantum Era: What NIST’s New Encryption Standards Mean for Cybersecurity


COMMENTARY: The National Institute of Standards and Technology (NIST) recently released the first set of Federal Information Processing Standards (FIPS) for post-quantum cryptography (PQC). These standards offer a framework for safeguarding sensitive information against potential threats from quantum computers, which could one day break existing encryption protocols. Institutions worldwide, including government bodies and large financial organizations, now have a foundation for implementing these PQC standards to future-proof their systems. The standards can also help MSSPs as they implement cybersecurity systems and advise their customers on PQC protection. But how certain are we that these standards can withstand quantum attacks? The answer is more complex than it appears.

The Quantum Threat to Classical Cryptography

Today’s cryptographic systems rely on mathematical problems that are extremely difficult for classical computers to solve. These problems, like factoring large integers or solving discrete logarithms, serve as the basis for widely used encryption methods that secure everything from online banking transactions to private messages. However, in 1994, researcher Peter Shor discovered an algorithm that could easily solve these problems—if run on a powerful enough quantum computer. Shor’s algorithm demonstrated that many existing cryptographic protocols would be vulnerable if quantum computing technology advances to a certain level.
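
To make that reliance concrete, here is a minimal, purely illustrative “textbook RSA” sketch in Python with tiny numbers. It is not how real RSA is deployed, but it shows how the private key is protected only by the difficulty of factoring the public modulus.

```python
# Toy "textbook RSA" with tiny numbers, for illustration only; real RSA uses
# 2048-bit-plus moduli, padding and constant-time arithmetic.
from math import gcd

p, q = 61, 53                # tiny primes for illustration
n = p * q                    # public modulus
phi = (p - 1) * (q - 1)      # Euler's totient of n
e = 17                       # public exponent, coprime to phi
assert gcd(e, phi) == 1
d = pow(e, -1, phi)          # private exponent: e * d == 1 (mod phi)

msg = 42
ciphertext = pow(msg, e, n)           # encrypt: msg^e mod n
assert pow(ciphertext, d, n) == msg   # decrypt: ciphertext^d mod n

# Recovering d from the public values (n, e) alone requires factoring n into
# p and q. That is believed infeasible classically at real key sizes, but
# Shor's algorithm on a large enough quantum computer would do it efficiently.
```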

This realization led to the development of PQC, an emerging field focused on creating encryption methods that even a quantum computer would struggle to break. The NIST PQC standards, including algorithms like ML-KEM and ML-DSA, rely on mathematical problems in module lattices that are believed to resist quantum attacks. NIST cannot guarantee that these lattice-based problems will remain safe from quantum breakthroughs in the distant future, but we have a fair amount of confidence in their quantum hardness: researchers have tried for decades to break these lattice problems and have repeatedly failed.
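
For a feel for the kind of problem involved, below is a toy Learning With Errors (LWE) instance in Python with illustrative, insecure parameters. The standardized schemes use a structured “module” variant with far larger parameters, so treat this as a sketch of the idea rather than the actual construction.

```python
# Toy Learning With Errors (LWE) instance; parameters are illustrative and far
# too small to be secure. ML-KEM and ML-DSA use a structured "module" variant
# of this problem with much larger, carefully chosen parameters.
import numpy as np

rng = np.random.default_rng(0)
n, m, q = 8, 16, 97                    # toy dimension, sample count, modulus

s = rng.integers(0, q, size=n)         # secret vector
A = rng.integers(0, q, size=(m, n))    # public uniformly random matrix
e = rng.integers(-2, 3, size=m)        # small "error" (noise) vector
b = (A @ s + e) % q                    # public vector: b = A.s + e (mod q)

# The hard problem: given only (A, b), recover s, or even just distinguish b
# from a uniformly random vector. Without the noise e this is plain linear
# algebra; the small noise is what is believed to make it hard, even for
# quantum computers.
print("public instance:", A.shape, b.shape)
```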

The Security Proof Behind the Standards

So, how can we assess the security of these standards? NIST’s PQC standards come with theoretical proofs that support their resistance to quantum attacks, provided certain assumptions hold. For example, the security proofs for ML-KEM and ML-DSA assume that the underlying lattice problems are difficult for quantum computers to solve. This approach, known as “proof by reduction,” shows that if an attacker could break ML-KEM or ML-DSA, they could also solve the underlying lattice problem, which is assumed to be quantum hard.
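
Schematically, and glossing over the precise statements in the standards, such a reduction yields a bound of roughly the following shape: any adversary A against ML-KEM can be turned into a solver B for the underlying module-lattice problem (commonly phrased as Module Learning With Errors, MLWE), with a term δ collecting the small extras the proof accrues.

```latex
\[
  \mathrm{Adv}^{\text{IND-CCA}}_{\text{ML-KEM}}(\mathcal{A})
  \;\le\;
  \mathrm{Adv}^{\text{MLWE}}(\mathcal{B}) \;+\; \delta
\]
```

If the left-hand side were noticeably large, the right-hand side would have to be as well, contradicting the assumption that MLWE is hard for quantum computers.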

The catch here is that these proofs are only as strong as the assumptions they rely on. While the current consensus in the cryptographic community is that these lattice problems are secure, there’s no way to definitively prove this. In other words, if someone in the future finds an efficient quantum algorithm for these lattice problems, the security of these PQC standards would likely be compromised.

The Role of Hash Functions and the Random Oracle Model

One key component in many cryptographic schemes, including PQC, is the hash function, a mathematical function that maps any input to a fixed-size string. In most cryptographic applications, hash functions must be collision-resistant, meaning it should be infeasible to find two inputs that produce the same output. The NIST PQC standards use hash functions for several purposes, such as compressing data and hedging against poor system randomness, but the security proofs for these standards make a stronger assumption: that hash outputs are perfectly random. This assumption, known as the “random oracle model” (ROM), is a theoretical construct used to simplify security proofs.
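
As a small illustration of those properties, the snippet below uses SHA3-256 from Python's standard hashlib module: every input maps to a fixed-size digest, and collision resistance means finding two different inputs with the same digest should be infeasible.

```python
# Minimal illustration of hash-function behaviour, using SHA3-256 from
# Python's standard hashlib module (SHA3-256 is one of the hash functions
# the NIST PQC standards rely on).
import hashlib

h1 = hashlib.sha3_256(b"post-quantum cryptography").hexdigest()
h2 = hashlib.sha3_256(b"post-quantum cryptography!").hexdigest()

print(h1)  # fixed 256-bit digest (64 hex characters), whatever the input size
print(h2)  # a one-character change in the input gives an unrelated digest

# Collision resistance: distinct inputs with the same digest must exist
# (inputs are unlimited, outputs are fixed-size), yet actually finding such
# a pair should be computationally infeasible.
```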

The ROM imagines a scenario where all parties involved have access to a “random oracle,” a hypothetical black box that provides random, consistent hash values for each unique input. This model allows cryptographers to create security proofs that assume the hash function is ideally random, but it’s an assumption that doesn’t perfectly translate to the real world. In practice, real-world hash functions like SHA3-256, which is used in the NIST standards, don’t produce truly random outputs. While security proofs using the ROM strongly indicate that a scheme is secure, they aren’t foolproof.
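
A common way to picture the oracle is the “lazy sampling” technique used in ROM proofs: the answer for each new input is drawn uniformly at random and then remembered. The Python sketch below simulates that idea; it is an idealisation, not a stand-in for a real hash function.

```python
# Minimal simulation of a random oracle via "lazy sampling": answer each new
# input with fresh uniform randomness, and replay the same answer on repeat
# queries. This is the idealisation used in ROM proofs, not a substitute for
# a real, fixed hash function such as SHA3-256.
import os

_table = {}

def random_oracle(message: bytes, out_len: int = 32) -> bytes:
    if message not in _table:            # first query: sample a fresh random output
        _table[message] = os.urandom(out_len)
    return _table[message]               # later queries: return the same output

assert random_oracle(b"hello") == random_oracle(b"hello")   # consistent
assert random_oracle(b"hello") != random_oracle(b"world")   # equal only with negligible probability
```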

Adapting to Quantum Capabilities: The Quantum Random Oracle Model

In the post-quantum world, a quantum adversary might be able to take advantage of a hash function’s properties in ways that classical attackers cannot. For example, a quantum attacker could evaluate a hash function like SHA3-256 in a quantum superposition, speeding up their ability to find collisions and potentially compromising the scheme’s security. The classical ROM does not fully address this scenario.
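
For a rough sense of scale, the generic cost of finding a collision in an n-bit hash drops from the classical birthday bound to the quantum (Brassard–Høyer–Tapp) bound shown below; for SHA3-256 (n = 256), that is roughly 2^128 versus 2^85 hash evaluations, at least in the idealized query-counting model.

```latex
\[
  \underbrace{\Theta\!\bigl(2^{n/2}\bigr)}_{\text{classical birthday bound}}
  \quad\longrightarrow\quad
  \underbrace{\Theta\!\bigl(2^{n/3}\bigr)}_{\text{quantum (BHT) collision search}}
  \qquad \text{queries for an $n$-bit hash, e.g. } n = 256 .
\]
```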

Researchers developed an enhanced model called the Quantum Random Oracle Model (QROM) to account for this. The QROM allows security proofs to consider attackers with quantum capabilities who can access the random oracle in a quantum manner. Cryptographers can more accurately assess how these schemes would perform against a quantum attacker by simulating this access in security proofs. While this model is still theoretical, it represents an essential step toward understanding and ensuring the security of PQC standards in the face of quantum threats.
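
Concretely, the QROM models quantum access as a unitary oracle that the attacker may query on superpositions of inputs rather than on one classical input at a time, as sketched below.

```latex
\[
  O_H : \; \lvert x \rangle \lvert y \rangle \;\longmapsto\; \lvert x \rangle \lvert y \oplus H(x) \rangle ,
  \qquad \text{extended linearly to superpositions } \textstyle\sum_{x,y} \alpha_{x,y} \lvert x \rangle \lvert y \rangle .
\]
```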

Moving Forward with PQC and Quantum-Secure Cryptography

The challenges of proving PQC standards secure in both the ROM and QROM highlight the complexities of ensuring cryptographic resilience in a post-quantum world. Current security proofs in these models provide strong indications of the PQC standards' robustness, but they are not guarantees. The ROM and QROM serve as heuristics, or best estimates, that help identify potential weaknesses. They allow cryptographers to address possible vulnerabilities in the design of cryptographic schemes, so that any remaining flaws are more likely to lie in the concrete hash functions than in the broader structure of the cryptographic protocol.

Despite these models' limitations, NIST’s PQC standards represent a significant milestone in preparing for a quantum future. While researchers will continue to search for weaknesses and explore alternative solutions, the PQC standards provide institutions with valuable tools for protecting sensitive data from both current and future threats.


Varun Maram

Varun Maram is a postdoctoral fellow at SandboxAQ, a B2B company delivering solutions at the intersection of AI and quantum techniques. The company’s Large Quantitative Models (LQMs) are used in life sciences, financial services, navigation, and other sectors. SandboxAQ emerged from Alphabet Inc. as an independent company backed by a growth capital round of $500 million, funded and advised by T. Rowe Price Associates, Inc., IQT, US Innovative Technology Fund, Eric Schmidt, Breyer Capital, Guggenheim Partners, Marc Benioff, Thomas Tull, Paladin Capital Group, and others.
