Work on quantum computers is accelerating as developers grow more confident that they will be able to address problems that are intractable with classical computing. That may be good news in some contexts, but bad news for cryptography.
The concern is that quantum computers will crack the cryptographic schemes that protect our online lives, financial systems, and communications networks. An algorithm to do so has been around since 1994, awaiting the development of a quantum computer upon which it can run.
The challenge in cryptography is to find a way in which two entities can communicate securely over a public channel. This is easy to do if the entities can meet and share a secret for use as the basis of a coding scheme. It’s more difficult if the two entities never meet, and so cannot share a secret in this way. Public key schemes address the issue by making it possible to encrypt a message using a public key that anyone can access, but then requiring a private key to decrypt the message. The trick to these schemes is to base them upon a mathematical relationship between the public key and the private key that makes it very difficult for an eavesdropper to work out the private key or the message.
RSA, one of the best-known such schemes, exploits the fact that multiplying two very large prime numbers together is easy, but recovering those primes from the resulting integer is hard: the straightforward attack is to divide the integer by one candidate after another until a prime factor is revealed, and with large enough primes that search takes impossibly long.
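A minimal Python sketch (a toy of my own, not taken from the article) makes the point: trial division finds the factors instantly at toy sizes, but real RSA primes have hundreds of digits, putting this search far out of reach.

```python
def trial_division_factor(n):
    """Return (p, q) with p * q == n by trying divisors in order."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1  # n itself is prime


# 61 and 53 stand in for "very large primes"; their product is the modulus.
n = 61 * 53
p, q = trial_division_factor(n)
assert p * q == n
print(p, q)  # 53 61 -- found almost instantly at this size
```

The loop runs roughly √n times, so doubling the number of digits in the primes squares the work: harmless here, hopeless at cryptographic sizes.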
In RSA, an entity (Bob) uses the very large integer as his public key and keeps the two prime factors used to produce it as his secret key. To send a message to Bob, Alice encrypts it using Bob’s public key in such a way that it can only be decrypted by knowing the prime factors. Bob can decrypt the message sent to him, but an eavesdropper can only crack the code by factorizing the public key, which, as previously discussed, can’t be done efficiently on today’s classical computers, provided that the prime factors are large enough. (See Part 1 of “Quantum computing’s threat to cryptography”)
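To make the scheme concrete, here is a toy RSA round trip in Python with deliberately tiny primes (an illustrative sketch only; real deployments use enormous primes plus a padding scheme such as OAEP):

```python
# Toy RSA: Bob's key generation, Alice's encryption, Bob's decryption.
p, q = 61, 53                  # Bob's secret primes
n = p * q                      # public modulus (3233)
phi = (p - 1) * (q - 1)        # 3120, computable only with p and q
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: e * d ≡ 1 (mod phi)

message = 65                   # Alice's message, encoded as an integer < n
ciphertext = pow(message, e, n)    # Alice encrypts with the public (n, e)
recovered = pow(ciphertext, d, n)  # Bob decrypts with his private d

assert recovered == message
```

An eavesdropper who could factorize n back into 61 and 53 could recompute phi and d, which is exactly why the factoring problem underpins the scheme’s security.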
Back in 1994, Peter Shor developed an algorithm that future quantum computers could use to factorize large integers much more quickly than classical computers. Worse, the Quantum Fourier Transform used as the basic building block of Shor’s algorithm can be adapted to attack other common cryptographic approaches, such as Diffie-Hellman key exchange and elliptic-curve digital signatures.
So, the race is on to find new classes of ‘easy questions with hard answers’, computational problems that will run on classical computers and yet resist attacks based on the peculiar properties of quantum computers. Most of these post-quantum cryptography (PQC) schemes fall into one of four categories: lattice, code-based, multivariate, and isogeny-based strategies.
The most popular of these is lattice cryptography. A lattice is a regular grid of points in space whose positions are defined by the lattice’s basis, a set of vectors that controls how you can move between lattice points.
A lattice with two different bases
Research on lattices suggests that they can be used as the foundation of computationally difficult problems that will resist quantum attacks. The classic example is the shortest vector problem, in which a solver is given the basis for a lattice and then asked to find the lattice point, other than the origin itself, that lies closest to the origin. This is easy for small lattices, as depicted above, but becomes very difficult when the lattice is defined in hundreds of dimensions.
Lattice problems may be useful for PQC because the same lattice can be defined by more than one basis, and some bases are easier to work with than others. For example, in the diagram above all the lattice points can be reached by starting at 0 and then progressing along various combinations of either the red or the green lines (which are the vectors of the two different bases).
Using the green basis, for which the vectors are short and nearly orthogonal, you can quickly explore the shape of the lattice and then read off the shortest vector. Using the red lines, however, it is much more difficult to find points close to 0, and hard to verify that there’s no closer point when you have done so. It is like mapping a subway network: with the green basis you may only move one stop at a time, north or north-north-west; with the red basis you must move four stops at a time, north-east-east or east-north-east. It’s just harder to complete the map under the red basis’s rules.
Lattice-based cryptosystems use a ‘bad’ basis as a public key and a ‘good’ basis as a private key. The bad basis describes the lattice well enough to enable users to hide the message within it as a hard problem. Only the intended recipient can then solve this problem easily, because finding the solution requires access to the privately held good basis. Lattice cryptography is flexible, because there are lots of ways to define a lattice and hide points within it. This approach is already being used in encryption and digital signatures.
The second approach is code-based cryptography, which borrows techniques used to transfer information over an unreliable channel such as a noisy radio link. Such communications strategies usually use an error-correcting code to encode the data with enough extra information to enable the recipient to recover the message. The approach is applied to cryptography by defining an error-correcting scheme as a private key and using an agreed encoding scheme as the corresponding public key. To encrypt a message, the sender encodes the message with the agreed approach and then introduces errors to simulate an unreliable channel. As long as enough errors are introduced, anyone who lacks the private error-correcting code cannot recover the message. This approach is well-regarded from a security point of view but uses very large public keys that make for slow implementations.
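The error-correcting machinery these schemes build on can be illustrated with the simplest possible code (a toy repetition code of my own, not a real code-based cryptosystem such as McEliece): each bit is repeated five times, and the receiver recovers it by majority vote even after deliberate bit-flips.

```python
def encode(bits, reps=5):
    """Repeat each message bit `reps` times."""
    return [b for bit in bits for b in [bit] * reps]

def decode(codeword, reps=5):
    """Recover each bit by majority vote over its block."""
    return [int(sum(codeword[i:i + reps]) > reps // 2)
            for i in range(0, len(codeword), reps)]

message = [1, 0, 1, 1]
noisy = encode(message)
for i in [0, 3, 7, 16]:        # deliberate errors: at most two per 5-bit block
    noisy[i] ^= 1

assert decode(noisy) == message   # majority vote corrects every block
```

In the cryptographic setting, the sender plays the role of the noisy channel: the introduced errors make the transmission useless to anyone who lacks the private decoding procedure, while the legitimate recipient strips them out effortlessly.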
Multivariate cryptography is based on the challenge of solving systems of multivariate polynomial equations, which look like this:
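A small representative system (an illustrative example chosen for this sketch, not one with any cryptographic significance) in three unknowns:

```latex
\begin{aligned}
x^2 + 3xy + y^2 z &= 7 \\
2xz^2 + y^2 + 5yz &= 12 \\
x^2 y + 4z^2 + xyz &= 9
\end{aligned}
```

Checking a proposed solution means simply plugging the values in, but finding one means untangling every equation at once; real schemes use many more variables and equations, typically over finite fields.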
This approach has the useful property that it is an ‘NP-Complete’ problem, which is crypto-speak for “really very hard indeed and unlikely to be cracked even by quantum computing approaches”. Multivariate cryptography is developing quickly and although current implementations compare poorly to lattice-based schemes, it may still be useful for signature schemes.
The latest approach to cryptography is based on an isogeny, a type of structured mapping between two elliptic curves like those shown in the diagram below:
An elliptic curve in two parts
Returning to public-key fundamentals for a moment, we want to find mathematical functions which are easy to work forwards, from inputs to answer, but difficult or even impossible to work backwards, from answer to inputs. This can be done using elliptic curves. Given two elliptic curves E1 and E2, it is hard to discover the process (the isogeny) that maps from E1 to E2. Knowing this we can use a pair of elliptic curves as a public key, and then keep the isogeny, i.e., the technique for mapping from E1 to E2, as the private key.
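Computing actual isogenies needs far more machinery than fits in a short sketch, but the “easy forwards, hard backwards” asymmetry itself can be illustrated with modular exponentiation (a stand-in from classical discrete-log cryptography, not an isogeny):

```python
# Forwards is one cheap pow() call; backwards is a brute-force search
# over every possible exponent.
p, g = 2_147_483_647, 5        # a Mersenne prime modulus and a small base
secret = 1_234_567             # the "private" input
public = pow(g, secret, p)     # forwards: fast, even at this size

def recover_secret(target):
    """Backwards: try exponents one by one until one matches
    (hopeless at real cryptographic sizes)."""
    acc = 1
    for x in range(1, p):
        acc = acc * g % p
        if acc == target:
            return x

# recover_secret(public) would eventually succeed, but only after on the
# order of a billion multiplications; at cryptographic sizes the search
# outlives the universe.
```

Isogeny-based schemes aim for the same shape of asymmetry: applying the secret mapping from E1 to E2 is easy, while reconstructing it from the public curves alone is believed to be intractable, even for quantum computers.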
Now, whoever has the isogeny can use it to compute operations involving E1 and E2 that are intractable for anyone without access to it. As usual, the public pair E1 and E2 are used to set up these problems. Although isogeny-based cryptography is in its infancy, the cryptographic community is excited about its potential following successful experiments using isogeny-based schemes in internet protocols.
Quantum computers are yet to challenge the cryptographic schemes that we rely upon to protect our online lives but may soon do so. The good news is that cryptographers are already working on a menagerie of new mathematics to protect our privacy in the quantum-computing age.
Dr. Charlie Grover is a cryptography researcher at Crypto Quantique, where he is driving performance improvements in securely extracting entropy, or randomness, from Physical Unclonable Functions (PUFs) in CMOS semiconductors.
His work is contributing to further development of the world’s most secure root-of-trust technology for microcontrollers and application-specific integrated circuits (ASICs), where identities and cryptographic keys are developed inside these silicon devices on-demand, eliminating the need for key injection, key storage, or third-party involvement. He holds a PhD in Electrical and Electronic Engineering from Imperial College London, where he worked on lattice-based cryptography and other aspects of post-quantum cryptography.