The Crypto Library Disaster

The first time someone writes (or contributes to, so this does not apply only to first systems) an application using low-level cryptography, they face a choice among two bad and two very bad solutions. This blog post explains the reasons for this recurring disaster and offers some suggestions for avoiding it.

The worst possible solution is of course to build a new crypto implementation from scratch. Not only is it a very long task, but there are many critical details that newcomers do not easily get right. For instance, when generating RSA keys it is not enough to pick two prime numbers of the right size: if the two primes are too close together (e.g., consecutive primes, of which there are infinitely many pairs), both lie near the square root of their product, so they can be recovered from that product almost instantly by Fermat's factorisation method. Note that this requirement is spelled out in the relevant NIST document (and similar documents); a big-integer library alone is simply not enough to build a good RSA implementation.
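To make the close-primes pitfall concrete, here is a minimal sketch (not a complete tool) of Fermat's factorisation method using GMP. The starting value is an arbitrary illustrative number, and the two primes are derived from it inside the sketch; real RSA moduli are of course much larger, but the attack is just as fast whenever the primes are close.

```c
/* Sketch: Fermat's factorisation of n = p*q when p and q are close.
 * Compile with: cc fermat.c -lgmp
 * The starting value below is arbitrary and only for illustration. */
#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t n, a, b, b2, p, q;
    mpz_inits(n, a, b, b2, p, q, NULL);

    /* Build n from two consecutive primes (the bad-key case). */
    mpz_set_str(p, "1000000000000000000000000000057", 10); /* arbitrary seed */
    mpz_nextprime(p, p);            /* first prime                          */
    mpz_nextprime(q, p);            /* the very next prime                  */
    mpz_mul(n, p, q);

    /* Start at a = ceil(sqrt(n)); when p and q are close, a is already
     * near (p + q) / 2, so the loop terminates almost immediately.      */
    mpz_sqrt(a, n);
    mpz_mul(b2, a, a);
    if (mpz_cmp(b2, n) < 0)
        mpz_add_ui(a, a, 1);

    for (;;) {
        mpz_mul(b2, a, a);
        mpz_sub(b2, b2, n);          /* b2 = a^2 - n                       */
        if (mpz_perfect_square_p(b2)) {
            mpz_sqrt(b, b2);
            mpz_add(p, a, b);        /* recovered p = a + b                */
            mpz_sub(q, a, b);        /* recovered q = a - b                */
            gmp_printf("p = %Zd\nq = %Zd\n", p, q);
            break;
        }
        mpz_add_ui(a, a, 1);
    }

    mpz_clears(n, a, b, b2, p, q, NULL);
    return 0;
}
```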

The second-worst solution is to use a bad crypto library. Note that I am a bit high church about what I call bad: by my personal criteria, a crypto implementation is bad when it was not designed for strong security use. For instance:

  • There is no reasonable cryptographic random generator (a sketch of what a reasonable one looks like follows this list).
  • There is no FIPS 140-2 certified version of the code (e.g., Botan is bad). I have mixed feelings about implementations which only claim to be FIPS 140-2 “ready,” as the reason they are not candidates for certification is not always explicit or clear.
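As referenced in the first bullet, here is a minimal sketch of what using a reasonable cryptographic random generator looks like at the API level, taking OpenSSL's RAND_bytes() as an example; the 32-byte key length is just an illustration.

```c
/* Sketch: drawing key material from a cryptographic random generator.
 * Compile with: cc rng.c -lcrypto
 * rand()/random() are NOT acceptable here: they are predictable PRNGs. */
#include <stdio.h>
#include <openssl/rand.h>

int main(void)
{
    unsigned char key[32];              /* e.g., a 256-bit symmetric key */

    /* RAND_bytes() returns 1 on success; treat anything else as fatal. */
    if (RAND_bytes(key, sizeof(key)) != 1) {
        fprintf(stderr, "cryptographic RNG unavailable\n");
        return 1;
    }

    for (size_t i = 0; i < sizeof(key); i++)
        printf("%02x", key[i]);
    printf("\n");
    return 0;
}
```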

So the choice is between a solution which supports crypto hardware (i.e., PKCS#11) and a solution which works well in software (i.e., OpenSSL):

  • The software/OpenSSL way (note I refer to OpenSSL because the alternatives in the open-source world are very likely based on OpenSSL): it is not so bad. OpenSSL is aggressively optimised for the common cases; heavily used algorithms (i.e., cryptographic protocols in the cryptographers’ terminology) are written in assembly on current platforms, security bugs are fixed as soon as they are known, and it covers almost everything one can need (a minimal signing sketch using its EVP interface follows this list). I have more concerns about the non-crypto parts, in particular the ASN.1 parser (or lack of one) and the multi-year delays in fixing non-crypto bugs. But the real problem with OpenSSL and similar software solutions is the support of crypto hardware: PKCS#11 engines are buggy and a nightmare to both debug and use. So this solution becomes less good when crypto hardware is available and bad when crypto hardware must be used. There is also a large community opposed to using pure software for the security core, on the grounds that software alone cannot really be protected; for instance, this argument constrains FIPS 140-2 certified software to Level 1 (out of 4) certification.
  • The hardware/PKCS#11 way (here it is simpler: the only hardware-independent generic API is PKCS#11): the idea is to use the PKCS#11 application programming interface directly. This raises two real-world issues. First, every PKCS#11 provider (the piece of software that sits between the application and the hardware and presents the PKCS#11 API on the application side) implements only part of the full PKCS#11 specification, so it is easy for an application to require something that a particular Hardware Security Module (HSM) does not support. Second, when you have no HSM you need a software one, but software HSMs were written to help debug PKCS#11 applications, not to be secure in themselves; it is not the best idea to add a layer of software in the security-critical (and sometimes performance-critical) path.
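As referenced in the software/OpenSSL item above, here is a minimal sketch of signing through OpenSSL's EVP interface; the key file name and message are placeholders, and error handling is reduced to the bare minimum.

```c
/* Sketch: RSA signing through OpenSSL's EVP interface (OpenSSL 1.1+).
 * Compile with: cc sign.c -lcrypto
 * "key.pem" and the message are placeholders. */
#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/pem.h>

int main(void)
{
    const unsigned char msg[] = "example message";
    FILE *fp = fopen("key.pem", "r");
    if (fp == NULL)
        return 1;
    EVP_PKEY *pkey = PEM_read_PrivateKey(fp, NULL, NULL, NULL);
    fclose(fp);
    if (pkey == NULL)
        return 1;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    size_t siglen = 0;
    unsigned char *sig = NULL;
    int ok = 0;

    /* One-shot digest-and-sign: SHA-256, RSA PKCS#1 v1.5 by default.
     * The first DigestSignFinal call only queries the signature size. */
    if (ctx != NULL &&
        EVP_DigestSignInit(ctx, NULL, EVP_sha256(), NULL, pkey) == 1 &&
        EVP_DigestSignUpdate(ctx, msg, sizeof(msg) - 1) == 1 &&
        EVP_DigestSignFinal(ctx, NULL, &siglen) == 1 &&
        (sig = OPENSSL_malloc(siglen)) != NULL &&
        EVP_DigestSignFinal(ctx, sig, &siglen) == 1)
        ok = 1;

    if (ok)
        printf("signature: %zu bytes\n", siglen);

    OPENSSL_free(sig);
    EVP_MD_CTX_free(ctx);
    EVP_PKEY_free(pkey);
    return ok ? 0 : 1;
}
```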

What to do now? The software solution is a dead end unless one is sure no HSM will ever be used, and there are at least three reasons to get crypto hardware someday: first, hardware is considered to be intrinsically more secure, so it will be required in some environments; second, there are situations where hardware is simply better, for instance random number generation (by definition a software random generator is a pseudo-random one) and key store protection; and finally, in some deployments it is believed (even without a security risk analysis to support it) that an HSM is an essential part of the security.

So the right thing is to begin with PKCS#11. This adds some constraints (a single initialisation, sessions, separate sign/verify contexts, etc.), but IMHO most of these constraints lead to better code. For instance, I believe a unified sign/verify context (vs. different context types for sign and verify) is a bad design: it ignores the difference between a public and a private key. The next step is to make the interface more generic so one can plug in any crypto provider, whether PKCS#11 or a software library (any library, once a good one is supported). One way to do this is to squeeze out the PKCS#11 handling from the SoftHSMv2 implementation, so you end up with code that works either through PKCS#11 or through the Botan and OpenSSL backends of SoftHSMv2. Another benefit is that the code can then be improved to accept a FIPS 140-2 certified crypto implementation following the required guidelines, so it can claim to use an embedded FIPS 140-2-validated cryptographic module running per FIPS 140-2 Implementation Guidance section X.Y guidelines. Two short sketches follow: the PKCS#11 calling sequence for a signature, and a hypothetical generic provider interface.
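To make the constraints above concrete (a single initialisation, sessions, a dedicated sign context), here is a minimal sketch of the PKCS#11 calling sequence for one signature. The slot id, PIN, key label, and mechanism are placeholders; real code would load the provider with dlopen() and C_GetFunctionList(), and check every CK_RV.

```c
/* Sketch: the PKCS#11 call sequence for a single RSA signature.
 * The platform macros below are the standard Cryptoki boilerplate the
 * OASIS pkcs11.h header expects. Error handling is omitted on purpose. */
#define CK_PTR *
#define CK_DECLARE_FUNCTION(returnType, name) returnType name
#define CK_DECLARE_FUNCTION_POINTER(returnType, name) returnType (* name)
#define CK_CALLBACK_FUNCTION(returnType, name) returnType (* name)
#ifndef NULL_PTR
#define NULL_PTR 0
#endif
#include "pkcs11.h"

CK_RV sign_once(CK_SLOT_ID slot, CK_UTF8CHAR *pin, CK_ULONG pin_len,
                CK_BYTE *data, CK_ULONG data_len,
                CK_BYTE *sig, CK_ULONG *sig_len)
{
    CK_SESSION_HANDLE session;
    CK_OBJECT_HANDLE key;
    CK_ULONG count = 0;
    CK_OBJECT_CLASS cls = CKO_PRIVATE_KEY;
    CK_UTF8CHAR label[] = "signing-key";             /* placeholder label */
    CK_ATTRIBUTE tmpl[] = {
        { CKA_CLASS, &cls, sizeof(cls) },
        { CKA_LABEL, label, sizeof(label) - 1 },
    };
    CK_MECHANISM mech = { CKM_SHA256_RSA_PKCS, NULL_PTR, 0 };
    CK_RV rv;

    C_Initialize(NULL_PTR);                          /* once per process  */
    C_OpenSession(slot, CKF_SERIAL_SESSION, NULL_PTR, NULL_PTR, &session);
    C_Login(session, CKU_USER, pin, pin_len);

    /* Locate the private key by its template. */
    C_FindObjectsInit(session, tmpl, 2);
    C_FindObjects(session, &key, 1, &count);
    C_FindObjectsFinal(session);
    if (count == 0)
        return CKR_KEY_HANDLE_INVALID;

    /* Sign contexts are separate from verify contexts in PKCS#11. */
    C_SignInit(session, &mech, key);
    rv = C_Sign(session, data, data_len, sig, sig_len);

    C_Logout(session);
    C_CloseSession(session);
    C_Finalize(NULL_PTR);
    return rv;
}
```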
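And here is one hypothetical way the "more generic interface" could look in C: a small table of function pointers that a PKCS#11 backend or a software backend (for instance the SoftHSMv2 Botan/OpenSSL code) could fill in. All names are invented for illustration; note that sign and verify are kept as separate entry points, echoing the public/private key distinction discussed above.

```c
/* Sketch: a hypothetical provider abstraction that either a PKCS#11
 * backend or a software backend could implement. All names invented. */
#include <stddef.h>

struct crypto_key;                      /* opaque, backend-specific */

struct crypto_provider {
    int  (*init)(void);                             /* once per process    */
    void (*finish)(void);
    int  (*random)(unsigned char *buf, size_t len); /* CSPRNG only         */
    int  (*sign)(struct crypto_key *priv,           /* private-key context */
                 const unsigned char *data, size_t data_len,
                 unsigned char *sig, size_t *sig_len);
    int  (*verify)(struct crypto_key *pub,          /* public-key context  */
                   const unsigned char *data, size_t data_len,
                   const unsigned char *sig, size_t sig_len);
};

/* Each backend would export one such table, e.g.
 *   extern const struct crypto_provider pkcs11_provider;
 *   extern const struct crypto_provider openssl_provider;
 * and the application would select one at start-up. */
```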
