A couple of weeks ago I tweeted asking “what would it take to replace OpenSSL?” You can see my tweet thread and responses here. I phrased my question poorly: I didn’t make it clear that the intent was to replace OpenSSL within an organization that already has a lot of software (including open source) that uses OpenSSL. Unfortunately, that rendered a lot of excellent comments, notably around using a memory-safe language, irrelevant. Sorry, my fault.
The OpenSSL API is the de facto standard within the open source community. The two major forks of OpenSSL recognize this:
- When Google’s BoringSSL has a missing function, or one they implemented differently, they try to work with the open source project to get the change adopted (or at least ifdef’d). If this isn’t possible, and if nobody within Google will take on the responsibility for the change, they adopt what OpenSSL did.
- OpenBSD’s LibreSSL initially did not want to add all the setter and accessor functions. Their reasoning was, well, reasonable: since OpenSSL added functions for every field in the struct, you’re not really making things opaque, just harder on the programmers, with a bigger API surface. Since I was part of OpenSSL at the time, I know our interest was in being able to add fields (and also move them around), as we had had problems with that before.
Ignoring documentation errors – the answer is always what the code does, not what the manpage says – it is also important to point out that of OpenSSL’s nearly 6,000 APIs, more than one-fourth are undocumented. The project’s policy is no new undocumented functions, which is good, but there doesn’t seem to be any effort to clear out that technical debt. Your staff probably has a great deal of internal knowledge of OpenSSL’s undocumented quirks, without even realizing it.
If your use of OpenSSL is confined to a single program, perhaps embedded, with minimal interaction with open source software, you might not care about this.
FedRAMP is the certification required to provide cloud-based services to the US Federal Government, and FedRAMP requires FIPS 140. So even if most Internet companies don’t care about FIPS directly, they are likely to care about FedRAMP, as the revenue there is measured in the hundreds of millions overall. I have no inside information, but I would be very surprised if OpenSSL weren’t inside most of the NIST-validated modules.
OpenSSL was the first “vendor” to provide an open source freely available FIPS module. The most recent major release, OpenSSL 3.0, includes a new module that is just finishing up validation. One nice thing is that the new module is a standard shared library, so as long as the applicable runtime hooks are maintained, this module could be used inside a separate runtime, or a fork of OpenSSL.
If you use something other than OpenSSL 3, you need to consider the importance of a FIPS provider. Another option is to use validated hardware; one possible concern there is hardware integrated through OpenSSL’s “ENGINE” interface, which is now deprecated. Either way, the solution must include a FIPS provider, and it must be able to handle both FIPS and non-FIPS TLS connections in the same process.
Many users have made modifications to OpenSSL. At my employer, for example, we have a couple of dozen changes. Most of them are internal, but some are open Pull Requests, and the QUIC support has become very popular. If you leave OpenSSL, you have to consider such changes. Some are likely to be already present, such as fine-grained cipher configuration, controlling the contents of session tickets and the like. Some are less likely, such as the ability to “hook into” the state machine to process SNI or other extensions.
Nothing can be done without source access, and it is important to understand what modifications you can make, if any, that don’t invalidate the provided FIPS module.
There is no doubt that more people have reviewed the OpenSSL source base than any other equivalent library. Note that this measures only the number, not the quality, of the reviews. But any move to a different library needs to take this into account.
Before changing libraries, you should consider how security reviews are done and how often, and perhaps by whom.
And also don’t discount the difference in “notification velocity” when a bug, particularly a CVE, is found. Most libraries would be hard-pressed to match OpenSSL’s speed in that area.
There are many standards and compliance organizations involved in cryptography. Some are like NIST, actively evaluating new cryptographic algorithms, and some are like PCI, defining requirements for managing credit/debit cards securely.
In general, one of the most important organizations for moving forward the use of cryptography on the Internet, is the IETF. They are the home for TLS, best-practice uses of it, definition of how to do Elliptic Curve in TLS and for signatures, and so on.
For a few years, OpenSSL had a policy of following IETF standards. This seems to have become less important, and it is noticeably harder to get the attention of the project team to review pull requests. My guess for this is that first they were “distracted” by the FIPS work, and now they seem to be heads-down intent on providing their own implementation of QUIC. That’s unfortunate. There are a handful of things that I know customers — you know, the folks with money — want:
- Encrypted Client Hello, masking the SNI extension for more privacy
- Raw Public Keys, such as used by Apple’s Private Relay
- Support for existing, full-featured industrial-strength QUIC libraries
- Certificate compression, reducing the size of the TLS handshake
Governance, Transparency, Participation
Suppose you find a supplier that can address all of the issues; now what? Is it an open source project? Is it transparent and open to all participants? Is it a company that will let — or better, help — you make changes that you need, or are their timetable and priorities in sync with your needs?
OpenSSL used to meet all of these. It no longer does, and that’s sad, but it’s why I’m asking these questions.