This adds three feature flags: SHA1CSRs, OldTLSOutbound, and
OldTLSInbound. Each controls the behavior of an upcoming deprecation
(except OldTLSInbound, which isn't yet scheduled for a deprecation
but will be soon). Note that these feature flags take advantage of
`features`' default values, so they default to "true" (that is, each
of these features is enabled by default), and we set them to "false"
in the config JSON to turn them off when the time comes.
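As a rough illustration of that pattern, here is a minimal sketch of a
`features`-style package with default-true flags that a config JSON can
override; the declarations below are illustrative, not Boulder's actual
`features` code.

```go
package features

import "sync"

// Illustrative flag set: each flag carries a default, and values parsed
// from the config JSON can override it. SHA1CSRs, OldTLSOutbound, and
// OldTLSInbound default to true, so behavior is unchanged until a config
// explicitly sets them to false.
var (
	mu    sync.RWMutex
	flags = map[string]bool{
		"SHA1CSRs":       true,
		"OldTLSOutbound": true,
		"OldTLSInbound":  true,
	}
)

// Set applies overrides parsed from the config JSON.
func Set(overrides map[string]bool) {
	mu.Lock()
	defer mu.Unlock()
	for name, val := range overrides {
		flags[name] = val
	}
}

// Enabled reports whether a flag is on, falling back to its default.
func Enabled(name string) bool {
	mu.RLock()
	defer mu.RUnlock()
	return flags[name]
}
```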
The unit test for OldTLSOutbound requires that `example.com` resolve
to 127.0.0.1, because there's logic in the VA that checks that
redirected-to hosts end in an IANA TLD. The test relies on a redirect,
and we can't use e.g. `localhost` in it because of that TLD check, so
we use example.com.
Fixes #6036 and #6037
Simplify the WFE `RevokeCertificate` API method in three ways:
- Remove most of the logic checking if the requester is authorized to
revoke the certificate in question (based on who is making the
request, what authorizations they have, and what reason they're
requesting). That checking is now done by the RA. Instead, simply
verify that the JWS is authenticated.
- Remove the hard-to-read `authorizedToRevoke` callbacks, and make the
`revokeCertBySubscriberKey` (nee `revokeCertByKeyID`) and
`revokeCertByCertKey` (nee `revokeCertByJWK`) helpers much more
straight-line in their execution logic.
- Call the RA's new `RevokeCertByApplicant` and `RevokeCertByKey` gRPC
methods, rather than the deprecated `RevokeCertificateWithReg`.
This change, without any flag flips, should be invisible to the
end-user. It will slightly change some of our log message formats.
However, because it now relies on the new RA gRPC revocation methods, this
change allows us to change our revocation policies by enabling the
`AllowDoubleRevocation` and `MozRevocationReasons` feature flags, which
affect the behavior of those new helpers.
Fixes #5936
Add two new gRPC methods to the RA:
- `RevokeCertByKey` will be used when the API request was signed by the
certificate's keypair, rather than a Subscriber keypair. If the
request is for reason `keyCompromise`, it will ensure that the key is
added to the blocked keys table, and will attempt to "re-revoke" a
certificate that was already revoked for some other reason.
- `RevokeCertByApplicant` covers both the path where the original
Subscriber requests revocation via the API and the path where another
account which has proven control over all of the identifiers in the
certificate does so. It does not allow the requested reason to be
`keyCompromise`, as these requests do not represent a demonstration of
key compromise.
In addition, add a new feature flag `MozRevocationReasons` which
controls the behavior of these new methods. If the flag is not set, they
behave as they have historically (see above). If the flag is set to true,
then the new methods enforce the upcoming Mozilla policies around
revocation reasons, namely:
- Only the original Subscriber can choose the revocation reason; other
clients will get a set reason code based on the method of requesting
revocation. When the original Subscriber requests reason
`keyCompromise`, this request will be honored, but the key will not be
blocked and other certificates with that key will not also be revoked.
- Revocations signed with the certificate key will always get reason
`keyCompromise`, because we do not know who is sending the request and
therefore must assume that the use of the key in this way represents
compromise. Because these requests will always be for reason
`keyCompromise`, they will always be added to the blocked keys table
and they will always attempt "re-revocation".
- Revocations authorized via control of all names in the cert will
always get reason `cessationOfOperation`, which is to be used when the
original Subscriber does not control all names in the certificate
anymore.
Finally, update the existing `AdministrativelyRevokeCertificate` method
to use the new helper functions shared by the two new methods.
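Here is a sketch of the reason-code mapping the list above describes when
`MozRevocationReasons` is enabled. The constants are the standard RFC 5280
CRLReason values; the helper name, parameters, and control flow are
illustrative rather than the actual RA code.

```go
package ra

// RFC 5280 CRLReason codes used below.
const (
	reasonKeyCompromise        = 1
	reasonCessationOfOperation = 5
)

// mozRevocationReason returns the reason code to record, given how the
// revocation request was authorized. This is an illustrative helper, not
// the RA's real implementation.
func mozRevocationReason(requestedReason int, isOriginalSubscriber, signedByCertKey bool) int {
	switch {
	case signedByCertKey:
		// Anyone holding the private key can make this request, so treat it
		// as compromise; blocking the key and "re-revoking" (not shown) also
		// always happen on this path.
		return reasonKeyCompromise
	case isOriginalSubscriber:
		// The original Subscriber chooses the reason, including
		// keyCompromise, but that alone does not block the key or revoke
		// other certificates sharing it.
		return requestedReason
	default:
		// Authorized via control of all names in the cert, but not the
		// original account: record cessationOfOperation.
		return reasonCessationOfOperation
	}
}
```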
Part of #5936
Empty the bodies of the WFE's and RA's `NewAuthorization` methods. These
were used exclusively by the ACMEv1 flow. Also remove any helper functions
which were used exclusively by this code, and any tests which were testing
exclusively this code and which have equivalent tests for the ACMEv2 flow.
Greatly simplify `SA.GetAuthorizations2`, as it no longer has to contend with
there being two different kinds of authorizations in the database. Add a few
TODOs to consider removing a few other SA gRPC methods which no longer
have any callers.
Part of #5681
Add a new feature flag `GetAuthzUseIndex` which causes the SA
to add `USE INDEX (regID_identifer_status_expires_idx)` to its authz2
database queries. This should encourage the query planner to actually
use that index instead of falling back to large table-scans.
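Roughly, the flag just switches the query text. The sketch below shows the
idea; the column list and WHERE clause are simplified stand-ins, not the
SA's actual query.

```go
package sa

import "fmt"

// getAuthzQuery is an illustrative sketch: when the GetAuthzUseIndex flag
// is on, add a USE INDEX hint so the planner prefers the authz2 index over
// a table scan. The IN (?) placeholder is simplified for brevity.
func getAuthzQuery(useIndex bool) string {
	indexHint := ""
	if useIndex {
		indexHint = "USE INDEX (regID_identifer_status_expires_idx) "
	}
	return fmt.Sprintf(`SELECT id, identifierValue, status, expires
		FROM authz2 %s
		WHERE registrationID = ? AND identifierValue IN (?) AND status = ? AND expires > ?`,
		indexHint)
}
```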
Fixes #5822
Add a feature flag which causes the SA to switch between using the
traditional read-write database connector (pointed at the primary db)
or the newer read-only database connector (usually pointed at a
replica) when executing the `GetAuthorizations2` query.
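In effect the flag only picks which connection `GetAuthorizations2` runs
on. A sketch of that selection follows; the struct fields here are
illustrative, not the SA's real layout.

```go
package sa

import "database/sql"

// SQLStorageAuthority, trimmed down to the two connections relevant here.
type SQLStorageAuthority struct {
	dbMap         *sql.DB // read-write, pointed at the primary
	dbReadOnlyMap *sql.DB // read-only, usually pointed at a replica
}

// getAuthzDB returns the connection GetAuthorizations2 should use: the
// read-only replica when the feature flag is on, otherwise the primary.
func (ssa *SQLStorageAuthority) getAuthzDB(useReadOnly bool) *sql.DB {
	if useReadOnly && ssa.dbReadOnlyMap != nil {
		return ssa.dbReadOnlyMap
	}
	return ssa.dbMap
}
```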
Add a new feature flag to control whether or not the experimental ARI
information is exposed. Add a new entry to the Directory object which
provides the base URL for ARI requests. Add a new handler to the WFE
which parses incoming requests and returns reasonable renewalInfo.
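For context, the renewalInfo body at this stage has roughly the shape
below, following the early ARI draft; treat the exact field names as an
assumption rather than Boulder's final types.

```go
package wfe2

import "time"

// suggestedWindow is the interval during which the client is encouraged
// to renew the certificate.
type suggestedWindow struct {
	Start time.Time `json:"start"`
	End   time.Time `json:"end"`
}

// renewalInfo is the JSON body returned by the new ARI handler.
type renewalInfo struct {
	SuggestedWindow suggestedWindow `json:"suggestedWindow"`
}
```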
Part of #5674
Add a new method to the SA's gRPC interface which takes both an Order
and a list of new Authorizations to insert into the database, and adds
both (as well as the various ancillary rows) inside a transaction.
To enable this, add a new abstraction layer inside the `db/` package
that facilitates inserting many rows at once, as we do for the `authz2`,
`orderToAuthz2`, and `requestedNames` tables in this operation.
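The core of that abstraction is a multi-row INSERT. A simplified sketch of
the idea, using plain database/sql and illustrative column names rather
than the new `db/` interface:

```go
package db

import (
	"context"
	"database/sql"
	"strings"
)

// insertAuthzRows coalesces N inserts into one statement: build
// "(?, ?, ?), (?, ?, ?), ..." and execute a single INSERT inside the
// caller's transaction.
func insertAuthzRows(ctx context.Context, tx *sql.Tx, rows [][]interface{}) error {
	if len(rows) == 0 {
		return nil
	}
	placeholders := make([]string, len(rows))
	var args []interface{}
	for i, row := range rows {
		placeholders[i] = "(?, ?, ?)"
		args = append(args, row...)
	}
	query := "INSERT INTO authz2 (registrationID, identifierValue, expires) VALUES " +
		strings.Join(placeholders, ", ")
	_, err := tx.ExecContext(ctx, query, args...)
	return err
}
```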
Finally, add a new codepath to the RA (and a feature flag to control it)
which uses this new SA method instead of separately calling the
`NewAuthorization` method multiple times. Enable this feature flag in
the config-next integration tests.
This should reduce the failure rate of the new-order flow by coalescing
multiple inserts into a single multi-row insert, cutting down the number
of database operations. It should also reduce the incidence of new
authorizations being created in the database but then never exposed to
the subscriber because of a failure later in the new-order flow, both by
reducing failures overall and by adding those authorizations in a
transaction which will be rolled back if there is a later failure.
Fixes #5577
Make the `NonCFSSLSigner` code path the only code path through the CA.
Remove all code related to the old, CFSSL-based code path. Update tests
to supply (or mock) issuers of the new kind. Remove or simplify a few
tests that were testing for behavior only exhibited by the old code
path, such as incrementing certain metrics. Remove code from `//cmd/`
for initializing the CFSSL library. Finally, mark the `NonCFSSLSigner`
feature flag itself as deprecated.
Delete the portions of the vendored CFSSL code which were only used
by these deleted code paths. This does not remove the CFSSL library
entirely; the rest of the cleanup will follow shortly.
Part of #5115
This flag is now enabled in Let's Encrypt staging/prod.
This change deprecates the flag and prepares it for deletion in a future
change. It can then be removed once no staging/prod configs reference the
flag.
Fixes #5236
features.go contains some feature flags that no longer control any Boulder
behavior. They have been removed from Let's Encrypt staging and prod
configs and can now be removed from Boulder.
This change removes the deprecated feature flags.
Currently, the CA is configured with a set of `internalIssuer`s,
and a mapping of public key algorithms (e.g. `x509.RSA`) to which
internalIssuer to use. In operation today, we use the same issuer
for all kinds of public key algorithms. In the future, we will use
different issuers for different algorithms (in particular, we will
use R3 to issue for RSA keys, and E1 to issue for ECDSA keys). But
we want to roll that out slowly, continuing to use our RSA issuer
to issue for all types of public keys, except for ECDSA keys which
are presented by a specific set of allowed accounts.
This change adds a new config field to the CA, which lets us specify
a small list of registration IDs which are allowed to have issuance
from our ECDSA issuer. If the config list is empty, then all accounts
are allowed. The CA checks to see if the key being issued for is
ECDSA: if it is, it then checks to make sure that the associated
registration ID is in the allowlist. If the account is not allowed,
it then overrides the issuance algorithm to use RSA instead,
mimicking our old behavior. It also adds a new feature flag, which
can be enabled to skip the allowlist entirely (effectively allowing
all registered accounts). This feature flag will be enabled when
we're done with our testing and confident in our ECDSA issuance.
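A sketch of that selection logic is below. The helper name, parameters,
and the allowlist representation are illustrative; the real CA code and
config differ in detail.

```go
package ca

import (
	"crypto"
	"crypto/ecdsa"
	"crypto/x509"
)

// chooseIssuerAlg picks which issuer to use for a key: ECDSA only if the
// key is ECDSA and the account is allowed (an empty allowlist or the
// allow-everyone flag allows all accounts); otherwise fall back to the
// RSA issuer, mimicking the old behavior.
func chooseIssuerAlg(key crypto.PublicKey, regID int64, allowedRegIDs map[int64]bool, ecdsaForAll bool) x509.PublicKeyAlgorithm {
	if _, ok := key.(*ecdsa.PublicKey); !ok {
		return x509.RSA
	}
	if ecdsaForAll || len(allowedRegIDs) == 0 || allowedRegIDs[regID] {
		return x509.ECDSA
	}
	return x509.RSA
}
```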
Fixes #5259
The PrecertificateRevocation flag is turned on everywhere, so the
else case is unused code. This change updates the WFE to always
use the PrecertificateRevocation code path, and deprecates the old
feature flag.
The TestRevokeCertificateWithAuthz method was deleted because
it is redundant with TestRevokeCertificateWithAuthorizations.
Fixes #5240
Adds a new issuance library to replace CFSSL. Usage of the
new library is gated by a feature flag, meaning that until we fully deploy
the new signer we need to support both it and CFSSL, which makes
a few things a bit complicated.
One big follow-up change is that once CFSSL is completely gone we'll
be able to stop using CSRs as the internal representation of issuance
requests (i.e. instead of passing a CSR all the way through from the
WFE -> CA and then converting it to the new signer.IssuanceRequest,
we can just construct a signer.IssuanceRequest at the WFE (or RA) and
pass that through the backend instead, making things a lot less opaque).
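For context, an issuance request in the new library carries roughly the
fields below. This is a hypothetical, trimmed-down rendering for
illustration, not the library's actual definition.

```go
package signer

import (
	"crypto"
	"time"
)

// IssuanceRequest, roughly: the information the signer needs, which a CSR
// was previously used to carry all the way from the WFE to the CA.
type IssuanceRequest struct {
	PublicKey crypto.PublicKey
	Serial    []byte
	DNSNames  []string
	NotBefore time.Time
	NotAfter  time.Time
	// Plus extension controls, e.g. whether to embed SCTs or the CT poison
	// extension (omitted here).
}
```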
Fixes #4906.
The StoreKeyHashes feature flag controls whether rows are added to the
keyHashToSerial table. This feature is now enabled everywhere, so the
flag-protected code can be turned on unconditionally and the flag
removed from configs.
Related to #4895
This commit consists of three classes of changes:
1) Changing various command main.go files to always behave as they
would have when features.BlockedKeyTable was true. Also changing
one test in the same manner.
2) Removing the BlockedKeyTable flag from configuration in config-next,
because the flag is already live.
3) Moving the BlockedKeyTable flag to the "deprecated" section of
features.go, and regenerating featureflag_strings.go.
A future change will remove the BlockedKeyTable flag (and other
similarly deprecated flags) from features.go entirely.
Fixes #4873
Currently 99.99% of RSA keys we see in certificates at Let's Encrypt are
either 2048, 3072, or 4096 bits, but we support every 8-bit increment
between 2048 and 4096. Supporting these uncommon key sizes opens us up to
having to block much larger ranges of keys when dealing with something
like the Debian weak keys incident. Instead we should just reduce the
set of key sizes we support down to what people actually use.
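The check itself is simple. A sketch of the tightened policy (the goodkey
package's real implementation differs, but the effect is the same):

```go
package goodkey

import (
	"crypto/rsa"
	"errors"
)

// checkRSAKeySize is an illustrative version of the restricted policy:
// only the three sizes subscribers actually use are accepted.
func checkRSAKeySize(key *rsa.PublicKey) error {
	switch key.N.BitLen() {
	case 2048, 3072, 4096:
		return nil
	default:
		return errors.New("RSA key size must be 2048, 3072, or 4096 bits")
	}
}
```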
Fixes #4835.
Adds a daemon which monitors the new blockedKeys table, checks for any
unexpired, unrevoked certificates associated with the added SPKI hashes,
revokes them, and notifies the user that issued the certificates.
Fixes #4772.
Previously, we inserted data for tracking issued names into the `issuedNames` table
during `sa.AddCertificate`. A more robust solution is to do this during
`sa.AddPrecertificate` since this is when we've truly committed to having
issued for the names.
The new SA `WriteIssuedNamesPrecert` feature flag enables writing this table
during `AddPrecertificate`. The legacy behaviour continues with the flag
enabled or disabled but is updated to tolerate duplicate INSERT errors so that
it is possible to deploy this change across multiple SA instances safely.
Along the way I also updated `SA.AddPrecertificate` to perform its two
`INSERT`s in a transaction using the `db.WithTransaction` wrapper.
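Sketched with plain database/sql rather than the `db.WithTransaction`
wrapper's exact signature, the shape of that change is below; the table and
column names are illustrative. Both `INSERT`s commit together or roll back
together.

```go
package sa

import (
	"context"
	"database/sql"
)

// addPrecertificate writes the precertificate row and the issuedNames rows
// in one transaction, so a failure of either leaves no partial state.
func addPrecertificate(ctx context.Context, db *sql.DB, der []byte, serial string, names []string) error {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return err
	}
	defer tx.Rollback() // no-op once the transaction has been committed

	if _, err := tx.ExecContext(ctx,
		"INSERT INTO precertificates (serial, der) VALUES (?, ?)", serial, der); err != nil {
		return err
	}
	for _, name := range names {
		if _, err := tx.ExecContext(ctx,
			"INSERT INTO issuedNames (reversedName, serial) VALUES (?, ?)", name, serial); err != nil {
			return err
		}
	}
	return tx.Commit()
}
```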
Resolves https://github.com/letsencrypt/boulder/issues/4565
In the process I tweaked a few variable names in GetAuthorizations2 to
refer to just "authz" instead of "authz2" because it made things
clearer, particularly in the case of authz2IDMap, which is a map of
whether a given ID exists, not a map from authzs to IDs.
Fixes #4564
This avoids needing to send the entire certificate in OCSP generation
RPCs.
Ended up including a few cleanups that made the implementation easier.
Initially I was struggling with how to derive the issuer identification info.
We could just stick the full SPKI hash in certificateStatus, but that takes a
significant amount of space; we could configure unique issuer IDs in the CA
config, but that would require being very careful about keeping the IDs
constant and never reusing an ID; or we could store issuers in a table in the
database and use that as a lookup table, but that requires figuring out how to
get that info into the table, etc. Instead I've just gone with what I found to
be the easiest solution: deriving a stable ID from the cert hash. This means we
don't need to remember to configure anything special and the CA config stays
the same as it is now.
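The derivation is just a truncation of the certificate hash. Roughly (the
exact hash width, byte order, and package placement are implementation
details, not a spec):

```go
package ca

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/binary"
)

// issuerID derives a stable numeric ID from an issuer certificate by
// hashing its DER encoding and keeping the first few bytes. The same
// certificate always yields the same ID, so nothing extra needs to be
// configured in the CA.
func issuerID(cert *x509.Certificate) int64 {
	h := sha256.Sum256(cert.Raw)
	// Keep 4 bytes so the result fits comfortably in a signed 64-bit column.
	return int64(binary.BigEndian.Uint32(h[:4]))
}
```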
Fixes #4469.
Previously we used a JOIN on the orderToAuthz2 table in order to make sure
we only returned authorizations created using the ACME v2 API. Each time an
order is created a pivot row (order ID + authz ID) is added to the
orderToAuthz2 table. If a large number of orders are created that all contain
the same authorization, due to reuse, then the JOINed query would return a full
authorization row for each entry in the orderToAuthz2 table with the authorization
ID.
Instead we now filter out these authorizations by doing a second query against
the orderToAuthz2 table. Using this query still requires examining a large number
of rows, but because we don't need to construct a temporary table for the JOIN
and fill it with all the full authorization rows we should save resources.
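In outline, the two-step version looks like this; the SQL is illustrative
only, and the real SA builds these queries with its own helpers.

```go
package sa

const (
	// Step 1: fetch candidate authorization rows for the account and
	// identifiers (IN (?) is simplified here).
	candidateAuthzQuery = `
		SELECT id, identifierValue, status, expires
		FROM authz2
		WHERE registrationID = ? AND identifierValue IN (?) AND expires > ?`

	// Step 2: of those candidates, keep only the IDs that appear in the
	// pivot table, instead of JOINing a full authorization row for every
	// pivot entry.
	v2AuthzFilterQuery = `
		SELECT DISTINCT authzID
		FROM orderToAuthz2
		WHERE authzID IN (?)`
)
```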
Fixes #4500.
This change set makes the authz2 storage format the default format. It removes
most of the functionality related to the previous storage format, except for
the SA fallbacks and old gRPC methods which have been left for a follow-up
change in order to make these changes deployable without introducing
incompatibilities.
Fixes #4454.
When the `features.PrecertificateRevocation` feature flag is enabled the WFE2
will allow revoking certificates for a submitted precertificate. The legacy WFE1
behaviour remains unchanged (as before, (pre)certificates issued through the V1
API will be revocable with the V2 API).
Previously the WFE2 vetted the certificate from the revocation request by
looking up a final certificate by the serial number in the requested
certificate, and then doing a byte for byte comparison between the stored and
requested certificate.
Rather than adjust this logic to handle looking up and comparing stored
precertificates against requested precertificates (requiring new RPCs and an
additional round-trip) we choose to instead check the signature on the requested
certificate or precertificate and consider it valid for revocation if the
signature validates with one of the WFE2's known issuers. We trust the integrity
of our own signatures.
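The check amounts to one signature verification per configured issuer;
something like the simplified sketch below.

```go
package wfe2

import (
	"crypto/x509"
	"errors"
)

// issuedByKnownIssuer reports whether the submitted certificate or
// precertificate was signed by one of the WFE2's configured issuers.
func issuedByKnownIssuer(cert *x509.Certificate, issuers []*x509.Certificate) error {
	for _, issuer := range issuers {
		if err := cert.CheckSignatureFrom(issuer); err == nil {
			return nil
		}
	}
	return errors.New("certificate was not signed by a known issuer")
}
```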
An integration test that performs a revocation of a precertificate (in this case
one that never had a final certificate issued due to SCT embedding errors) with
all of the available authentication mechanisms is included.
Resolves https://github.com/letsencrypt/boulder/issues/4414
This change adds two tables and two methods in the SA, to store precertificates
and serial numbers.
In the CA, when the feature flag is turned on, we generate a serial number, store it,
sign a precertificate and OCSP, store them, and then return the precertificate. Storing
the serial as an additional step before signing the certificate adds an extra layer of
insurance against duplicate serials, and also serves as a check on database availability.
Since an error storing the serial prevents going on to sign the precertificate, this decreases
the chance of signing something while the database is down.
Right now, neither table has read operations available in the SA.
To make this work, I needed to remove the check for duplicate certificateStatus entry
when inserting a final certificate and its OCSP response. I also needed to remove
an error that can occur when expiration-mailer processes a precertificate that lacks
a final certificate. That error would otherwise have prevented further processing of
expiration warnings.
Fixes #4412
This change builds on #4417; please review that first for ease of review.
For authzv1, this actually executes a SQL DELETE for the unused challenges
when an authorization is updated upon validation.
For authzv2, this doesn't perform a delete, but changes the authorizations that
are returned so they don't include unused challenges.
In order to test the flag for both authz storage models, I set the feature flag in
both config/ and config-next/.
Fixes #4352
This rolls forward #4326 after it was reverted in #4328.
Resolves https://github.com/letsencrypt/boulder/issues/4329
The older query didn't have a `LIMIT 1` so it was returning multiple results,
but gorp's `SelectOne` was okay with multiple results when the selection was
going into an `int64`. When I changed this to a `struct` in #4326, gorp started
producing errors.
For this bug to manifest, an account needs to create an order, then fail
validation, twice in a row for a given domain name, then create an order once
more for the same domain name - that third request will fail because there are
multiple orders in the orderFqdnSets table for that domain.
Note that the bug condition doesn't happen when an account does three successful
issuances in a row, because finalizing an order (that is, issuing a certificate
for it) deletes the row in orderFqdnSets. Failing an authorization does not
delete the row in orderFqdnSets. I believe this was an intentional design
decision because an authorization can participate in many orders, and those
orders can have many other authorizations, so computing the updated state of
all those orders would be expensive (remember, order state is not persisted in
the DB but is calculated dynamically based on the authorizations it contains).
This wasn't detected in integration tests because we don't have any tests that
fail validation for the same domain multiple times. I filed an issue for an
integration test that would have incidentally caught this:
https://github.com/letsencrypt/boulder/issues/4332. There's also a more specific
test case in #4331.
This reverts commit 9fa360769e.
This commit can cause "gorp: multiple rows returned for: ..." under certain situations.
See #4329 for details of the follow-up.
When there are a lot of potential orders to reuse, the query could scan
unnecessary rows, sometimes leading to timeouts. The new query used
when the FasterGetOrderForNames feature flag is enabled uses the
available index more effectively and adds a LIMIT clause.
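A sketch of the reuse lookup with the LIMIT clause follows; the table is
the orderFqdnSets table mentioned above, but the column names and exact
query text are assumptions for illustration.

```go
package sa

import "gopkg.in/go-gorp/gorp.v2"

// getOrderIDForNames is an illustrative version of the order-reuse lookup:
// walk the index in order and stop at the first matching row instead of
// scanning every candidate order.
func getOrderIDForNames(dbMap *gorp.DbMap, setHash []byte, regID int64) (int64, error) {
	var orderID int64
	err := dbMap.SelectOne(&orderID, `
		SELECT orderID
		FROM orderFqdnSets
		WHERE setHash = ? AND registrationID = ? AND expires > NOW()
		ORDER BY expires ASC
		LIMIT 1`,
		setHash, regID)
	return orderID, err
}
```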
Add a flag to explicitly disable orders containing authz2 authorizations. After looking at a handful of much more complex solutions, this feels like the best option. With NewAuthorizationSchema disabled and DisableAuthz2Orders enabled, any requests for orders that include authz2 authorizations will return a 404 (where previously they would return a 500).
Fixes #4263.
In November 2019 we will be removing support for legacy pre-RFC-8555
unauthenticated GET requests for accessing ACME resources. A new
`MandatoryPOSTAsGET` feature flag is added to the WFE2 to allow
enforcing this change. Once this feature flag has been activated in
November we can remove both it and the WFE2 code supporting GET requests.