Adds a rocsp Redis client to the SA if cluster information is provided in the
SA config. If a Redis cluster is configured, the SA will attempt to write all
new certificate OCSP responses added with sa.AddPrecertificate to the Redis
cluster, but will not block or fail on errors.
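A minimal sketch of the best-effort write pattern, under stated assumptions (all names here are illustrative stand-ins, not Boulder's actual rocsp API):

```go
import (
	"context"
	"time"
)

// Logger and ResponseStorer are hypothetical stand-ins for the real
// logging and rocsp client types.
type Logger interface{ Warningf(format string, a ...interface{}) }

type ResponseStorer interface {
	StoreResponse(ctx context.Context, resp []byte) error
}

// storeOCSPRedis makes a best-effort attempt to write an OCSP response
// to Redis, off the request path, so it can neither block nor fail the
// surrounding AddPrecertificate call.
func storeOCSPRedis(log Logger, client ResponseStorer, resp []byte, timeout time.Duration) {
	go func() {
		ctx, cancel := context.WithTimeout(context.Background(), timeout)
		defer cancel()
		if err := client.StoreResponse(ctx, resp); err != nil {
			// Log and drop the error rather than propagating it.
			log.Warningf("storing OCSP response in Redis: %s", err)
		}
	}()
}
```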
Fixes: #5871
Add two new gRPC methods to the SA (sketched after this list):
- `RevokeCertByKey` will be used when the API request was signed by the
certificate's keypair, rather than a Subscriber keypair. If the
request is for reason `keyCompromise`, it will ensure that the key is
added to the blocked keys table, and will attempt to "re-revoke" a
certificate that was already revoked for some other reason.
- `RevokeCertByApplicant` supports both the path where the original
  Subscriber requests revocation via the API and the path where another
  account which has proven control over all of the identifiers in the
  certificate does so. It does not allow the requested reason to be
  `keyCompromise`, as these requests do not represent a demonstration of
  key compromise.
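A rough sketch of the `RevokeCertByKey` behavior described above (the helper functions are hypothetical stand-ins for the shared SA internals):

```go
import (
	"context"
	"errors"

	"golang.org/x/crypto/ocsp"
)

var errAlreadyRevoked = errors.New("already revoked")

// Hypothetical stubs standing in for the real shared helper functions.
var (
	addBlockedKey            = func(ctx context.Context, keyHash []byte) error { return nil }
	revokeCertificate        = func(ctx context.Context, serial string, reason int64) error { return nil }
	updateRevokedCertificate = func(ctx context.Context, serial string, reason int64) error { return nil }
)

func revokeCertByKey(ctx context.Context, serial string, keyHash []byte, reason int64) error {
	if reason == ocsp.KeyCompromise {
		// Always ensure the key lands in the blocked keys table.
		if err := addBlockedKey(ctx, keyHash); err != nil {
			return err
		}
	}
	err := revokeCertificate(ctx, serial, reason)
	if errors.Is(err, errAlreadyRevoked) && reason == ocsp.KeyCompromise {
		// "Re-revoke": upgrade a cert already revoked for another reason.
		return updateRevokedCertificate(ctx, serial, reason)
	}
	return err
}
```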
In addition, add a new feature flag `MozRevocationReasons` which
controls the behavior of these new methods. If the flag is not set, they
behave as they have historically (see above). If the flag is set to true,
then the new methods enforce the upcoming Mozilla policies around
revocation reasons (see the sketch after this list), namely:
- Only the original Subscriber can choose the revocation reason; other
clients will get a set reason code based on the method of requesting
revocation. When the original Subscriber requests reason
`keyCompromise`, this request will be honored, but the key will not be
blocked and other certificates with that key will not also be revoked.
- Revocations signed with the certificate key will always get reason
`keyCompromise`, because we do not know who is sending the request and
therefore must assume that the use of the key in this way represents
compromise. Because these requests will always be for reason
`keyCompromise`, they will always be added to the blocked keys table
and they will always attempt "re-revocation".
- Revocations authorized via control of all names in the cert will
always get reason `cessationOfOperation`, which is to be used when the
original Subscriber does not control all names in the certificate
anymore.
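A compact sketch of that reason-selection policy (the method discriminator and function shape are illustrative; the reason codes are the standard ones from golang.org/x/crypto/ocsp):

```go
import "golang.org/x/crypto/ocsp"

// reasonForRevocation maps how a revocation was authorized to the reason
// code stored, per the Mozilla policy described above.
func reasonForRevocation(method string, requested int) int {
	switch method {
	case "applicant": // original Subscriber: honor the requested reason
		return requested
	case "key": // request signed with the certificate key
		return ocsp.KeyCompromise
	case "control": // proved control over all names in the cert
		return ocsp.CessationOfOperation
	default:
		return ocsp.Unspecified
	}
}
```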
Finally, update the existing `AdministrativelyRevokeCertificate` method
to use the new helper functions shared by the two new methods.
Part of #5936
Add a new gRPC method `UpdateRevokedCertificate` to the SA. This
method takes the same argument as the existing `RevokeCertificate` RPC,
but only operates on certificates that have already been revoked with a
reason other than keyCompromise (cf. `RevokeCertificate`, which only
operates on certificates that have not been revoked).
One thing to be careful of here is that storing an updated revocation
reason should not also change the revocation date. To support this, add
a new field to the existing `RevokeCertificateRequest` that allows us to
differentiate between the time at which the new OCSP response was
created and the time at which the revocation went into effect.
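Sketched as a request, with hypothetical names for the two timestamp fields:

```go
req := &sapb.RevokeCertificateRequest{
	Serial: serial,
	Reason: reason,
	// Hypothetical field names: Date is when the new OCSP response was
	// created; Backdate is when the revocation originally took effect.
	Date:     now.UnixNano(),
	Backdate: revokedAt.UnixNano(),
}
```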
Part of #5936
We have the `core.AcmeStatus` type precisely so that we don't have to
rely on correctly typing the word `"pending"` everywhere. Use those values
instead of raw strings when converting between string-like Authz statuses
and the database bitfield.
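For illustration, the conversion can be table-driven so the raw strings never appear (a sketch; the exact numeric values and mapping live in the SA's model code):

```go
import "github.com/letsencrypt/boulder/core"

// uintToStatus maps authz2 bitfield values to core.AcmeStatus constants,
// so status strings like "pending" are never typed out by hand.
var uintToStatus = map[uint8]core.AcmeStatus{
	0: core.StatusPending,
	1: core.StatusValid,
	2: core.StatusInvalid,
	3: core.StatusDeactivated,
	4: core.StatusRevoked,
}
```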
We have decided that we don't like the `if err := call(); err != nil`
syntax, because it creates confusing scopes, but we have not cleaned up
all existing instances of that syntax. However, we have now found a
case where that syntax enables a bug: It caused readers to believe that
a later `err = call()` statement was assigning to an already-declared `err`
in the local scope, when in fact it was assigning to an
already-declared `err` in the parent scope of a closure. This caused our
ineffassign and staticcheck linters to be unable to analyze the
lifetime of the `err` variable, and so they did not complain when we
never checked the actual value of that error.
This change standardizes on the two-line error checking syntax
everywhere, so that we can more easily ensure that our linters are
correctly analyzing all error assignments.
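A distilled illustration of the hazard (stub functions; not the actual code in question):

```go
func example() error {
	// The standardized two-line form: assignment and check are separate
	// statements in the same scope, which linters can analyze.
	err := setup()
	if err != nil {
		return err
	}
	cleanup := func() {
		// This assigns to example's err, not to a fresh local variable.
		// Amid one-line `if err := ...` checks, it is easy to misread as
		// a locally-scoped, locally-checked error.
		err = teardown()
	}
	cleanup()
	return err // if this read were missing, the error would vanish silently
}

func setup() error    { return nil }
func teardown() error { return nil }
```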
We have no automated processes which transition Authorizations into
the "revoked" status. Therefore the code which compute the status of
an Order based on the status of its associated Authorizations did not
handle Authorizations with status `core.StatusRevoked`. This code
started failing (by falling through to its default case) when Revoked
Authorizations were introduced to the database.
Unfortunately, the `statusForOrder` function is called during /new-order
handling, to determine whether an existing Order can be reused for
the new request. This means that clients attempting to place a new
order for which there was a candidate existing order were getting 500s,
rather than getting either a new or reused Order object.
To fix the error, introduce a new case which handles `core.StatusRevoked`.
Also collapse the various integer counters which were all just counting
"any status other than Valid or Pending" into a single counter.
This bug would have been avoided by Rust's requirement that case
switches cover all of their possible branches.
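A sketch of the fixed counting logic (function shape and counter names illustrative):

```go
import (
	"fmt"

	"github.com/letsencrypt/boulder/core"
)

// countAuthzStatuses handles revoked authzs explicitly and collapses
// everything that is neither valid nor pending into a single counter.
func countAuthzStatuses(statuses []core.AcmeStatus) (pending, valid, other int, err error) {
	for _, st := range statuses {
		switch st {
		case core.StatusPending:
			pending++
		case core.StatusValid:
			valid++
		case core.StatusInvalid, core.StatusDeactivated, core.StatusRevoked:
			other++
		default:
			return 0, 0, 0, fmt.Errorf("unrecognized authz status %q", st)
		}
	}
	return pending, valid, other, nil
}
```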
Running an older version (v0.0.1-2020.1.4) of `staticcheck` in
whole-program mode (`staticcheck --unused.whole-program=true -- ./...`)
finds various instances of unused code which don't normally show up
as CI issues. I've used this to find and remove a large chunk of the
unused code, to pave the way for additional large deletions accompanying
the WFE1 removal.
Part of #5681
Empty the bodies of the WFE's and RA's `NewAuthorization` methods. These
were used exclusively by the ACMEv1 flow. Also remove any helper functions
which were used exclusively by this code, and any tests which were testing
exclusively this code and which have equivalent tests for the ACMEv2 flow.
Greatly simplify `SA.GetAuthorizations2`, as it no longer has to contend with
there being two different kinds of authorizations in the database. Add a few
TODOs to consider removing a few other SA gRPC methods which no longer
have any callers.
Part of #5681
Add a new feature flag `GetAuthzUseIndex` which causes the SA
to add `USE INDEX (regID_identifier_status_expires_idx)` to its authz2
database queries. This should encourage the query planner to actually
use that index instead of falling back to large table-scans.
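Sketched as a fragment (column list and query shape illustrative; the flag check uses Boulder's features package as described):

```go
query := "SELECT id, identifier, status, expires FROM authz2"
if features.Enabled(features.GetAuthzUseIndex) {
	// Nudge the planner toward the composite index.
	query += " USE INDEX (regID_identifier_status_expires_idx)"
}
query += " WHERE registrationID = ? AND identifier = ? AND status = ? AND expires > ?"
```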
Fixes #5822
Add a feature flag which causes the SA to switch between using the
traditional read-write database connector (pointed at the primary db)
or the newer read-only database connector (usually pointed at a
replica) when executing the `GetAuthorizations2` query.
When a valid authorization is stored in the database, the authz2
table's attemptedAt column is set based on the challenge's `Validated`
value. Use this value in `checkAuthorizationsCAA` to determine whether
an authorization is sufficiently stale to need a recheck of the CAA DNS
record, and error if the time is nil. Keep the old codepath as a safety
check, and increment a metric if the old codepath is used.
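The staleness decision, sketched (the recheck window shown is illustrative):

```go
import (
	"errors"
	"time"
)

// caaRecheckWindow is how long a CAA check stays fresh (illustrative).
const caaRecheckWindow = 7 * time.Hour

func needsCAARecheck(attemptedAt *time.Time, now time.Time) (bool, error) {
	if attemptedAt == nil {
		return false, errors.New("authorization has no attemptedAt timestamp")
	}
	return attemptedAt.Add(caaRecheckWindow).Before(now), nil
}
```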
Remove the last of the gRPC wrapper files. In order to do so:
- Remove the `core.StorageGetter` interface. Replace it with a new
interface (whose methods include the `...grpc.CallOption` arg)
inside the `sa/proto/` package.
- Remove the `core.StorageAdder` interface. There's no real use-case
for having a write-only interface.
- Remove the `core.StorageAuthority` interface, as it is now redundant
with the autogenerated `sapb.StorageAuthorityClient` interface.
- Replace the `certificateStorage` interface (which appears in two
different places) with a single unified interface also in `sa/proto/`.
- Update all test mocks to include the `_ ...grpc.CallOption` arg in
their method signatures so they match the gRPC client interface.
- Delete many methods from mocks which are no longer necessary (mostly
because they're mocking old authz1 methods that no longer exist).
- Move the two `test/inmem/` wrappers into their own sub-packages to
avoid an import cycle.
- Simplify the `satest` package to satisfy one of its TODOs and to
avoid an import cycle.
- Add many methods to the `test/inmem/sa/` wrapper, to accommodate all
of the methods which are called in unittests.
Fixes #5600
Overhaul how the expiration-mailer checks whether a certificate has been renewed (`certIsRenewed`).
First, change the helper function to take the list of names (which
can be hashed into an fqdnSet) and the issuance date. This allows the
search for renewals to be a much simpler linear scan rather than an
ugly outer left join. Second, update the query to examine both the
`fqdnSets` and `fqdnSets_old` tables, to account for the fact that
this code cares about more time (~90d) than the `fqdnSets` table
currently holds.
Also export the SA's `HashNames` method so it can be used by the mailer,
and update the mailer's tests to use correct name hashes instead of
fake human-readable hashes.
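A sketch of the exported hashing, assuming the scheme is a SHA256 over the sorted, deduplicated, lowercased names joined with commas:

```go
import (
	"crypto/sha256"
	"sort"
	"strings"
)

// HashNames returns a digest over a set of names that is insensitive to
// ordering, case, and duplicates, suitable for fqdnSet lookups.
func HashNames(names []string) []byte {
	ns := make([]string, len(names))
	for i, n := range names {
		ns[i] = strings.ToLower(n)
	}
	sort.Strings(ns)
	// drop adjacent duplicates after sorting
	var deduped []string
	for _, n := range ns {
		if len(deduped) == 0 || n != deduped[len(deduped)-1] {
			deduped = append(deduped, n)
		}
	}
	h := sha256.Sum256([]byte(strings.Join(deduped, ",")))
	return h[:]
}
```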
Fixes #5672
In `sa.checkFQDNSetExists`, query both the normal `fqdnSets` and the
`fqdnSets_old` tables. The `fqdnSets` table was recently truncated to
only have 7 days worth of data, but this helper function is used to
bypass other rate limits if there exists a prior certificate for the
exact same set of names, and that functionality cares about at least
90 days worth of data. Therefore we need to query both tables, at least
until `fqdnSets` contains 90 days worth of data again.
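The fallback, sketched as the body of a helper returning (bool, error); query shapes are illustrative:

```go
// Consult the recent table first; fall back to the older data until
// fqdnSets again holds a full 90 days.
var count int64
err := dbMap.SelectOne(&count,
	"SELECT COUNT(1) FROM fqdnSets WHERE setHash = ? AND issued > ?",
	setHash, window)
if err != nil {
	return false, err
}
if count == 0 {
	err = dbMap.SelectOne(&count,
		"SELECT COUNT(1) FROM fqdnSets_old WHERE setHash = ? AND issued > ?",
		setHash, window)
	if err != nil {
		return false, err
	}
}
return count > 0, nil
```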
Also make a variety of other changes to support this change: creating
the `fqdnSets_old` table in our test environment, documenting various
places where it needs to be cleaned up, and removing some unused code.
Fixes #5671
Add a new method to the SA's gRPC interface which takes both an Order
and a list of new Authorizations to insert into the database, and adds
both (as well as the various ancillary rows) inside a transaction.
To enable this, add a new abstraction layer inside the `db/` package
that facilitates inserting many rows at once, as we do for the `authz2`,
`orderToAuthz2`, and `requestedNames` tables in this operation.
Finally, add a new codepath to the RA (and a feature flag to control it)
which uses this new SA method instead of separately calling the
`NewAuthorization` method multiple times. Enable this feature flag in
the config-next integration tests.
This should reduce the failure rate of the new-order flow by reducing
the number of database operations by coalescing multiple inserts into a
single multi-row insert. It should also reduce the incidence of new
authorizations being created in the database but then never exposed to
the subscriber because of a failure later in the new-order flow, both by
reducing failures overall and by adding those authorizations in a
transaction which will be rolled back if there is a later failure.
Fixes #5577
* Make `sa.SetOrderError` a passthrough.
* Create new proto message `sapb.SetOrderErrorRequest`
that includes only the order id and error to avoid passing around
unnecessary fields of an order.
Part of: #5533
* Make `sa.NewOrder` a passthrough.
* Create a new proto message `sapb.NewOrderRequest`
that includes only the information needed to store a new order.
Part of: #5533
* Make the `sa.SetOrderProcessing` gRPC wrapper a passthrough. Also, change the
server method to accept an `*sapb.OrderRequest` (essentially just an
order ID) as the parameter instead of a whole order.
Part of: #5533
- Make `CountRegistrationsByIP` a pass-through
- Make `CountRegistrationsByIPRange` a pass-through
- Make `CountOrders` a pass-through
- Make `CountFQDNSets` a pass-through
- Make `CountPendingAuthorizations2` a pass-through
- Make `CountInvalidAuthorizations2` a pass-through
Fixes #5535
- Make `GetAuthorization2` a pass-through
- Make `GetAuthorizations2` a pass-through
- Make `GetPendingAuthorization2` a pass-through
- Make `GetValidOrderAuthorizations2` a pass-through
- Make `GetValidAuthorizations2` a pass-through
- Make `NewAuthorizations2` a pass-through
- Make `FinalizeAuthorization2` a pass-through
- Make `DeactivateAuthorization2` a pass-through
Fixes #5534
Make the gRPC wrappers for the SA's `AddCertificate`,
`AddPrecertificate`, `AddSerial`, and `RevokeCertificate`
methods simple pass-throughs.
Fix up a couple of tests that were passing only because their
requests to in-memory SA objects were not passing through
the wrapper's consistency checks.
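For illustration, a bare pass-through wrapper reduces to pure delegation; the method and type names below are representative of the wrappers being simplified, not verbatim:

```go
// Before, the wrapper validated fields and converted between core and
// protobuf types; now it simply forwards the protobuf request.
func (sac StorageAuthorityClientWrapper) AddSerial(ctx context.Context, req *sapb.AddSerialRequest) (*emptypb.Empty, error) {
	return sac.inner.AddSerial(ctx, req)
}
```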
Part of #5532
Make the gRPC wrappers for sa.GetCertificate and
sa.GetPrecertificate bare passthroughs. The latter of
these already took and returned appropriate protobufs,
so this change mostly just makes the former look like the
latter.
Part of #5532
Batch all new authorizations to be inserted into the database into a
single SQL `INSERT` statement, rather than inserting each row one
at a time. Also add a unit test which inserts 100 authzs to test this
new batching behavior.
Since each statement has significant cost, when there are a lot
of authorizations to insert at a time, the round-trips here can
eat up a lot of time both on the SA and on the database, as it
works to commit each statement as its own transaction. gorp does
not have any means to automatically make multi-value inserts, so
we do it in a bespoke fashion here, somewhat akin to what we
already do in AddBlockedKey and addIssuedNames but with more
loops.
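Condensed, the bespoke multi-value INSERT looks like this (column list and model are illustrative):

```go
import (
	"strings"
	"time"
)

// authzModel is a pared-down illustrative row type.
type authzModel struct {
	IdentifierValue string
	RegistrationID  int64
	Expires         time.Time
}

// buildMultiInsert returns one INSERT statement with a (?,?,?) group per
// row, plus the flattened argument list.
func buildMultiInsert(authzs []authzModel) (string, []interface{}) {
	clauses := make([]string, 0, len(authzs))
	var args []interface{}
	for _, a := range authzs {
		clauses = append(clauses, "(?,?,?)")
		args = append(args, a.IdentifierValue, a.RegistrationID, a.Expires)
	}
	query := "INSERT INTO authz2 (identifierValue, registrationID, expires) VALUES " +
		strings.Join(clauses, ",")
	return query, args
}
```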
Fixes #5578
Remove all error checking and type transformation from the gRPC wrappers
for the following methods on the SA:
- GetRegistration
- GetRegistrationByKey
- NewRegistration
- UpdateRegistration
- DeactivateRegistration
Update callers of these methods to construct the appropriate protobuf
request messages directly, and to consume the protobuf response messages
directly. In many cases, this requires changing the way that clients
handle the `Jwk` field (from expecting a `JSONWebKey` to expecting a
slice of bytes) and the `Contacts` field (from expecting a possibly-nil
pointer to relying on the value of the `ContactsPresent` boolean field).
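The new client-side handling, sketched (field names as described above; the exact proto shape may differ, and jose here is gopkg.in/square/go-jose.v2):

```go
// Jwk now arrives as bytes rather than a *jose.JSONWebKey.
var key jose.JSONWebKey
if err := key.UnmarshalJSON(reg.Jwk); err != nil {
	return err
}
// Contacts: consult the ContactsPresent boolean rather than nil-ness.
var contacts []string
if reg.ContactsPresent {
	contacts = reg.Contacts
}
```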
Implement two new methods in `sa/model.go` to convert directly between
database models and protobuf messages, rather than round-tripping
through `core` objects in between. Delete the older methods that
converted between database models and `core` objects, as they are no
longer necessary.
Update test mocks to have the correct signatures, and update tests to
not rely on `JSONWebKey` and instead use byte slices.
Fixes #5531
This changeset adds a second DB connect string for the SA for use in
read-only queries that are not themselves dependencies for read-write
queries. In other words, this is attempting to only catch things like
rate-limit `SELECT`s and other coarse-counting, so we can potentially
move those read queries off the read-write primary database.
It also adds a second DB connect string to the OCSP Updater. This is a
little trickier, as the subsequent `UPDATE`s _are_ dependent on the
output of the `SELECT`, but in this case it's operating on data batches,
and a few seconds' replication latency are several orders of magnitude
below the threshold for update frequency, so any certificates that
aren't caught on run `n` can be caught on run `n+1`.
Since we export DB metrics to Prometheus, this also refactors
`InitDBMetrics` to take a DB Address (host:port tuple) and User out of
the DB connection DSN and include those as labels in the metrics.
Fixes #5550
Fixes #4985
Replace `core.Empty` with `google.protobuf.Empty` in all of our gRPC
methods which consume or return an empty protobuf. The golang core
proto libraries provide an empty message type, so there is no need
for us to reinvent the wheel.
This change is backwards-compatible and does not require a special
deploy. The protobuf message descriptions of `core.Empty` and
`google.protobuf.Empty` are identical, so their wire-formats are
indistinguishable and therefore interoperable / cross-compatible.
Fixes #5443
Move the validated timestamp to the RA where the challenge is passed to
the SA for database storage. If a challenge becomes valid or invalid, take
the validated timestamp and store it in the attemptedAt field of the
authz2 table. Upon retrieval of the challenge from the database, add the
attemptedAt value to challenge.Validated which is passed back to the WFE
and presented to the user as part of the challenge as required in ACME
RFC8555.
Fix: #5198
This change refactors the sa.NewOrder method to be more readable.
Previously, the outer method both modified and returned the request
object, a confusing violation of abstraction. Simultaneously, the inner
transaction function modified its input order object, but returned the
original request object which it hadn't modified.
Now, the NewOrder method does not modify its input, and instead
returns an all-new Order object. Additionally, the inner transaction
function does not modify its input, and returns an order model with
relevant fields set.
Part of #5166
It's not vital that this row be strongly consistent with the other
updates. And updating it inside the transaction means we hold a lock on
this row while doing a bunch of other expensive inserts, which is likely
creating lock contention.
One slightly surprising / interesting thing: Since core types like
Order and Registration are still proto2 and have pointer fields,
there are actually some places in this PR where I had to add
a `*` rather than delete an `&`, because I was taking a pointer
field from one of those core types and passing it as a field in
an SA RPC request.
Fixes #5037.
As part of the migration to proto3, any fields in requests that may be
zero should also be allowed to be nil. That's because proto3 will
represent those fields as absent when they have their zero value.
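The practical consequence for request validation, sketched with an illustrative field and error:

```go
// Under proto2 a zero-valued field was still a non-nil pointer; under
// proto3 it is plain 0 and indistinguishable from "unset". Validation
// must therefore tolerate absence wherever zero is a legitimate value.
// errIncompleteRequest is an illustrative sentinel error.
if req.RegistrationID == 0 {
	return nil, errIncompleteRequest
}
```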
This is based on a manual review of the wrappers for the SA, plus
a pair of integration test runs. For the integration test runs I took these
steps:
1. Copy sa/proto to sa/proto2
2. Change sa/proto to use proto3 and regenerate.
3. In sa/*.go and cmd/boulder-sa/main.go, update the imports to use the
proto2 version.
4. Split grpc/sa-wrappers.go into sa-server-wrappers.go and sa-wrappers.go
(containing the client code)
5. In sa-server-wrappers.go, change the import to use sa/proto2.
6. In sa-server-wrappers.go, make a local copy of the core.StorageAuthority
interface that uses the sa/proto2 types. This was necessary as
a temporary kludge because of how the server wrapper internally
uses the core.StorageAuthority interface.
7. Fix all the pointer-vs-value build errors in every other package.
8. Run integration tests.
I also performed those steps with proto2 and proto3 swapped, to confirm the
behavior when a proto2 client talks to a proto3 SA.
Previously, this limit was bucketed by hour, but that created too much
sudden traffic at the beginning of each hour as accounts' rate limits
expired. Chunking by the minute should make it possible to smooth out
traffic more.
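The bucketing change, sketched:

```go
// before: per-hour buckets caused synchronized bursts as limits expired
// timeBucket := issued.Truncate(time.Hour)
// after: per-minute buckets spread those expirations across the hour
timeBucket := issued.Truncate(time.Minute)
```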
Simplify database interactions
This change is a result of an audit of all places where
Go code directly constructs SQL queries and executes them
against a dbMap, with the goal of eliminating all instances
of constructing a well-known object type (such as a
core.CertificateStatus) from explicitly-listed database columns.
Instead, we should be relying on helper functions defined in the
sa itself to determine which columns are relevant for the
construction of any given object.
This audit did not find many places where this was occurring. It
did reveal a few simplifications, which are contained in this
change:
1) Greater use of existing SelectFoo methods provided by models.go
2) Streamlining of various SelectSingularFoo methods to always
select by serial string, rather than user-provided WHERE clause
3) One spot (in ocsp-responder) where using a well-known type seemed
better than using a more minimal custom type
Addresses #4899
Adds a daemon which monitors the new blockedKeys table, checks for any unexpired, unrevoked certificates that are associated with the added SPKI hashes, revokes them, and notifies the user that issued the certificates.
Fixes #4772.
In addition to base64(sha256(spki)), change KeyDigest to return [32]byte, and add KeyDigestB64 which provides the base64-encoded output that KeyDigest used to provide. Also update all call sites.
This cleans up after the authzv2 migration and makes names a little
easier to read, since there is no longer a v1/v2 distinction. This
leaves the names of tables the same since they would require a migration
to change.
In a handful of places I've nuked old stats which are not used in any alerts or dashboards as they either duplicate other stats or don't provide much insight/have never actually been used. If we feel like we need them again in the future it's trivial to add them back.
There aren't many dashboards that rely on old statsd style metrics, but a few will need to be updated when this change is deployed. There are also a few cases where prometheus labels have been changed from camel to snake case, dashboards that use these will also need to be updated. As far as I can tell no alerts are impacted by this change.
Fixes #4591.
New types and related infrastructure are added to the `db` package to allow
wrapping gorp DbMaps and Transactions.
The wrapped versions return a special `db.ErrDatabaseOp` error type when errors
occur. The new error type includes additional information such as the operation
that failed and the related table.
Where possible we determine the table based on the types of the gorp function
arguments. Where that isn't possible (e.g. with raw SQL queries) we try to use
a simple regexp approach to find the table name. This isn't great for general
SQL but works well enough for Boulder's existing SQL queries.
To get additional confidence my regexps work for all of Boulder's queries
I temporarily changed the `db` package's `tableFromQuery` function to panic if
the table couldn't be determined. I re-ran the full unit and integration test
suites with this configuration and saw no panics.
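The error type's shape, sketched from the description above (details may differ):

```go
import "fmt"

// ErrDatabaseOp wraps an error from a failed database operation together
// with the operation and the table involved.
type ErrDatabaseOp struct {
	Op    string // e.g. "select one", "insert"
	Table string
	Err   error
}

func (e ErrDatabaseOp) Error() string {
	return fmt.Sprintf("failed to %s %s: %s", e.Op, e.Table, e.Err)
}

// Unwrap lets errors.Is and errors.As see through the wrapper.
func (e ErrDatabaseOp) Unwrap() error { return e.Err }
```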
Resolves https://github.com/letsencrypt/boulder/issues/4559
The AddCertificate processing related to updating the fqdnSets and certificatesPerNames tables can be done in a separate transaction from the inserts to issuedNames and certificates.
This has the advantage of letting the overall AddCertificate request succeed when the primary transaction succeeds but the rate limit update transaction fails. We are OK with slightly incorrect rate limit results if it means more AddCertificate requests succeed and there are fewer orphaned final certificates.
To maintain visibility we audit log when the rate limit transaction fails and also increment a new failedAddCertRLTransactions prometheus counter.
Resolves #4566
Previously we inserted data for tracking issued names into the `issuedNames` table
during `sa.AddCertificate`. A more robust solution is to do this during
`sa.AddPrecertificate` since this is when we've truly committed to having
issued for the names.
The new SA `WriteIssuedNamesPrecert` feature flag enables writing this table
during `AddPrecertificate`. The legacy behaviour continues with the flag
enabled or disabled but is updated to tolerate duplicate INSERT errors so that
it is possible to deploy this change across multiple SA instances safely.
Along the way I also updated `SA.AddPrecertificate` to perform its two
`INSERT`s in a transaction using the `db.WithTransaction` wrapper.
Resolves https://github.com/letsencrypt/boulder/issues/4565
In the process I tweaked a few variable names in GetAuthorizations2 to
refer to just "authz" instead of "authz2" because it made things
clearer, particularly in the case of authz2IDMap, which is a map of
whether a given ID exists, not a map from authzs to IDs.
Fixes #4564
This avoids needing to send the entire certificate in OCSP generation
RPCs.
Ended up including a few cleanups that made the implementation easier.
Initially I was struggling with how to derive the issuer identification info.
We could just stick the full SPKI hash in certificateStatus, but that takes a
significant amount of space; we could configure unique issuer IDs in the CA
config, but that would require being very careful about keeping the IDs
constant and never reusing an ID; or we could store issuers in a table in the
database and use that as a lookup table, but that requires figuring out how to
get that info into the table, etc. Instead I've just gone with what I found to
be the easiest solution: deriving a stable ID from the cert hash. This means we
don't need to remember to configure anything special and the CA config stays
the same as it is now.
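Deriving the stable ID, sketched (function name and truncation width are illustrative):

```go
import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/binary"
)

// idForIssuer derives a stable int64 identifier from the issuer
// certificate's DER bytes, so no ID ever needs to be configured.
func idForIssuer(cert *x509.Certificate) int64 {
	h := sha256.Sum256(cert.Raw)
	return int64(binary.BigEndian.Uint32(h[:4]))
}
```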
Fixes #4469.
The `boulder-janitor` is extended to cleanup rows from the `orders` table that
have expired beyond the configured grace period, and the associated referencing
rows in `requestedNames`, `orderFqdnSets`, and `orderToAuthz2`.
To make implementing the transaction work for the deletions easier/consistent
I lifted the SA's `WithTransaction` code and assoc. functions to a new shared
`db` package. This also let me drop the one-off `janitorDb` interface from the
existing code.
There is an associated change to the `GRANT` statements for the `janitor` DB
user to allow it to find/delete the rows related to orders.
Resolves https://github.com/letsencrypt/boulder/issues/4527
Previously we used a JOIN on the orderToAuthz2 table in order to make sure
we only returned authorizations created using the ACME v2 API. Each time an
order is created a pivot row (order ID + authz ID) is added to the
orderToAuthz2 table. If a large number of orders are created that all contain
the same authorization, due to reuse, then the JOINed query would return a full
authorization row for each entry in the orderToAuthz2 table with the authorization
ID.
Instead we now filter out these authorizations by doing a second query against
the orderToAuthz2 table. Using this query still requires examining a large number
of rows, but because we don't need to construct a temporary table for the JOIN
and fill it with all the full authorization rows we should save resources.
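The reworked lookup, sketched (query shape and helper are illustrative; the local interface mirrors gorp's Select signature):

```go
import "strings"

type selector interface {
	Select(holder interface{}, query string, args ...interface{}) ([]interface{}, error)
}

// v2AuthzIDs filters with a second query against orderToAuthz2 instead
// of a JOIN against the full authorization rows.
func v2AuthzIDs(db selector, authzIDs []int64) ([]int64, error) {
	if len(authzIDs) == 0 {
		return nil, nil
	}
	qmarks := strings.TrimSuffix(strings.Repeat("?,", len(authzIDs)), ",")
	args := make([]interface{}, len(authzIDs))
	for i, id := range authzIDs {
		args[i] = id
	}
	var v2IDs []int64
	_, err := db.Select(&v2IDs,
		"SELECT DISTINCT(authzID) FROM orderToAuthz2 WHERE authzID IN ("+qmarks+")",
		args...)
	return v2IDs, err
}
```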
Fixes #4500.
This creates the correct type of backend service for the OCSP generator.
It also adds an invocation of orphan-finder during the integration
tests.
This also adds a minor safety check to SA that I hit while writing the
test. Without this safety check, passing a certificate with no DNSNames
to AddCertificate would result in an obscure MariaDB syntax error
without enough context to track it down. In normal circumstances this
shouldn't be hit, but it will be good to have a solid error message if
we hit it in tests sometime.
Also, this tweaks the .travis.yml so it explicitly sets BOULDER_CONFIG_DIR
to test/config in the default case. Because the docker-compose run
command uses -e BOULDER_CONFIG_DIR="${BOULDER_CONFIG_DIR}",
we were setting a blank BOULDER_CONFIG_DIR in the default case.
Since the Python startservers script sets a default if BOULDER_CONFIG_DIR
is not set, we haven't noticed this before. But since this test case relies
on the actual environment variable, it became an issue.
Fixes #4499
Addresses two issues introduced in #4476:
* Keep setting the V2 field in modelToAuthzPB so RPCs returned from new components to old don't cause panics
* Don't return expired orders from the SA, so that users requesting old orders that contain old style authorizations don't cause breakage in the RA
We need to apply some fixes for bugs introduced in #4476 before it can be deployed; as such, we need to revert #4495, as there needs to be a full deploy cycle between these two changes.
This reverts commit 3ae1ae1.
😭
This change set makes the authz2 storage format the default format. It removes
most of the functionality related to the previous storage format, except for
the SA fallbacks and old gRPC methods which have been left for a follow-up
change in order to make these changes deployable without introducing
incompatibilities.
Fixes #4454.
We have a nice `sa/precertificates.go` file that holds `AddPrecertificate`
(and other precert functions). Let's put `GetPrecertificate` there
too instead of in the more generic `sa/sa.go` file.
Since 9906c93217 when
`features.PrecertificateOCSP` is enabled it is possible for there to be
`certificateStatus` rows that correspond to `precertificates` that do not have
a matching final `certificates` row. This happens in the case where we began
serving OCSP for a precert and weren't able to issue a final certificate.
Prior to the fix in this branch when the `ocsp-updater` would find stale OCSP
responses by querying the `certificateStatus` table it would error in
`generateResponse` when it couldn't find a matching `certificates` row. This
branch updates the logic so that when `features.PrecertificateOCSP` is enabled
it will also try to find the DER needed for the OCSP update in the
`precertificates` table when there is no matching serial in the
`certificates` table.
When the `features.PrecertificateRevocation` feature flag is enabled the WFE2
will allow revoking certificates for a submitted precertificate. The legacy WFE1
behaviour remains unchanged (as before, (pre)certificates issued through the V1
API will be revocable with the V2 API).
Previously the WFE2 vetted the certificate from the revocation request by
looking up a final certificate by the serial number in the requested
certificate, and then doing a byte for byte comparison between the stored and
requested certificate.
Rather than adjust this logic to handle looking up and comparing stored
precertificates against requested precertificates (requiring new RPCs and an
additional round-trip) we choose to instead check the signature on the requested
certificate or precertificate and consider it valid for revocation if the
signature validates with one of the WFE2's known issuers. We trust the integrity
of our own signatures.
An integration test that performs a revocation of a precertificate (in this case
one that never had a final certificate issued due to SCT embedded errors) with
all of the available authentication mechanisms is included.
Resolves https://github.com/letsencrypt/boulder/issues/4414
This change adds two tables and two methods in the SA, to store precertificates
and serial numbers.
In the CA, when the feature flag is turned on, we generate a serial number, store it,
sign a precertificate and OCSP, store them, and then return the precertificate. Storing
the serial as an additional step before signing the certificate adds an extra layer of
insurance against duplicate serials, and also serves as a check on database availability.
Since an error storing the serial prevents going on to sign the precertificate, this decreases
the chance of signing something while the database is down.
Right now, neither table has read operations available in the SA.
To make this work, I needed to remove the check for a duplicate certificateStatus entry
when inserting a final certificate and its OCSP response. I also needed to remove
an error that can occur when expiration-mailer processes a precertificate that lacks
a final certificate. That error would otherwise have prevented further processing of
expiration warnings.
Fixes #4412
This change builds on #4417; please review that first for ease of review.
Include identifierType in queries so that the regID_identifier_status_expires_idx index is properly utilized. I did a once-over of the other authz2 queries to verify we are properly using their indexes as well, and everything else looks like it is working as intended.
In getAllOrderAuthorizationStatuses, we were using a transaction for a series
of SELECTs. Since these SELECTs don't need to be strongly consistent with
each other, that creates needless locking and round trips.
In the current SA code, we need to remember to call Rollback on any error.
If we don't, we'll leave dangling transactions, which are hard to spot but eventually
clog up the database and cause availability problems.
This change attempts to deal with rollbacks more rigorously, by implementing a
withTransaction function that takes a closure as input. withTransaction opens
a transaction, applies a context.Context to it, and then runs the closure. If the
closure returns an error, withTransaction rolls back and returns the error; otherwise
it commits and returns nil.
One of the quirks of this implementation is that it relies on the closure modifying
variables from its parent scope in order to return values. An alternate implementation
could define the return value of the closure as `(interface{}, error)` and have the calling
function do a type assertion. I'm seeking feedback on that; not sure yet which is cleaner.
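A minimal sketch of the helper, assuming gorp-style types (local interfaces stand in for the concrete DbMap and Transaction):

```go
import "context"

// beginner abstracts the subset of a gorp DbMap used here.
type beginner interface {
	Begin() (transaction, error)
}

// transaction abstracts a gorp *Transaction.
type transaction interface {
	Rollback() error
	Commit() error
}

// withTransaction opens a transaction, runs the closure, and commits or
// rolls back depending on the closure's error. Callers return values by
// assigning to variables in their own scope from inside the closure.
func withTransaction(ctx context.Context, db beginner, fn func(tx transaction) error) error {
	tx, err := db.Begin()
	if err != nil {
		return err
	}
	// (applying ctx to the transaction is omitted for brevity)
	if err := fn(tx); err != nil {
		// Roll back on any error so no transaction is left dangling;
		// the rollback error is deliberately secondary.
		_ = tx.Rollback()
		return err
	}
	return tx.Commit()
}
```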
This is a subset of the functions that need this treatment. I've got more coming, but
some of the changes break tests so I'm checking into why.
Updates #4337
For authzv1, this actually executes a SQL DELETE for the unused challenges
when an authorization is updated upon validation.
For authzv2, this doesn't perform a delete, but changes the authorizations that
are returned so they don't include unused challenges.
In order to test the flag for both authz storage models, I set the feature flag in
both config/ and config-next/.
Fixes #4352
In #4331 I introduced this new more efficient query for
GetOrderForNames, and commented about why we needed an ORDER BY... ASC
to efficiently use the index. However, the actual query did not match
the comment, and it used DESC. This fixes the query.
To demonstrate that the index is actually used with the ASC version,
here's the EXPLAIN output after filling up the table with a bunch of
failed orders:
MariaDB [boulder_sa_integration]> explain select orderID, registrationID FROM orderFqdnSets
-> WHERE setHash = UNHEX('B60FE34E4A6735D5A575D81C97F4DFED2102DC179B34252E4AA18F6E2A375C98')
-> AND expires > NOW() ORDER BY EXPIRES ASC LIMIT 1 \G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: orderFqdnSets
type: range
possible_keys: setHash_expires_idx
key: setHash_expires_idx
key_len: 37
ref: NULL
rows: 1500
Extra: Using index condition
1 row in set (0.000 sec)
MariaDB [boulder_sa_integration]> explain select orderID, registrationID FROM orderFqdnSets
-> WHERE setHash = UNHEX('B60FE34E4A6735D5A575D81C97F4DFED2102DC179B34252E4AA18F6E2A375C98')
-> AND expires > NOW() ORDER BY EXPIRES DESC LIMIT 1 \G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: orderFqdnSets
type: range
possible_keys: setHash_expires_idx
key: setHash_expires_idx
key_len: 37
ref: NULL
rows: 1500
Extra: Using where
1 row in set (0.000 sec)
In the case where the DB `Select()` returns a non-nil `err` result the
SA's `getAllOrderAuthorizationStatuses` function needs to ensure it
rolls back the transaction it opened or it will be leaked.
This rolls forward #4326 after it was reverted in #4328.
Resolves https://github.com/letsencrypt/boulder/issues/4329
The older query didn't have a `LIMIT 1` so it was returning multiple results,
but gorp's `SelectOne` was okay with multiple results when the selection was
going into an `int64`. When I changed this to a `struct` in #4326, gorp started
producing errors.
For this bug to manifest, an account needs to create an order, then fail
validation, twice in a row for a given domain name, then create an order once
more for the same domain name - that third request will fail because there are
multiple orders in the orderFqdnSets table for that domain.
Note that the bug condition doesn't happen when an account does three successful
issuances in a row, because finalizing an order (that is, issuing a certificate
for it) deletes the row in orderFqdnSets. Failing an authorization does not
delete the row in orderFqdnSets. I believe this was an intentional design
decision because an authorization can participate in many orders, and those
orders can have many other authorizations, so computing the updated state of
all those orders would be expensive (remember, order state is not persisted in
the DB but is calculated dynamically based on the authorizations it contains).
This wasn't detected in integration tests because we don't have any tests that
fail validation for the same domain multiple times. I filed an issue for an
integration test that would have incidentally caught this:
https://github.com/letsencrypt/boulder/issues/4332. There's also a more specific
test case in #4331.
This reverts commit 9fa360769e.
This commit can cause "gorp: multiple rows returned for: ..." under certain situations.
See #4329 for details of followup.
When there are a lot of potential orders to reuse, the query could scan
unnecessary rows, sometimes leading to timeouts. The new query used
when the FasterGetOrderForNames feature flag is enabled uses the
available index more effectively and adds a LIMIT clause.
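The shape of the improved query, consistent with the EXPLAIN output shown earlier in this log:

```go
const fasterGetOrderForNames = `
	SELECT orderID, registrationID
	FROM orderFqdnSets
	WHERE setHash = ?
	AND expires > ?
	ORDER BY expires ASC
	LIMIT 1`
```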
In #4179 we added a different method of counting the certificatesPerName
rate limit that can provide the correct behavior for exact public suffix
matches without the need for a separate RPC call. This cleans up the
separate code paths in the SA and RA that are no longer necessary.
This is a holdover from when one of the tables (I think orderToAuthz2?) used a string
authorization ID and we needed to convert in order to get what we wanted, but
apparently I never cleaned it up when we switched to integers for both tables.
What is really confusing here is why we ever needed a CONVERT in the first place
if Maria is happy to arbitrarily compare strings to integers itself... what fun.
Add a flag to explicitly disable orders containing authz2 authorizations. After looking at a handful of much more complex solutions, this feels like the best option. With NewAuthorizationSchema disabled and DisableAuthz2Orders enabled, any requests for orders that include authz2 authorizations will return a 404 (where previously they would return a 500).
Fixes #4263.
Also fixes a minor bug where `sa.UpdateRegistration` didn't properly check a
returned error. If an `errors.Duplicate` type error is returned in either `KeyRollover`/
`NewAccount` in wfe2 or `NewRegistration` in wfe during the update/insert step,
the account info/pointer will be returned instead of an internal server error.
Fixes #3000.
This will allow implementing sub-problems without creating a cyclic
dependency between `core` and `problems`.
The `identifier` package is somewhat small/single-purpose and in the
future we may want to move more "ACME" bits beyond the `identifier`
types into a dedicated package outside of `core`.