Add support for managing and querying rate limit overrides in the
database.
- Add `sa.AddRateLimitOverride` to insert or update a rate limit
override. This will be used by the Rate Limit Override Portal to commit
approved overrides to the database.
- Add `sa.DisableRateLimitOverride` and `sa.EnableRateLimitOverride` to
toggle override state. These will be used by the `admin` tool.
- Add `sa.GetRateLimitOverride` to retrieve a single override by limit
enum and bucket key. This will be used by the Rate Limit Override Portal
to prevent duplicate or downgrade requests while allowing upgrade
requests.
- Add `sa.GetEnabledRateLimitOverrides` to stream all currently enabled
overrides. This will be used by the rate limit consumers (`wfe` and
`ra`) to refresh their in-memory overrides (see the sketch after this
list).
- Implement test coverage for all new methods.
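A minimal sketch of how a consumer such as the `wfe` or `ra` might drain the stream to refresh its in-memory overrides. The request/response message names (`GetEnabledRateLimitOverridesRequest`, `RateLimitOverrideResponse`), the `BucketKey` field, and the method being exposed on the read-only client are assumptions for illustration, not the actual protos:

```
package example

import (
	"context"
	"io"

	sapb "github.com/letsencrypt/boulder/sa/proto"
)

// refreshOverrides drains the GetEnabledRateLimitOverrides stream into an
// in-memory map keyed by bucket key. The request/response types and the
// BucketKey field are assumptions, not the actual proto definitions.
func refreshOverrides(ctx context.Context, sac sapb.StorageAuthorityReadOnlyClient) (map[string]*sapb.RateLimitOverrideResponse, error) {
	stream, err := sac.GetEnabledRateLimitOverrides(ctx, &sapb.GetEnabledRateLimitOverridesRequest{})
	if err != nil {
		return nil, err
	}
	overrides := make(map[string]*sapb.RateLimitOverrideResponse)
	for {
		ov, err := stream.Recv()
		if err == io.EOF {
			break // the SA has finished streaming
		}
		if err != nil {
			return nil, err
		}
		overrides[ov.BucketKey] = ov
	}
	return overrides, nil
}
```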
Instead, simply return DER bytes from `issuePrecertificate`, and accept
regular parameters to `issueCertificateForPrecertificate` (instead of a
proto message).
Also, move the lookup of the certificate profile up to
`IssueCertificate`, and pass the selected `*certProfileWithId` to both
`issuePrecertificate` and `issueCertificateForPrecertificate`.
Also, change `issueCertificateForPrecertificate` to just return DER, not
a `*corepb.Certificate` (of which most fields were already being
ignored).
Remove the deprecated `DnsNames` field from the `corepb.Order` proto
message. All users of this struct use `Identifiers` instead.
This unblocks future changes that will require `Order` users to handle
different identifier types.
Part of #7311
These methods are still preserved as-is for now, and still take proto
messages as arguments. But they are not exported as RPCs. Refactoring
the arguments will be a followup PR.
Part of #8039
Add `identifier` fields, which will soon replace the `dnsName` fields,
to:
- `corepb.Authorization`
- `corepb.Order`
- `rapb.NewOrderRequest`
- `sapb.CountFQDNSetsRequest`
- `sapb.CountInvalidAuthorizationsRequest`
- `sapb.FQDNSetExistsRequest`
- `sapb.GetAuthorizationsRequest`
- `sapb.GetOrderForNamesRequest`
- `sapb.GetValidAuthorizationsRequest`
- `sapb.NewOrderRequest`
Populate these `identifier` fields in every function that creates
instances of these structs.
Use these `identifier` fields instead of `dnsName` fields (at least
preferentially) in every function that uses these structs. When crossing
component boundaries, don't assume they'll be present, for
deployability's sake.
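For deployability, a rough sketch of what this looks like when building a request; the field and type names here are best-effort assumptions, not verified against the generated code:

```
package example

import (
	corepb "github.com/letsencrypt/boulder/core/proto"
	sapb "github.com/letsencrypt/boulder/sa/proto"
)

// newOrderRequest populates both the deprecated dnsNames field and the new
// identifiers field, so whichever component receives the request can read the
// field it understands during a mixed-version deploy.
func newOrderRequest(regID int64, names []string) *sapb.NewOrderRequest {
	idents := make([]*corepb.Identifier, 0, len(names))
	for _, name := range names {
		idents = append(idents, &corepb.Identifier{Type: "dns", Value: name})
	}
	return &sapb.NewOrderRequest{
		RegistrationID: regID,
		DnsNames:       names,  // deprecated, kept for deployability
		Identifiers:    idents, // preferred by readers that support it
	}
}
```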
Deployability note: Mismatched `cert-checker` and `sa` versions will be
incompatible because of a type change in the arguments to
`sa.SelectAuthzsMatchingIssuance`.
Part of #7311
Add a new boulder service, email-exporter, which uses the Pardot API
client added in #8016 and the email.Exporter gRPC service added in
#8017.
Add pardot-test-srv, a test-only service for mocking communication with
Salesforce OAuth and Pardot APIs in non-production environments. Since
Salesforce does not provide Pardot functionality in developer sandboxes,
pardot-test-srv must run in all non-production environments (e.g.,
sre-development and staging).
Integrate the email-exporter service with the WFE and modify
WFE.NewAccount and WFE.UpdateAccount to submit valid email contacts.
Ensure integration tests verify that contacts eventually reach
pardot-test-srv.
Update configuration where necessary to:
- Build pardot-test-srv as a standalone binary.
- Bring up pardot-test-srv and cmd/email-exporter for integration
testing.
- Integrate WFE with cmd/email-exporter when running test/config-next.
Closes #7966
Add a new RPC to the CA: `IssueCertificate` covers issuance of both the
precertificate and the final certificate. In between, it calls out to
the RA's new method `GetSCTs`.
The RA calls the new `CA.IssueCertificate` if the `UnsplitIssuance`
feature flag is true.
The RA had a metric that counted certificates by profile name and hash.
Since the RA doesn't receive a profile hash in the new flow, simply
record the total number of issuances.
Fixes https://github.com/letsencrypt/boulder/issues/7983
The SA had some logic (not yet in use) to return revoked certificates
either by temporal sharding (if `req.ShardIdx` is zero) or by explicit
sharding (if `req.ShardIdx` is nonzero).
This PR splits the function into two. The existing `GetRevokedCerts`
always does temporal sharding. The new `GetRevokedCertsByShard` always
does explicit sharding. Eventually only `GetRevokedCertsByShard` will be
necessary. This change was discussed in
https://github.com/letsencrypt/boulder/issues/7094#issuecomment-2587940962
and is a precursor to having the crl-updater call both methods, so we
can merge the results when generating CRLs.
Remove code using `certificatesPerName` & `newOrdersRL` tables.
Deprecate `DisableLegacyLimitWrites` & `UseKvLimitsForNewOrder` flags.
Remove legacy `ratelimit` package.
Delete these RA test cases:
- `TestAuthzFailedRateLimitingNewOrder` (rl:
`FailedAuthorizationsPerDomainPerAccount`)
- `TestCheckCertificatesPerNameLimit` (rl: `CertificatesPerDomain`)
- `TestCheckExactCertificateLimit` (rl: `CertificatesPerFQDNSet`)
- `TestExactPublicSuffixCertLimit` (rl: `CertificatesPerDomain`)
Rate limits in NewOrder are now enforced by the WFE, starting here:
5a9b4c4b18/wfe2/wfe.go (L781)
We collect a batch of transactions to check limits, check them all at
once, go through and find which one(s) failed, and serve the failure
with the Retry-After that's furthest in the future. All this code
doesn't really need to be tested again; what needs to be tested is that
we're returning the correct failure. That code is
`NewOrderLimitTransactions`, and the `ratelimits` package's tests cover
this.
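A minimal sketch (not the actual `NewOrderLimitTransactions` or batch-check code; the type and field names are stand-ins) of the "serve the failure with the furthest Retry-After" selection:

```
package example

import "time"

// decision is a stand-in for whatever the ratelimits package returns for each
// checked transaction; the real type and field names differ.
type decision struct {
	allowed bool
	retryIn time.Duration
}

// furthestRetryAfter picks, from a batch of decisions, the denial whose
// Retry-After is furthest in the future; that is the failure the WFE serves.
func furthestRetryAfter(decisions []decision) (decision, bool) {
	var worst decision
	var found bool
	for _, d := range decisions {
		if d.allowed {
			continue
		}
		if !found || d.retryIn > worst.retryIn {
			worst = d
			found = true
		}
	}
	return worst, found
}
```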
The public suffix handling behavior is tested by
`TestFQDNsToETLDsPlusOne`:
5a9b4c4b18/ratelimits/utilities_test.go (L9)
Some other RA rate limit tests were deleted earlier, in #7869.
Part of #7671.
Clean up how we handle identifiers throughout the Boulder codebase by
- moving the Identifier protobuf message definition from sa.proto to
core.proto;
- adding support for IP identifier to the "identifier" package;
- renaming the "identifier" package's exported names to be clearer; and
- ensuring we use the identifier package's helper functions everywhere
we can.
This will make future work to actually respect identifier types (such as
in Authorization and Order protobuf messages) simpler and easier to
review.
Part of https://github.com/letsencrypt/boulder/issues/7311
Have the WFE ask the RA for authorizations, rather than asking the SA
directly. This extra layer of indirection allows us to filter out
challenges which have been disabled, so that clients don't think they
can attempt challenges that we have disabled.
Also shuffle the order of challenges within the authz objects rendered
by the API. We used to have code which did this at authz creation time,
but of course that was completely ineffectual once we stored the
challenges as just a bitmap in the database.
Update the WFE unit tests to mock RA.GetAuthorization instead of
SA.GetAuthorization2. This includes making the mock more accurate, so
that (e.g.) valid authorizations contain valid challenges, and the
challenges have their correct types (e.g. "http-01" instead of just
"http"). Also update the OTel tracing test to account for the new RPC.
Part of https://github.com/letsencrypt/boulder/issues/5913
Find all gRPC fields which represent DNS Names -- sometimes called
"identifier", "hostname", "domain", "identifierValue", or other things
-- and unify their naming. This naming makes it very clear that these
values are strings which may be included in the SAN extension of a
certificate with type dnsName.
As we move towards issuing IP Address certificates, all of these fields
will need to be replaced by fields which carry both an identifier type
and value, not just a single name. This unified naming makes it very
clear which messages and methods need to be updated to support
non-dnsName identifiers.
Part of https://github.com/letsencrypt/boulder/issues/7647
Replace all of Boulder's usage of the Go stdlib "math/rand" package with
the newer "math/rand/v2" package which first became available in go1.22.
This package has an improved API and faster performance across the
board.
See https://go.dev/blog/randv2 and https://go.dev/blog/chacha8rand for
details.
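For illustration, a small example of the v2 API (not code from this change):

```
package example

import "math/rand/v2"

// pickOne shuffles a non-empty slice in place and returns a random element.
// With v2 the global source needs no explicit seeding, and IntN replaces the
// old Intn.
func pickOne(items []string) string {
	rand.Shuffle(len(items), func(i, j int) {
		items[i], items[j] = items[j], items[i]
	})
	return items[rand.IntN(len(items))]
}
```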
The name "now" was always misleading, because we never set the value to
be the actual current time, we always set it to be some time in the
future to avoid returning authzs which expire in the very near future.
Changing the name to "validUntil" matches the current naming in
GetPendingAuthorizationRequest.
Add the storage implementation for our new (account, hostname) pair
pausing feature.
- Add schema and model for the new paused table (sketched below)
- Add SA service methods for interacting with the paused table
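A rough sketch of the shape of the new table's model; the actual schema and column names in the migration may differ:

```
package example

import "time"

// pausedModel sketches one row of the paused table: an (account, hostname)
// pair, when it was paused, and when (if ever) it was unpaused.
type pausedModel struct {
	RegistrationID  int64      `db:"registrationID"`
	IdentifierValue string     `db:"identifierValue"` // the paused hostname
	PausedAt        time.Time  `db:"pausedAt"`
	UnpausedAt      *time.Time `db:"unpausedAt"` // nil while still paused
}
```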
Part of #7406
Part of #7475
Update the version of protoc-gen-go-grpc that we use to generate Go gRPC
code from our proto files, and update the versions of other gRPC tools
and libraries that we use to match. Turn on the new
`use_generic_streams` code generation flag to change how
protoc-gen-go-grpc generates implementations of our streaming methods,
from creating a wholly independent implementation for every stream to
using shared generic implementations.
Take advantage of this code-sharing to remove our SA "wrapper" methods,
now that they truly have the same signature as the SARO methods they
wrap. Also remove all references to the old-style stream names
(e.g. foopb.FooService_BarMethodClient) and replace them with the new
underlying generic names, for the sake of consistency. Finally, also
remove a few custom stream test mocks, replacing them with the generic
mocks.ServerStreamClient.
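As an illustration of the code-sharing the generic stream types enable (a sketch, not code from this PR):

```
package example

import (
	"io"

	"google.golang.org/grpc"
)

// drain reads any server-streaming RPC to completion. With the generic
// grpc.ServerStreamingClient type, one helper works for every streaming
// method, instead of needing one per generated per-method stream interface
// (e.g. foopb.FooService_BarMethodClient).
func drain[T any](stream grpc.ServerStreamingClient[T]) ([]*T, error) {
	var out []*T
	for {
		msg, err := stream.Recv()
		if err == io.EOF {
			return out, nil
		}
		if err != nil {
			return nil, err
		}
		out = append(out, msg)
	}
}
```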
Note that this PR does not change the names in //mocks/sa.go, to avoid
conflicts with work happening in the pursuit of
https://github.com/letsencrypt/boulder/issues/7476. Note also that this
PR updates the version of protoc-gen-go-grpc that we use to a specific
commit. This is because, although a new release of grpc-go itself has
been cut, the codegen binary is a separate Go module with its own
releases, and it hasn't had a new release cut yet. Tracking for that is
in https://github.com/grpc/grpc-go/issues/7030.
Replace "mocks.StorageAuthority" with "sapb.StorageAuthorityClient" in
our test mocks. This improves them by removing implementations of the
methods the tests don't actually need, instead of inheriting lots of
extraneous methods from the huge and cumbersome mocks.StorageAuthority.
This reduces our usage of mocks.StorageAuthority to only the WFE tests
(which create one in the frequently-used setup() function), which will
make refactoring those mocks in the pursuit of
https://github.com/letsencrypt/boulder/issues/7476 much easier.
Part of https://github.com/letsencrypt/boulder/issues/7476
* Adds `CertProfileName` to the CA's `capb.IssuePrecertificateResponse`
so the RA can receive the CA's configured default profile name for audit
logging/metrics. This is useful when the RA sends an empty string as the
profile name to the CA, but we want to know exactly which profile name
the CA chose, rather than just relying on comparing hashes between CA
and RA audit logs.
* Adds the profile name and hash to RA audit logs emitted after a
successful issuance.
* Adds new labels to the existing `new_certificates` metric exported by
the RA.
```
# HELP new_certificates A counter of new certificates including the certificate profile name and hexadecimal certificate profile hash
# TYPE new_certificates counter
new_certificates{profileHash="de4c8c8866ed46b1d4af0d79e6b7ecf2d1ea625e26adcbbd3979ececd8fbd05a",profileName="defaultBoulderCertificateProfile"} 2
```
Fixes https://github.com/letsencrypt/boulder/issues/7421
Upgrade from the old go-jose v2.6.1 to the newly minted go-jose v4.0.1.
Cleans up old code now that `jose.ParseSigned` can take a list of
supported signature algorithms.
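For example (a sketch of the new API shape, not the WFE's actual parsing code; the algorithm list is illustrative):

```
package example

import "github.com/go-jose/go-jose/v4"

// parseJWS shows the v4 ParseSigned signature: the caller passes the set of
// acceptable signature algorithms, so no separate algorithm check is needed
// after parsing.
func parseJWS(body string) (*jose.JSONWebSignature, error) {
	return jose.ParseSigned(body, []jose.SignatureAlgorithm{
		jose.RS256, jose.ES256, jose.ES384, jose.ES512,
	})
}
```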
Fixes https://github.com/letsencrypt/boulder/issues/7390
---------
Co-authored-by: Aaron Gable <aaron@letsencrypt.org>
Adds `certificateProfileName` to the `orders` database table. The
[maximum
length](https://github.com/letsencrypt/boulder/pull/7325/files#diff-a64a0af7cbf484da8e6d08d3eefdeef9314c5d9888233f0adcecd21b800102acR35)
of a profile name matches the `//issuance` package.
Adds a `MultipleCertificateProfiles` feature flag that, when enabled,
will store the certificate profile name from a `NewOrderRequest`. The
certificate profile name is allowed to be empty and the database will
treat that row as [NULL](https://mariadb.com/kb/en/null-values/). When
the SA retrieves this potentially NULL row, it will be cast as the
golang string zero value `""`.
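A minimal sketch of that NULL-to-empty-string behavior, using plain database/sql rather than the SA's actual DB layer:

```
package example

import (
	"context"
	"database/sql"
)

// profileNameForOrder scans a possibly-NULL certificateProfileName into a
// NullString; NullString.String is "" when the column was NULL.
func profileNameForOrder(ctx context.Context, db *sql.DB, orderID int64) (string, error) {
	var profile sql.NullString
	err := db.QueryRowContext(ctx,
		"SELECT certificateProfileName FROM orders WHERE id = ?", orderID,
	).Scan(&profile)
	if err != nil {
		return "", err
	}
	return profile.String, nil
}
```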
SRE ticket IN-10145 has been filed to perform the database migration and
enable the new feature flag. The migration must be performed before
enabling the feature flag.
Part of https://github.com/letsencrypt/boulder/issues/7324
When the order object retrieved from the SA contains a profile name,
propagate that into the request for the CA to issue a precertificate.
Similarly, when the CA's precertificate issuance response contains a
profile hash, propagate that into the request for the CA to issue the
corresponding final certificate.
Fixes https://github.com/letsencrypt/boulder/issues/7366
Use two existing SA methods, KeyBlocked and GetSerialsByKey, to replace
the direct database access previously used by the blockSPKIHash method.
This is less efficient than before -- it now streams the whole set of
affected serials rather than just counting them -- but doing so prevents
us from needing an additional SA method just for counting.
Also update the default mock StorageAuthority and
StorageAuthorityReadOnly provided by the mocks package to return actual
stream objects (which stream zero results) instead of nil, so that tests
can attempt to read from the resulting stream without getting a nil
pointer exception.
Part of https://github.com/letsencrypt/boulder/issues/7350
Add two new methods to the SA, GetSerialsByKey and GetSerialsByAccount,
which use the same query as the admin tool has previously used to get
serials matching a given SPKI hash or a given registration ID. These two
new gRPC methods read the database row-by-row and produce streams of
results to keep SA memory usage low.
Use these methods in the admin tool so it no longer needs a direct
database connection for these actions.
Part of https://github.com/letsencrypt/boulder/issues/7350
Add a new "GetLintPrecertificate" method to the SA's gRPC service. This
acts identically to the existing "GetCertificate", but returns the
linting precertificate created just prior to the actual precertificate
instead. This is useful for revocation, where we need to be able to act
on a serial even if the corresponding (pre)certificate was never issued
or never saved to the database.
Part of https://github.com/letsencrypt/boulder/issues/7135
This is a cleanup PR finishing the migration from int64 timestamps to
protobuf `*timestamppb.Timestamps` by removing all usage of the old
int64 fields. In the previous PR
https://github.com/letsencrypt/boulder/pull/7121 all fields were
switched to read from the protobuf timestamppb fields.
Adds a new case to `core.IsAnyNilOrZero` to check various properties of
a `*timestamppb.Timestamp`, reducing the visual complexity for
receivers.
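Roughly, the new case amounts to a check like this sketch (not the actual implementation, which handles many types):

```
package example

import "google.golang.org/protobuf/types/known/timestamppb"

// timestampIsNilOrZero sketches the kind of check the new case performs for a
// *timestamppb.Timestamp argument.
func timestampIsNilOrZero(ts *timestamppb.Timestamp) bool {
	return ts == nil || ts.AsTime().IsZero()
}
```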
Fixes https://github.com/letsencrypt/boulder/issues/7060
* Adds new `google.protobuf.Timestamp` fields to each .proto file where
we had been using `int64` fields as a timestamp.
* Updates relevant gRPC messages to populate the new
`google.protobuf.Timestamp` fields in addition to the old `int64`
timestamp fields.
* Added tests for each `<x>ToPB` and `PBTo<x>` function to ensure that
new fields passed into a gRPC message arrive as intended.
* Removed an unused error return from `PBToCert` and `PBToCertStatus`
and cleaned up each call site.
Built on-top of https://github.com/letsencrypt/boulder/pull/7069
Part 2 of 4 related to
https://github.com/letsencrypt/boulder/issues/7060
Run staticcheck as a standalone binary rather than as a library via
golangci-lint. From the golangci-lint help output:
> staticcheck (megacheck): It's a set of rules from staticcheck. It's
not the same thing as the staticcheck binary. The author of staticcheck
doesn't support or approve the use of staticcheck as a library inside
golangci-lint.
We decided to disable ST1000 which warns about incorrect or missing
package comments.
For SA4011, I chose to change the semantics[1] of the for loop rather
than ignoring the SA4011 lint for that line.
Fixes https://github.com/letsencrypt/boulder/issues/6988
1. https://go.dev/ref/spec#Continue_statements
Add two new methods, LeaseCRLShard and UpdateCRLShard, to the SA gRPC
interface. These methods work in concert both to prevent multiple
instances of crl-updater from stepping on each other's toes, and to lay
the groundwork for a less bursty version of crl-updater in the future.
Introduce a new database table, crlShards, which tracks the thisUpdate
and nextUpdate timestamps of each CRL shard for each issuer. It also has
a column "leasedUntil", which is also a timestamp. Grant the SA user
read-write access to this table.
LeaseCRLShard updates the leasedUntil column of the identified shard to
the given time. It returns an error if the identified shard's
leasedUntil timestamp is already in the future. This provides a
mechanism for crl-updater instances to "lick the cookie", so to speak,
marking CRL shards as "taken" so that multiple crl-updater instances
don't attempt to work on the same shard at the same time. Using a
timestamp has the added benefit that leases are guaranteed to expire,
ensuring that we don't accidentally fail to work on a shard forever.
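A minimal sketch of those semantics (not the SA's actual query; column names other than leasedUntil are guesses):

```
package example

import (
	"context"
	"database/sql"
	"errors"
	"time"
)

// leaseShard only succeeds if the shard's current lease has expired, so two
// crl-updater instances can't both take the same shard.
func leaseShard(ctx context.Context, db *sql.DB, issuerID int64, shardIdx int, until time.Time) error {
	res, err := db.ExecContext(ctx,
		"UPDATE crlShards SET leasedUntil = ? WHERE issuerID = ? AND idx = ? AND leasedUntil < ?",
		until, issuerID, shardIdx, time.Now())
	if err != nil {
		return err
	}
	n, err := res.RowsAffected()
	if err != nil {
		return err
	}
	if n == 0 {
		return errors.New("shard is already leased")
	}
	return nil
}
```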
LeaseCRLShard has a second mode of operation, when a range of potential
shards is given in the request, rather than a single shard. In this
mode, it returns the shard (within the given range) whose thisUpdate
timestamp is oldest. (Shards with no thisUpdate timestamp, including
because the requested range includes shard indices the database doesn't
yet know about, count as older than any shard with any thisUpdate
timestamp.) This allows crl-updater instances which don't care which
shard they're working on to do the most urgent work first.
UpdateCRLShard updates the thisUpdate and nextUpdate timestamps of the
identified shard. This closes the loop with the second mode of
LeaseCRLShard above: by updating the thisUpdate timestamp, the method
marks the shard as no longer urgently needing to be worked on.
IN-9220 tracks creating this table in staging and production.
Part of #6897
Make minor, non-user-visible changes to how we structure the probs
package. Notably:
- Add new problem types for UnsupportedContact and
UnsupportedIdentifier, which are specified by RFC8555 and which we will
use in the future, but haven't been using historically (see the sketch
after this list).
- Sort the problem types and constructor functions to match the
(alphabetical) order given in RFC8555.
- Rename some of the constructor functions to better match their
underlying problem types (e.g. "TLSError" to just "TLS").
- Replace the redundant ProblemDetailsToStatusCode function with simply
always returning a 500 if we haven't properly set the problem's
HTTPStatus.
- Remove the ability to use either the V1 or V2 error namespace prefix;
always use the proper RFC namespace prefix.
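For a sense of the shape of the new constructors, a sketch with stand-in types (the real code lives in the probs package and uses its ProblemType and namespace constants):

```
package example

import "net/http"

// ProblemDetails is a stand-in for probs.ProblemDetails.
type ProblemDetails struct {
	Type       string `json:"type"`
	Detail     string `json:"detail,omitempty"`
	HTTPStatus int    `json:"-"`
}

// UnsupportedContact sketches one of the new RFC 8555 problem constructors;
// the HTTP status chosen here is an assumption.
func UnsupportedContact(detail string) *ProblemDetails {
	return &ProblemDetails{
		Type:       "urn:ietf:params:acme:error:unsupportedContact",
		Detail:     detail,
		HTTPStatus: http.StatusBadRequest,
	}
}
```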
In order to get rid of the orphan queue, we want to make sure that
before we sign a precertificate, we have enough data in the database
that we can fulfill our revocation-checking obligations even if storing
that precertificate in the database fails. That means:
- We should have a row in the certificateStatus table for the serial.
- But we should not serve "good" for that serial until we are positive
the precertificate was issued (BRs 4.9.10).
- We should have a record in the live DB of the proposed certificate's
public key, so the bad-key-revoker can mark it revoked.
- We should have a record in the live DB of the proposed certificate's
names, so it can be revoked if we are required to revoke based on names.
The SA.AddPrecertificate method already achieves these goals for
precertificates by writing to the various metadata tables. This PR
repurposes the SA.AddPrecertificate method to write "proposed
precertificates" instead.
We already create a linting certificate before the precertificate, and
that linting certificate is identical to the precertificate that will be
issued except for the private key used to sign it (and the AKID). So for
instance it contains the right pubkey and SANs, and the Issuer name is
the same as the Issuer name that will be used. So we'll use the linting
certificate as the "proposed precertificate" and store it to the DB,
along with appropriate metadata.
In the new code path, rather than writing "good" for the new
certificateStatus row, we write a new, fake OCSP status string "wait".
This will cause us to return internalServerError to OCSP requests for
that serial (but we won't get such requests because the serial has not
yet been published). After we finish precertificate issuance, we update
the status to "good" with SA.SetCertificateStatusReady.
Part of #6665
This was mostly unused. The only caller was orphan-finder, which used it
to determine if a certificate was already in the database. But this is
not particularly important functionality, so I've removed it.
[RFC 8555 section
7.4](https://www.rfc-editor.org/rfc/rfc8555.html#section-7.4) states
regarding Orders in the "processing" state:
> "processing": The certificate is being issued. Send a POST-as-GET
> request after the time given in the Retry-After header field of
> the response, if any.
Add a Retry-After header when serving Order objects that are in the
"processing" state. This may help control clients which implement Order
polling but without any built-in backoff. The retry interval is
hard-coded to be 3s, slightly above our current 99th percentile Finalize
latency.
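In effect (a sketch, not the WFE's actual handler code):

```
package example

import "net/http"

// addRetryAfter attaches the hard-coded 3-second Retry-After header when the
// order being served is in the "processing" state.
func addRetryAfter(w http.ResponseWriter, orderStatus string) {
	if orderStatus == "processing" {
		w.Header().Set("Retry-After", "3")
	}
}
```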
Remove the GenerateOCSP and GenerateCRL methods from the
CertificateAuthority gRPC service. These methods are no longer called by
any clients; all clients use their respective OCSPGenerator and
CRLGenerator gRPC services instead.
In addition, remove the CRLGeneratorServer field from the caImpl, as it
no longer needs it to serve as a backing implementation for the
GenerateCRL pass-through method. Unfortunately, we can't remove the
OCSPGeneratorServer field until after ROCSPStage7 is complete, and the
CA is no longer generating an OCSP response during initial certificate
issuance.
Part of #6448
Delete the NewOrder and NewAuthorizations2 methods from the SA's gRPC
interface. These methods have been replaced by the unified
NewOrderAndAuthzs method, which performs both sets of insertions in a
single transaction.
Also update the SA and RA unittests to not rely on these methods for
setting up test data that other functions-under-test rely on. In most
cases, replace calls to NewOrder with calls to NewOrderAndAuthzs. In the
SA tests specifically, replace calls to NewAuthorizations2 with a
streamlined helper function that simply does the single necessary
database insert.
Fixes #6510
Fixes #5816