Permit all valid identifier types in `wfe.NewOrder` and `csr.VerifyCSR`.
Permit certs with just IP address identifiers to skip
`sa.addIssuedNames`.
Check that URI SANs are empty in `csr.VerifyCSR`, which was previously
missed.
Use a real (Let's Encrypt) IP address range in integration testing, to
let challtestsrv satisfy IP address challenges.
Fixes #8192
Depends on #8154
Add support for managing and querying rate limit overrides in the
database.
- Add `sa.AddRateLimitOverride` to insert or update a rate limit
override. This will be used by the Rate Limit Override Portal to commit
approved overrides to the database.
- Add `sa.DisableRateLimitOverride` and `sa.EnableRateLimitOverride` to
toggle override state. These will be used by the `admin` tool.
- Add `sa.GetRateLimitOverride` to retrieve a single override by limit
enum and bucket key. This will be used by the Rate Limit Portal to
prevent duplicate or downgrade requests but allow upgrade requests.
- Add `sa.GetEnabledRateLimitOverrides` to stream all currently enabled
overrides. This will be used by the rate limit consumers (`wfe` and
`ra`) to refresh the overrides in-memory.
- Implement test coverage for all new methods.
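For illustration, a minimal sketch of how a consumer (e.g. the `wfe` or `ra`)
might refresh its in-memory overrides from such a stream; the `overrideStream`
interface and `override` struct below are stand-ins, not the actual gRPC types:

```go
// Illustrative only: not Boulder's actual types or RPC interface.
package consumer

import (
	"errors"
	"io"
	"sync"
)

// override is a stand-in for one enabled rate limit override.
type override struct {
	limitEnum int64
	bucketKey string
	count     int64
}

// overrideStream abstracts "receive the next enabled override, or io.EOF".
type overrideStream interface {
	Recv() (*override, error)
}

// overrideCache is the in-memory view that a consumer would consult.
type overrideCache struct {
	mu        sync.RWMutex
	overrides map[string]override // keyed by bucket key
}

// refresh replaces the cache contents with everything read from the stream.
func (c *overrideCache) refresh(stream overrideStream) error {
	fresh := make(map[string]override)
	for {
		ov, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			return err
		}
		fresh[ov.bucketKey] = *ov
	}
	c.mu.Lock()
	defer c.mu.Unlock()
	c.overrides = fresh
	return nil
}
```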
Deprecate the "InsertAuthzsIndividually" feature flag, which has been
set to true in both Staging and Production. Delete the code guarded
behind that flag being false, namely the ability of the MultiInserter to
return the newly-created IDs from all of the rows it has inserted. This
behavior is being removed because it is not supported in MySQL / Vitess.
Fixes https://github.com/letsencrypt/boulder/issues/7718
Remove code using `certificatesPerName` & `newOrdersRL` tables.
Deprecate `DisableLegacyLimitWrites` & `UseKvLimitsForNewOrder` flags.
Remove legacy `ratelimit` package.
Delete these RA test cases:
- `TestAuthzFailedRateLimitingNewOrder` (rl:
`FailedAuthorizationsPerDomainPerAccount`)
- `TestCheckCertificatesPerNameLimit` (rl: `CertificatesPerDomain`)
- `TestCheckExactCertificateLimit` (rl: `CertificatesPerFQDNSet`)
- `TestExactPublicSuffixCertLimit` (rl: `CertificatesPerDomain`)
Rate limits in NewOrder are now enforced by the WFE, starting here:
5a9b4c4b18/wfe2/wfe.go (L781)
We collect a batch of transactions to check limits, check them all at
once, go through and find which one(s) failed, and serve the failure
with the Retry-After that's furthest in the future. All this code
doesn't really need to be tested again; what needs to be tested is that
we're returning the correct failure. That code is
`NewOrderLimitTransactions`, and the `ratelimits` package's tests cover
this.
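For illustration, a minimal sketch of the "furthest Retry-After" selection with
hypothetical types; the real logic lives in the `ratelimits` package:

```go
// Illustrative only: stand-in types, not the ratelimits package API.
package main

import (
	"fmt"
	"time"
)

// decision is a stand-in for one per-transaction rate limit result.
type decision struct {
	bucketKey  string
	allowed    bool
	retryAfter time.Time
}

// worstFailure returns the failed decision whose Retry-After is furthest in
// the future, or nil if every decision allowed the request.
func worstFailure(decisions []decision) *decision {
	var worst *decision
	for i := range decisions {
		d := &decisions[i]
		if d.allowed {
			continue
		}
		if worst == nil || d.retryAfter.After(worst.retryAfter) {
			worst = d
		}
	}
	return worst
}

func main() {
	now := time.Now()
	got := worstFailure([]decision{
		{"newOrdersPerAccount:123", true, now},
		{"certificatesPerDomain:example.com", false, now.Add(30 * time.Second)},
		{"certificatesPerFQDNSet:deadbeef", false, now.Add(2 * time.Minute)},
	})
	fmt.Println(got.bucketKey, "retry after", got.retryAfter.Sub(now))
}
```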
The public suffix handling behavior is tested by
`TestFQDNsToETLDsPlusOne`:
5a9b4c4b18/ratelimits/utilities_test.go (L9)
Some other RA rate limit tests were deleted earlier, in #7869.
Part of #7671.
Create a new method on the gorm rows object which runs a small closure
for every row retrieved from the database. Use this new method to remove
20 lines of boilerplate from five different SA methods and rocsp-tool.
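A minimal sketch of that pattern (a closure run once per retrieved row),
written against `database/sql` rather than the actual rows wrapper:

```go
// Illustrative only: the real method lives on Boulder's db rows wrapper.
package dbhelpers

import "database/sql"

// forEachRow calls do once per row, always closes rows, and returns the
// first error from the closure or from iteration.
func forEachRow(rows *sql.Rows, do func(*sql.Rows) error) error {
	defer rows.Close()
	for rows.Next() {
		if err := do(rows); err != nil {
			return err
		}
	}
	return rows.Err()
}
```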
Run staticcheck as a standalone binary rather than as a library via
golangci-lint. From the golangci-lint help output:
> staticcheck (megacheck): It's a set of rules from staticcheck. It's
> not the same thing as the staticcheck binary. The author of staticcheck
> doesn't support or approve the use of staticcheck as a library inside
> golangci-lint.
We decided to disable ST1000 which warns about incorrect or missing
package comments.
For SA4011, I chose to change the semantics[1] of the for loop rather
than ignoring the SA4011 lint for that line.
Fixes https://github.com/letsencrypt/boulder/issues/6988
1. https://go.dev/ref/spec#Continue_statements
This change replaces [gorp] with [borp].
The changes consist of a mass renaming of the import and comments / doc
fixups, plus modifications of many call sites to provide a
context.Context everywhere, since gorp newly requires this (this was one
of the motivating factors for the borp fork).
This also refactors `github.com/letsencrypt/boulder/db.WrappedMap` and
`github.com/letsencrypt/boulder/db.Transaction` to not embed their
underlying gorp/borp objects, but to have them as plain fields. This
ensures that we can only call methods on them that are specifically
implemented in `github.com/letsencrypt/boulder/db`, so we don't miss
wrapping any. This required introducing a `NewWrappedMap` method along
with accessors `SQLDb()` and `BorpDB()` to get at the internal fields
during metrics and logging setup.
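A minimal sketch of the "plain field instead of embedding" shape described
above; the inner type is abstracted behind a small interface here, and only
the accessor names (`NewWrappedMap`, `SQLDb()`, `BorpDB()`) come from the
description:

```go
// Illustrative only: a simplified shape, not Boulder's actual db package.
package db

import (
	"context"
	"database/sql"
)

// borpMap is a stand-in for the subset of the borp DbMap API this sketch uses.
type borpMap interface {
	SelectOne(ctx context.Context, holder interface{}, query string, args ...interface{}) error
}

// WrappedMap holds its inner objects as plain fields rather than embedding
// them, so callers can only reach methods that are explicitly wrapped here.
type WrappedMap struct {
	sqlDB *sql.DB
	inner borpMap
}

func NewWrappedMap(sqlDB *sql.DB, inner borpMap) *WrappedMap {
	return &WrappedMap{sqlDB: sqlDB, inner: inner}
}

// SQLDb and BorpDB expose the inner fields for metrics and logging setup.
func (m *WrappedMap) SQLDb() *sql.DB  { return m.sqlDB }
func (m *WrappedMap) BorpDB() borpMap { return m.inner }

// SelectOne is one example of an explicitly wrapped method, passing the
// now-required context through to the inner map.
func (m *WrappedMap) SelectOne(ctx context.Context, holder interface{}, query string, args ...interface{}) error {
	return m.inner.SelectOne(ctx, holder, query, args...)
}
```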
Fixes #6944
Enable the errcheck linter. Update the way we express exclusions to use
the new, non-deprecated, non-regex-based format. Fix all places where we
began accidentally violating errcheck while it was disabled.
This reverts commit fdfea0d469.
With a Go security release out this week we prefer to do a single
release on the new Go version rather than trying to deploy the new
go-sql-driver version.
- Add validation of input parameters as unquoted MariaDB identifiers, and
document the regex that does it.
- Accept a narrower interface (`Queryer`) for `Insert()`.
- Take a list of fields rather than a string containing multiple fields, to
make validation simpler. Rename `retCol` to `returningColumn`.
- Document safety properties and requirements.
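For illustration, a sketch of that kind of identifier validation; the exact
regex Boulder documents may differ from the pattern assumed here:

```go
// Illustrative only: the documented regex in the db package may differ.
package dbvalidate

import (
	"fmt"
	"regexp"
)

// Unquoted MariaDB identifiers are limited to ASCII letters, digits, '$'
// and '_', and may not consist solely of digits (assumed pattern).
var (
	validIdentifier = regexp.MustCompile(`^[0-9a-zA-Z$_]+$`)
	allDigits       = regexp.MustCompile(`^[0-9]+$`)
)

// validateIdentifier rejects anything that could not be safely interpolated
// into a query as an unquoted table or column name.
func validateIdentifier(name string) error {
	if !validIdentifier.MatchString(name) || allDigits.MatchString(name) {
		return fmt.Errorf("%q is not a valid unquoted MariaDB identifier", name)
	}
	return nil
}
```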
We use this pattern in several places: there is a query that needs to
have a variable number of placeholders (question marks) in it, depending
on how many items we are inserting or querying for. For instance, when
issuing a precertificate we add that precertificate's names to the
"issuedNames" table. To make things more efficient, we do that in a
single query, whether there is one name on the certificate or a hundred.
That means interpolating into the query string a series of question marks
matching the number of names.
We have a helper type MultiInserter that solves this problem for simple
inserts, but it does not solve the problem for selects or more complex
inserts, and we still have a number of places that generate their
sequence of question marks manually.
This change updates addIssuedNames to use MultiInserter. To enable that,
it also narrows the interface required by MultiInserter.Insert, so it's
easier to mock in tests.
This change adds the new function db.QuestionMarks, which generates e.g.
`?,?,?` depending on the input N.
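For illustration, a minimal sketch of that behavior; the real
`db.QuestionMarks` may differ in details such as how it handles `N <= 0`:

```go
// Illustrative only: questionMarks(3) == "?,?,?".
package dbhelpers

import "strings"

func questionMarks(n int) string {
	if n <= 0 {
		return ""
	}
	marks := make([]string, n)
	for i := range marks {
		marks[i] = "?"
	}
	return strings.Join(marks, ",")
}
```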
In a few places I had to rename a function parameter named `db` to avoid
shadowing the `db` package.
- Replace `-1` in return values with `0`. No callers were depending on
`-1`.
- Replace `count(` with `COUNT(` for the sake of readability.
- Replace `COUNT(1)` with `COUNT(*)` (https://mariadb.com/kb/en/count). Both
versions provide identical output, but let's standardize on what the docs
use.
Fixes #6494
Add checkedRedisSource, a new OCSP Source which gets
responses from Redis, gets metadata from the database, and
only serves the Redis response if it matches the authoritative
metadata. If there is a mismatch, it requests a new OCSP
response from the CA, stores it in Redis, and serves the new
response.
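For illustration, a minimal sketch of that flow; the `redisStore`,
`metadataSource`, and `ocspGenerator` interfaces below are stand-ins for the
actual dependencies:

```go
// Illustrative only: simplified stand-ins, not Boulder's responder types.
package responder

import "context"

type redisStore interface {
	Get(ctx context.Context, serial string) ([]byte, error)
	Store(ctx context.Context, serial string, resp []byte) error
}

type metadataSource interface {
	// Matches reports whether resp agrees with the authoritative metadata
	// stored in the database.
	Matches(ctx context.Context, serial string, resp []byte) (bool, error)
}

type ocspGenerator interface {
	GenerateResponse(ctx context.Context, serial string) ([]byte, error)
}

type checkedRedisSource struct {
	redis redisStore
	meta  metadataSource
	ca    ocspGenerator
}

// Response serves the Redis response only if it matches the database's
// metadata; otherwise it asks the CA for a fresh response, stores that in
// Redis, and serves it.
func (s *checkedRedisSource) Response(ctx context.Context, serial string) ([]byte, error) {
	cached, getErr := s.redis.Get(ctx, serial)
	if getErr == nil {
		ok, err := s.meta.Matches(ctx, serial, cached)
		if err != nil {
			return nil, err
		}
		if ok {
			return cached, nil
		}
	}
	fresh, err := s.ca.GenerateResponse(ctx, serial)
	if err != nil {
		return nil, err
	}
	if err := s.redis.Store(ctx, serial, fresh); err != nil {
		return nil, err
	}
	return fresh, nil
}
```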
This behavior is locked behind a new ROCSPStage3 feature flag.
Part of #6079
Create a new type `db.MappedSelector` which exposes a new
`Query` method. This method behaves similarly to gorp's
`SelectFoo` methods, in that it uses the desired result type to
look up the correct table to query and uses reflection to map
the table columns to the struct fields. It behaves similarly to
the stdlib's `sql.Query` in that it returns a `Rows` object which
can be iterated over to get one row of results at a time. And it
improves both of those by using generics, rather than `interface{}`,
to provide a nicely-typed calling interface.
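For illustration, a simplified sketch of the idea; the real `MappedSelector`
derives the column list from the result type via reflection, whereas this
sketch takes an explicit scan function:

```go
// Illustrative only: a simplified generics-based row mapper.
package db

import (
	"context"
	"database/sql"
)

// Rows yields one result of type T at a time, like sql.Rows but typed.
type Rows[T any] struct {
	inner *sql.Rows
	scan  func(*sql.Rows) (T, error)
}

func (r *Rows[T]) Next() bool   { return r.inner.Next() }
func (r *Rows[T]) Err() error   { return r.inner.Err() }
func (r *Rows[T]) Close() error { return r.inner.Close() }

// Get scans the current row into a new T.
func (r *Rows[T]) Get() (T, error) { return r.scan(r.inner) }

// query runs the given SQL and returns typed rows.
func query[T any](ctx context.Context, database *sql.DB, q string, scan func(*sql.Rows) (T, error), args ...interface{}) (*Rows[T], error) {
	rows, err := database.QueryContext(ctx, q, args...)
	if err != nil {
		return nil, err
	}
	return &Rows[T]{inner: rows, scan: scan}, nil
}
```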
Use this new type to simplify the existing streaming query in
`SerialsForIncident`. Similarly use the new type to simplify
rocsp-tool's and ocsp-updater's streams of `CertStatusMetadata`.
This new type will also be used by the crl-updater's upcoming
`GetRevokedCerts` streaming query.
Fixes #6173
We have decided that we don't like the `if err := call(); err != nil`
syntax, because it creates confusing scopes, but we have not cleaned up
all existing instances of that syntax. However, we have now found a
case where that syntax enables a bug: it caused readers to believe that
a later `err = call()` statement was assigning to an already-declared
`err` in the local scope, when in fact it was assigning to an
already-declared `err` in the parent scope of a closure. This caused our
ineffassign and staticcheck linters to be unable to analyze the
lifetime of the `err` variable, and so they did not complain when we
never checked the actual value of that error.
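For illustration, a contrived example of the hazard (hypothetical names, not
the actual code):

```go
// Illustrative only: shows the scoping hazard, not the real Boulder code.
package main

import (
	"errors"
	"fmt"
)

func setup() error { return nil }
func run() error   { return errors.New("run failed") }

func main() {
	var err error
	do := func() {
		// The one-line form declares a new err scoped to this if statement...
		if err := setup(); err != nil {
			fmt.Println("setup:", err)
			return
		}
		// ...so this assignment writes to main's err, not a local one, and
		// the linters lose track of whether it is ever checked.
		err = run()
	}
	do()
	// err holds run()'s error here, but nothing checks it; that is exactly
	// the miss the two-line style avoids.
	_ = err
}
```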
This change standardizes on the two-line error checking syntax
everywhere, so that we can more easily ensure that our linters are
correctly analyzing all error assignments.
Add a new method to the SA's gRPC interface which takes both an Order
and a list of new Authorizations to insert into the database, and adds
both (as well as the various ancillary rows) inside a transaction.
To enable this, add a new abstraction layer inside the `db/` package
that facilitates inserting many rows at once, as we do for the `authz2`,
`orderToAuthz2`, and `requestedNames` tables in this operation.
Finally, add a new codepath to the RA (and a feature flag to control it)
which uses this new SA method instead of separately calling the
`NewAuthorization` method multiple times. Enable this feature flag in
the config-next integration tests.
This should reduce the failure rate of the new-order flow by reducing
the number of database operations by coalescing multiple inserts into a
single multi-row insert. It should also reduce the incidence of new
authorizations being created in the database but then never exposed to
the subscriber because of a failure later in the new-order flow, both by
reducing failures overall and by adding those authorizations in a
transaction which will be rolled back if there is a later failure.
Fixes #5577
Previously, configuration of the boulder-janitor was split into
two places: the actual json config file (which controlled which
jobs would be enabled, and what their rate limits should be), and
the janitor code itself (which controlled which tables and columns
those jobs should query). This resulted in significant duplicated
code, as most of the jobs were identical except for their table
and column names.
This change abstracts away the query which jobs use to find work.
Instead of having each job type parse its own config and produce
its own work query (in Go code), now each job supplies just a few
key values (the table name and two column names) in its JSON config,
and the Go code assembles the appropriate query from there. We are
able to delete all of the files defining individual job types, and
replace them with a single slightly smarter job constructor.
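For illustration, a minimal sketch of the config-driven shape; the field names
and query template below are assumptions, not the janitor's actual schema:

```go
// Illustrative only: assumed field names and query template.
package janitor

import "fmt"

// JobConfig is what each job now supplies in JSON: a table plus the columns
// used to find expired work, instead of a hand-written Go query.
type JobConfig struct {
	Table         string `json:"table"`
	ExpiresColumn string `json:"expiresColumn"`
	IDColumn      string `json:"idColumn"`
}

// workQuery assembles the shared work query from the per-job values,
// replacing the per-job Go files that each used to build their own.
func workQuery(c JobConfig) string {
	return fmt.Sprintf(
		"SELECT %s FROM %s WHERE %s <= :cutoff AND %s > :startID ORDER BY %s LIMIT :limit",
		c.IDColumn, c.Table, c.ExpiresColumn, c.IDColumn, c.IDColumn)
}
```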
This enables further refactorings, namely:
* Moving all of the logic code into its own module;
* Ensuring that the exported interface of that module is safe (i.e.
that a client cannot create and run jobs without them being valid,
because the only exposed methods ensure validity);
* Collapsing validity checks into a single location;
* Various renamings.
This change adds two new test assertion helpers, `AssertErrorIs`
and `AssertErrorWraps`. The former is a wrapper around `errors.Is`,
and asserts that the error's wrapping chain contains a specific (i.e.
singleton) error. The latter is a wrapper around `errors.As`, and
asserts that the error's wrapping chain contains any error which is
of the given type; it also has the same unwrapping side effect as
`errors.As`, which can be useful for further assertions about the
contents of the error.
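For illustration, a minimal sketch of the two helpers' behavior; the
signatures below are in the spirit of the `test` package but may not match it
exactly:

```go
// Illustrative only: thin wrappers around errors.Is and errors.As.
package test

import (
	"errors"
	"testing"
)

// AssertErrorIs fails the test unless target appears in err's wrapping chain.
func AssertErrorIs(t *testing.T, err, target error) {
	t.Helper()
	if !errors.Is(err, target) {
		t.Fatalf("expected %q to wrap %q", err, target)
	}
}

// AssertErrorWraps fails the test unless some error in err's chain can be
// unwrapped into target; like errors.As, it also fills in target on success.
func AssertErrorWraps(t *testing.T, err error, target interface{}) {
	t.Helper()
	if !errors.As(err, target) {
		t.Fatalf("expected %q to wrap an error of type %T", err, target)
	}
}
```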
It also makes two small changes to our `berrors` package, namely
making `berrors.ErrorType` itself an error rather than just an int,
and giving `berrors.BoulderError` an `Unwrap()` method which
exposes that inner `ErrorType`. This allows us to use the two new
helpers above to make assertions about berrors, rather than
having to hand-roll equality assertions about their types.
Finally, it takes advantage of the two changes above to greatly
simplify many of the assertions in our tests, removing conditional
checks and replacing them with simple assertions.
New types and related infrastructure are added to the `db` package to allow
wrapping gorp DbMaps and Transactions.
The wrapped versions return a special `db.ErrDatabaseOp` error type when errors
occur. The new error type includes additional information such as the operation
that failed and the related table.
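For illustration, a minimal sketch of such an error type; the field names
below are assumptions based on the description:

```go
// Illustrative only: a simplified operation-aware database error.
package db

import "fmt"

// ErrDatabaseOp wraps an underlying error with the operation and table that
// produced it.
type ErrDatabaseOp struct {
	Op    string
	Table string
	Err   error
}

func (e ErrDatabaseOp) Error() string {
	return fmt.Sprintf("failed to %s %s: %s", e.Op, e.Table, e.Err)
}

// Unwrap lets errors.Is and errors.As reach the underlying error.
func (e ErrDatabaseOp) Unwrap() error { return e.Err }
```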
Where possible we determine the table based on the types of the gorp function
arguments. Where that isn't possible (e.g. with raw SQL queries) we try to use
a simple regexp approach to find the table name. This isn't great for general
SQL but works well enough for Boulder's existing SQL queries.
To get additional confidence that my regexps work for all of Boulder's queries,
I temporarily changed the `db` package's `tableFromQuery` function to panic if
the table couldn't be determined. I re-ran the full unit and integration test
suites with this configuration and saw no panics.
Resolves https://github.com/letsencrypt/boulder/issues/4559
This is a small clean-up I spotted while migrating the `WithTransaction` wrapper
out of the `sa` package into `db` during #4544.
The `admin-revoker` utility was using bare transactions with the `db.Rollback`
(prev `sa.Rollback`) helper function instead of the newly exported
`db.WithTransaction` wrapper. The latter is safer so we should use it here too.
After this change all of the external consumers of the `Rollback` function have
been switched to using `WithTransaction` so we can unexport `Rollback`.
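For illustration, a minimal sketch of the `WithTransaction` pattern using
plain `database/sql` types; Boulder's actual helper operates on its own
wrapped transaction type:

```go
// Illustrative only: simplified types, not Boulder's db.WithTransaction.
package db

import (
	"context"
	"database/sql"
)

// txFunc does the caller's work inside the transaction and returns a result.
type txFunc func(tx *sql.Tx) (interface{}, error)

// withTransaction begins a transaction, runs f, and commits on success or
// rolls back on failure, so callers cannot forget the rollback path.
func withTransaction(ctx context.Context, database *sql.DB, f txFunc) (interface{}, error) {
	tx, err := database.BeginTx(ctx, nil)
	if err != nil {
		return nil, err
	}
	result, err := f(tx)
	if err != nil {
		// Roll back and surface the original error; the rollback error is
		// secondary in this sketch.
		_ = tx.Rollback()
		return nil, err
	}
	if err := tx.Commit(); err != nil {
		return nil, err
	}
	return result, nil
}
```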
The `boulder-janitor` is extended to cleanup rows from the `orders` table that
have expired beyond the configured grace period, and the associated referencing
rows in `requestedNames`, `orderFqdnSets`, and `orderToAuthz2`.
To make implementing the transaction work for the deletions easier/consistent
I lifted the SA's `WithTransaction` code and associated functions to a new shared
`db` package. This also let me drop the one-off `janitorDb` interface from the
existing code.
There is an associated change to the `GRANT` statements for the `janitor` DB
user to allow it to find/delete the rows related to orders.
Resolves https://github.com/letsencrypt/boulder/issues/4527