Staging and prod have both deployed the PerformValidationRPC feature flag. All running WFE/WFE2 instances are using the more accurately named PerformValidation RPC, so we can strip out the old UpdateAuthorization bits. The PerformValidationRPC feature flag remains until we clean up the staging/prod configs.
Resolves #3947 and completes the last of #3930
When the `SimplifiedVAHTTP01` feature flag is enabled, we need to
preserve query parameters when reconstructing a redirect URL for the
resolved IP address.
To add integration testing for this condition, the boulder-tools images
are updated to pull in an updated `pebble-challtestsrv` command that
tracks request history.
A new Python wrapper for the `pebble-challtestsrv` HTTP API is added to
centralize interacting with the challenge test server, both for adding
mock data and for fetching the history of HTTP requests that have been
processed.
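As a rough sketch of the shape of that wrapper (the management port and endpoint paths shown match `pebble-challtestsrv` defaults at the time of writing, but treat them as illustrative rather than authoritative):
```python
import json
import urllib.request

class ChallTestServer:
    """Thin client for the pebble-challtestsrv HTTP management API."""

    def __init__(self, base="http://localhost:8055"):
        self.base = base

    def _post(self, path, payload):
        req = urllib.request.Request(
            self.base + path,
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"})
        return urllib.request.urlopen(req).read()

    def add_a_record(self, host, addresses):
        # Mock DNS: answer A queries for host with these addresses.
        return self._post("/add-a", {"host": host, "addresses": addresses})

    def add_http_redirect(self, path, target):
        # Serve a redirect to target for HTTP-01 requests matching path.
        return self._post("/add-redirect", {"path": path, "targetURL": target})

    def http_request_history(self, host):
        # Return the recorded HTTP requests for host.
        return json.loads(self._post("/http-request-history", {"host": host}))
```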
`pebble-challtestsrv` gained a `-defaultIPv4` arg that we can use to simplify
the integration tests and to fix FAKE_DNS usage outside of integration
tests.
A new boulder-tools image with an updated `pebble-challtestsrv` is used,
and `test/startservers.py` is changed to populate `-defaultIPv4` via the
`FAKE_DNS` environment variable.
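A minimal sketch of what that `startservers.py` change looks like (variable and flag handling illustrative):
```python
import os

# Reuse the existing FAKE_DNS address as the default A-record answer for
# anything the integration tests haven't explicitly mocked.
fake_dns = os.environ.get("FAKE_DNS", "127.0.0.1")
challtestsrv_args = ["pebble-challtestsrv", "-defaultIPv4", fake_dns]
```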
Now that Pebble has a `pebble-challtestsrv` we can remove the `challtestsrv`
package and associated command from Boulder. I switched CI to use
`pebble-challtestsrv`. Notably this means that we have to add our expected mock
data using the HTTP management interface. The boulder-tools images are
regenerated to include the `pebble-challtestsrv` command.
Using this approach also allows separating the TLS-ALPN-01 and HTTPS HTTP-01
challenges by binding each challenge type in the `pebble-challtestsrv` to
a different interface, with both using the same VA HTTPS port. Mock DNS
directs the VA to the correct interface.
The load-generator command that was previously using the `challtestsrv` package
from Boulder is updated to use a vendored copy of the new
`github.com/letsencrypt/challtestsrv` package.
Vendored dependencies change in two ways:
1) Gomock is updated to the latest release (matching what the boulder-tools
image provides).
2) A couple of new subpackages in `golang.org/x/net/` are added by way of
transitive dependency through the challtestsrv package.
Unit tests are confirmed to pass for `gomock`:
```
~/go/src/github.com/golang/mock/gomock$ git log --pretty=format:'%h' -n 1
51421b9
~/go/src/github.com/golang/mock/gomock$ go test ./...
ok github.com/golang/mock/gomock 0.002s
? github.com/golang/mock/gomock/internal/mock_matcher [no test files]
```
For `/x/net` all tests pass except two `/x/net/icmp` `TestDiag` test cases
that we have agreed are OK to ignore.
Resolves https://github.com/letsencrypt/boulder/issues/3962 and
https://github.com/letsencrypt/boulder/issues/3951
To complete https://github.com/letsencrypt/boulder/issues/3956 the `challtestsrv` is updated so that its existing TLS-ALPN-01 challenge test server will serve HTTP-01 responses with a self-signed certificate when a non-TLS-ALPN-01 request arrives. This lets the TLS-ALPN-01 challenge server double as an HTTPS version of the HTTP challenge server. The `challtestsrv` now also supports adding/removing redirects that will be served to clients when requesting matching paths.
The existing chisel/chisel2 integration tests are updated to use the `challtestsrv` instead of starting their own standalone servers. This centralizes our mock challenge responses and lets us bind the `challtestsrv` to the VA's HTTP port in `startservers.py` without port clashes later on.
New integration tests are added for HTTP-01 redirect scenarios using the updated `challtestsrv` (see the sketch after this list). These test cases cover:
* Valid HTTP -> HTTP redirect
* Valid HTTP -> HTTPS redirect
* Invalid HTTP -> non-HTTP/HTTPS port redirect
* Invalid HTTP -> non-HTTP/HTTPS protocol scheme redirect
* Invalid HTTP -> bare IP redirect
* Invalid HTTP redirect loop
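As referenced above, a condensed sketch of one of these cases; `challSrv` is the Python wrapper described earlier, and `chisel_expect_success` stands in for the real chisel helpers:
```python
def test_http_redirect_to_https(challSrv, chisel_expect_success):
    # Hypothetical "valid HTTP -> HTTPS redirect" case: the HTTP-01 path
    # redirects to the HTTPS side of the challenge server, which is allowed.
    domain = "redirect-ok.example.com"
    challSrv.add_http_redirect(
        "/.well-known/acme-challenge/",
        "https://%s/.well-known/acme-challenge/" % domain)
    chisel_expect_success(domain)  # issuance should succeed
```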
The new integration tests shook out two fixes that were required for the legacy VA HTTP-01 code (afad22b) and one fix for the challtestsrv mock DNS (59b7d6d).
Resolves https://github.com/letsencrypt/boulder/issues/3956
The problem here was that we were doing revocation tests in the
v2 integration file that didn't block on getting the revoked OCSP
status. This meant that if the OCSP responder was running slow it
could execute a revoked-cert tick between resetting the akamai test
server in the next test and sending another purge request, which
meant we saw two purge requests when we expected to see one.
The fix was to add the blocking and purge checking/resetting to the
v2 tests. Doing this without duplicating a bunch of code required
factoring a number of functions out into a third helpers file. (I
think more code could be abstracted out to this file, but I just
wanted to start with what was needed for this change.)
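A sketch of the kind of blocking helper involved, polling with the `openssl ocsp` CLI until the responder serves a revoked status (paths and URL are placeholders):
```python
import subprocess
import time

def wait_ocsp_revoked(cert_file, issuer_file, ocsp_url, timeout=60):
    """Block until the OCSP responder reports the cert as revoked."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        out = subprocess.run(
            ["openssl", "ocsp", "-issuer", issuer_file, "-cert", cert_file,
             "-url", ocsp_url, "-no_nonce"],
            capture_output=True, text=True).stdout
        if "revoked" in out:
            return
        time.sleep(0.25)
    raise RuntimeError("timed out waiting for revoked OCSP status")
```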
The existing RA `UpdateAuthorization` RPC needs replacing for
two reasons:
1. The name isn't accurate - `PerformValidation` better captures
the purpose of the RPC.
2. The `core.Challenge` argument is superfluous since Key
Authorizations are not sent in the initiation POST from the client
anymore. The corresponding unmarshal and verification is now
removed. Notably this means broken clients that were POSTing
the wrong thing and failing pre-validation will now likely fail
post-validation.
To remove `UpdateAuthorization` the new `PerformValidation`
RPC is added alongside the old one. WFE and WFE2 are
updated to use the new RPC when the PerformValidationRPC
feature flag is enabled. We can remove
`UpdateAuthorization` and its associated wrappers once all
WFE instances have been updated.
Resolves https://github.com/letsencrypt/boulder/issues/3930
Removes a superfluous usage of `UpdatePendingAuthorization` in the RA (previously used to update the key authorization and to test whether the authorization is pending) and instead uses the result of the initial `GetAuthorization` call in the WFE.
Fixes #3923.
This adds support for the account-uri CAA parameter as specified by
section 3 of https://tools.ietf.org/html/draft-ietf-acme-caa-04, allowing
issuance to be restricted to one or more ACME accounts as specified by CAA
records.
We see a fair number of ACME accounts/registrations with contact
addresses for the RFC2606 Section 3 "Reserved Example Second Level
Domain Names" (`example.com`, `example.net`, `example.org`). These are
not real contact addresses and are likely the result of the user
copy-pasting example configuration. These users will miss out on
expiration emails and other subscriber communications :-(
This commit updates the RA's `validateEmail` function to reject any
contact addresses for reserved example domain names. The corresponding
unit test is updated accordingly.
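For illustration only (the real check lives in the RA's Go `validateEmail`), the rejection boils down to:
```python
# RFC 2606 Section 3 reserved example second-level domains.
FORBIDDEN_DOMAINS = {"example.com", "example.net", "example.org"}

def validate_email(address):
    domain = address.rsplit("@", 1)[-1].lower()
    if domain in FORBIDDEN_DOMAINS:
        raise ValueError(
            "contact address uses reserved example domain: " + domain)
```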
Resolves https://github.com/letsencrypt/boulder/issues/3719
When performing CAA checking, respect the validation-methods parameter (if
present) and restrict the allowed validation methods to those specified.
This allows a domain to restrict which validation methods can be used
with Let's Encrypt.
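Conceptually the parameter parsing works as sketched below (the real enforcement is in the VA's Go CAA code; this Python version is only illustrative):
```python
def allowed_validation_methods(caa_issue_value):
    """Parse a CAA issue value like
    'letsencrypt.org; validationmethods=http-01,dns-01' and return the
    permitted methods, or None when no restriction is present."""
    _issuer, _, params = caa_issue_value.partition(";")
    for param in params.split(";"):
        key, _, value = param.strip().partition("=")
        if key == "validationmethods":
            return set(value.split(","))
    return None

assert allowed_validation_methods(
    "letsencrypt.org; validationmethods=dns-01") == {"dns-01"}
```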
This is largely based on PR #3003 (by @lukaslihotzki), which landed and was
then reverted due to issue #3143. The bug that resulted in the previous
code being reverted has been addressed (likely inadvertently) by 76973d0f.
This implementation also includes integration tests for CAA validation-methods.
Fixes issue #3143.
We'd like to start using the DNS load balancer in the latest version of gRPC. That means putting all IPs for a service under a single hostname (or using a SRV record, but we're not taking that path). This change adds an sd-test-srv to act as our service discovery DNS service. It returns both Boulder IP addresses for any A lookup ending in ".boulder". This change also sets up the Docker DNS for our boulder container to defer to sd-test-srv when it doesn't know an answer.
sd-test-srv doesn't know how to resolve public Internet names like `github.com`. Resolving public names is required for the `godep-restore` test phase, so this change breaks out a copy of the boulder container that is used only for `godep-restore`.
This change implements a shim of a DNS resolver for gRPC, so that we can switch to DNS-based load balancing with the currently vendored gRPC, then when we upgrade to the latest gRPC we won't need a simultaneous config update.
Also, this change introduces a check at the end of the integration test that each backend received at least one RPC, ensuring that we are not sending all load to a single backend.
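Conceptually, sd-test-srv's answer logic is as simple as the sketch below (the real server is a Go binary; the addresses shown are placeholders for the two Boulder container IPs):
```python
BOULDER_IPS = ["10.77.77.77", "10.88.88.88"]  # illustrative addresses

def answer_a_query(qname):
    # Any A lookup ending in ".boulder" gets both Boulder IPs, so gRPC's
    # DNS-based balancer can spread load across them. Everything else is
    # unknown; Docker DNS only defers to sd-test-srv for these names.
    if qname.rstrip(".").endswith(".boulder"):
        return BOULDER_IPS
    return []
```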
The existing CAA tests only test the CAA checks on the validation path and
not the CAA rechecking in the case where an existing authorization is present
(but older than the 8-hour window).
This extends the CAA integration tests to also cover the CAA rechecking
code path, by reusing older authorizations and rejecting issuance via CAA.
The test_expired_authz_404() test is currently broken in two ways - firstly,
there is no way for it to distinguish between a 404 from an expired authz
and a 404 from a non-existent authz. Secondly, the test_expired_authz_purger()
test runs and wipes out all of the existing authorizations, including the one
that was set up from setup_seventy_days_ago(), before the expired test runs.
Avoid this by running the expired authorization purger test from later in main().
Also, add a minimal canary that will detect if all authorizations have been purged
(although this still does not guarantee that we got a 404 due to expiration).
load generator: send correct ACMEv2 Content-Type on POST.
This PR updates the Boulder load-generator to send the correct ACMEv2 Content-Type header when POSTing to the ACME server. This is required for ACMEv2, and without it all POST requests to a WFE2 running a test/config-next configuration result in 400 "malformed" errors. While only required by ACMEv2, this commit sends it for ACMEv1 requests as well. No harm no foul.
integration tests: allow running just the load generator.
Prior to this PR an omission in an if statement in integration-test.py meant that you couldn't invoke test/integration-test.py with just the --load argument to only run the load generator. This commit updates the if to allow this use case.
Previously we updated the RA's issueCertificateInner function to prefix errors returned from the CA with meaningful information about which CA RPC caused the failure. Unfortunately, by using fmt.Errorf to do this we were discarding the underlying error type. This can cause unexpected server-internal errors downstream if (e.g.) the CA rejects a CSR with a malformed error (see #3632).
This PR updates the issueCertificateInner error-message prefixing to maintain the error type if the underlying error is a berrors.BoulderError. An RA unit test with several mock CAs is added to test that the prefixing occurs as expected without loss of error type.
This PR also adds an integration test that ensures we reject a CSR with >100 names with a malformed error. This is not strictly related to this PR but since I wrote it while debugging the root issue I thought I'd include it. To allow this test to pass the pendingAuthorizationsPerAccount in test/rate-limit-policies.yml and associated tests had to be adjusted.
Resolves #3632
This allows these tools to easily be run in command line mode from
the host machine against a Boulder running inside docker-compose up
without modifying the FAKE_DNS field in docker-compose.yml. This
allows for easier testing of various conditions.
In publisher and in the integration test, check that SCTs are in a
reasonable range. Also, update CreateTestingSignedSCT (used by
ct-test-srv) to produce SCTs correctly, with a timestamp in Unix epoch
milliseconds.
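Since RFC 6962 SCT timestamps are Unix epoch milliseconds rather than seconds, a range check of the kind described might look like this (a sketch; the real check is in the Go publisher):
```python
import time

def sct_timestamp_reasonable(ts_ms, slack_ms=60 * 60 * 1000):
    # Reject timestamps that are in seconds by mistake (~1000x too small)
    # or otherwise far from the current time.
    now_ms = int(time.time() * 1000)
    return now_ms - slack_ms <= ts_ms <= now_ms + slack_ms
```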
- Remove acme-v2 test phase.
- Rename integration-test-v2.py to v2_integration, so it can be imported.
- Import all symbols from v2_integration before running test_*.
- In chisel2:
  - Rename DIRECTORY so it doesn't collide.
  - Incidental logging and error fixes.
- Merge v1 and v2 load testing into a single function.
- Run cert-checker just once, after all other test cases.
- In v2_integration:
  - Remove unnecessary imports.
  - Import chisel2 methods in the chisel2 namespace so they don't
    collide with chisel methods.
  - Remove main and shutdown code.
Previously, each time we defined a new test case in integration-test.py, we had to explicitly call it.
This made it easy to leave out cases without realizing it. After this change, we will automatically
find all functions named "test_" and call them. As a result, I found that we weren't calling
`test_revoked_by_account`, and it was failing. So I fixed it as part of this PR.
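The discovery itself amounts to scanning module globals, roughly like this (a sketch, not the exact integration-test.py code):
```python
import inspect

def run_all_tests(module):
    # Call every module-level function whose name starts with "test_",
    # so newly added cases can never be silently skipped.
    for name, func in sorted(vars(module).items()):
        if name.startswith("test_") and inspect.isfunction(func):
            func()
```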
Fixes #3518
Adds SCT embedding to the certificate issuance flow. When issuance is requested, a precertificate (the requested certificate, but poisoned with the critical CT extension) is issued and submitted to the required CT logs. Once the SCTs for the precertificate have been collected, a new certificate is issued with the poison extension replaced by an SCT list extension containing the retrieved SCTs.
Fixes #2244, fixes #3492 and fixes #3429.
This commit adds short 15s runs of the load generator against the V1 and
V2 APIs during the three integration test runs (v1 config, v1
config-next, and v2). 15s was selected because 30s produced too much
output and caused the build log to be truncated.
Presently the latency output is *not* being checked for errors. This was
too flaky in practice.
A fix for a race condition in the load-generator code itself related to
HTTP status code tracking is included in this commit.
The pending authz rate limit also needed to be adjusted to keep the
load-generator from failing requests after hitting 429s.
The test for the certificates_per_name rate limit uses an exact domain
name that has an override in the rate limit config file to have a limit
of 0. This works correctly most of the time. However, if that mechanism
fails once (due to some bug), future runs of the integration test will
continue to fail, because there will now be an issued certificate for
"lim.it" in the DB, and subsequent attempts will be considered renewals.
This change adds a random subdomain to the test, so that it's not
eligible for the renewal exemption.
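For example, a helper along these lines (name hypothetical) keeps each run on a never-before-issued name under the overridden domain:
```python
import binascii
import os

def random_exact_domain(base="lim.it"):
    # A fresh random label means no prior certificate exists for the exact
    # name, so the renewal exemption can never mask a rate-limit bug.
    return "%s.%s" % (binascii.hexlify(os.urandom(4)).decode(), base)
```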
Boulder is fairly noisy about gRPC connection errors. This is a mixed
blessing: Our gRPC configuration will try to reconnect until it hits
an RPC deadline, and most likely eventually succeed. In that case,
we don't consider those to really be errors. However, in cases where
a connection is repeatedly failing, we'd like to see errors in the
logs about connection failure, rather than "deadline exceeded." So
we want to keep logging of gRPC errors.
However, right now we get a lot of these errors logged during
integration tests. They make the output hard to read, and may disguise
more serious errors. So we'd like to avoid causing such errors in
normal integration test operation.
This change reorders the startup of Boulder components by their gRPC
dependencies, so each component's backends are likely to be up and
running before it starts. It also reverses that order for clean
shutdowns, and waits for each process to exit before signalling the
next one.
With these changes, I still got connection errors. Taking listenbuddy
out of the gRPC path fixed them. I believe the issue is that
listenbuddy is not a truly transparent proxy. In particular, it
accepts an inbound TCP connection before opening an outbound TCP
connection. If opening that outbound connection results in "connection
refused," it closes the inbound connection. That means gRPC sees a
"connection closed" (or "connection reset"?) rather than "connection
refused". I'm guessing it handles those cases differently, explaining
the different error results.
We've been using listenbuddy to trigger disconnects while Boulder is
running, to ensure that gRPC's reconnect code works. I think we can
probably rely on gRPC's reconnect to work. The initial problem that
led us to start testing this was a configuration problem; now that
we have the configuration we want, we should be fine and don't need
to keep testing reconnects on every integration test run.
Previously, there was a disagreement between WFE and CA as to what the correct
issuer certificate was. Consolidate on test-ca2.pem (h2ppy h2cker fake CA).
Also, the CA configs contained an outdated entry for "IssuerCert", which was not
being used: the CA configs now use an "Issuers" array to allow signing by
multiple issuer certificates at once (for instance when rolling intermediates).
Removed this outdated entry, and the CA config code that loaded it. I've
confirmed these changes match what is currently in production.
Added an integration test to check for this problem in the future.
Fixes #3309, thanks to @icing for bringing the issue to our attention!
This also includes changes from #3321 to clarify certificates for WFE.
Now, rather than LIMIT / OFFSET, this uses the highest id from the last batch in each new batch's query. This makes efficient use of the index, and means the database does not have to scan over a large number of non-expired rows before starting to find any expired rows.
This also changes the structure of the purge function to continually push ids for deletion onto a channel, to be processed by goroutines consuming that channel.
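The query pattern is standard keyset pagination; sketched below with illustrative table and column names:
```python
def expired_authz_ids(cursor, now, batch_size=1000):
    """Yield expired authorization ids batch by batch, starting each new
    query just past the highest id seen so far so the database walks the
    index instead of scanning skipped rows (as LIMIT/OFFSET would)."""
    last_id = 0
    while True:
        cursor.execute(
            "SELECT id FROM pendingAuthorizations"
            " WHERE expires < %s AND id > %s ORDER BY id LIMIT %s",
            (now, last_id, batch_size))
        rows = cursor.fetchall()
        if not rows:
            return
        for (authz_id,) in rows:
            yield authz_id
        last_id = rows[-1][0]
```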
Also, remove the --yes flag and prompting.
The go-grpc-prometheus package by default registers its metrics with Prometheus' global registry. In #3167, when we stopped using the global registry, we accidentally lost our gRPC metrics. This change adds them back.
Specifically, it adds two convenience functions, one for clients and one for servers, that makes the necessary metrics object and registers it. We run these in the main function of each server.
I considered adding these as part of StatsAndLogging, but the corresponding ClientMetrics and ServerMetrics objects (defined by go-grpc-prometheus) need to be subsequently made available during construction of the gRPC clients and servers. We could add them as fields on Scope, but this seemed like a little too much tight coupling.
Also, update go-grpc-prometheus to get the necessary methods.
```
$ go test github.com/grpc-ecosystem/go-grpc-prometheus/...
ok github.com/grpc-ecosystem/go-grpc-prometheus 0.069s
? github.com/grpc-ecosystem/go-grpc-prometheus/examples/testproto [no test files]
```
There were two bugs in #3167:
1) All process-level stats got prefixed with "boulder", which broke dashboards.
2) All request_time stats got dropped, because measured_http was using the prometheus DefaultRegisterer.
To fix, this PR plumbs through a scope object to measured_http, and uses an empty prefix when calling NewProcessCollector().
In 2fb247488f we consolidated the
`regModelV2` and `regModelv1` structs to one `regModel` type. In the
process we accidentally lost the explicit assignment of the
to-be-updated registration model's `LockCol` with the value of the
existing registration's `LockCol`. This meant that the Update was
occurring with a where clause `LockCol=0` (the default value).
In practice this meant that the first reg update would succeed (since
the reg row starts with LockCol=0) but any regs that had already been
updated once before would modify 0 rows in the update (because the where
clause on `LockCol` failed) and this in turn was translated into
a ServerInternal error since we knew the reg being updated did exist.
This commit updates the SA's `UpdateRegistration` function to properly
set the `LockCol` on the to-be-updated row.
This commit additionally adds an integration test for registration
contact information updating to ensure we don't fall into this trap in
the future.
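The underlying optimistic-locking contract, shown here as an illustrative SQL sketch rather than the actual gorp-generated statement:
```python
def update_registration(cursor, reg_id, new_contact, current_lock_col):
    # The WHERE clause must match the row's *current* LockCol (read from
    # the existing registration), not a constant like 0; a zero rowcount
    # means another writer got there first.
    cursor.execute(
        "UPDATE registrations SET contact = %s, LockCol = LockCol + 1"
        " WHERE id = %s AND LockCol = %s",
        (new_contact, reg_id, current_lock_col))
    if cursor.rowcount == 0:
        raise RuntimeError("stale LockCol: row was updated concurrently")
```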
Previously, CAA problems were lumped in under "ConnectionProblem" or
"Unauthorized". This should make things clearer and easier to differentiate.
Fixes #3043
Fixes #3020.
In order to write integration tests for some features, especially related to rate limiting, rechecking of CAA, and expiration of authzs, orders, and certs, we need to be able to fake the passage of time in integration tests.
To do so, this change switches out all clock.Default() instances for cmd.Clock(), which can be set manually with the FAKECLOCK environment variable. integration-test.py now starts up all servers once before the main body of tests, with FAKECLOCK set to a date 70 days ago, and does some initial setup for a new integration test case. That test case tries to fetch a 70-day-old authz URL, and expects it to 404.
In order to make this work, I also had to change a number of our test binaries to shut down cleanly in response to SIGTERM. Without that change, stopping the servers between the setup phase and the main tests caused startservers.check() to fail, because some processes exited with nonzero status.
Note: This is an initial stab at things, to prove out the technique. Long-term, I think we will want to use an idiom where test cases are classes that have a number of optional setup phases that may be run at e.g. 70 days prior and 5 days prior. This could help us avoid a proliferation of global state as we add more time-dependent test cases.
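A sketch of the setup phase described above; the exact timestamp format `cmd.Clock()` expects from FAKECLOCK isn't shown here, so treat the formatting as a placeholder:
```python
import datetime
import os

# Wind the clock back 70 days before starting the servers, so the setup
# phase can create authzs that will already be expired in the main run.
seventy_days_ago = datetime.datetime.utcnow() - datetime.timedelta(days=70)
os.environ["FAKECLOCK"] = seventy_days_ago.strftime("%a %b %d %H:%M:%S UTC %Y")
```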
As described in Boulder issue #2800 the implementation of the SA's
`countCertificates` function meant that the renewal exemption for the
Certificates Per Domain rate limit was difficult to work with. To
maximize allotted certificates clients were required to perform all new
issuances first, followed by the "free" renewals. This arrangement was
difficult to coordinate.
In this PR `countCertificates` is updated such that renewals are
excluded from the count reliably. To do so the SA takes the serials it
finds for a given domain from the issuedNames table and cross references
them with the FQDN sets it can find for the associated serials. With the
FQDN sets a second query is done to find all the non-renewal FQDN sets
for the serials, giving a count of the total non-renewal issuances to
use for rate limiting.
Resolves #2800
The `submissions_b` count in the integration test `test_ct_submission` function was initially being populated using `url_a` when it _should_ have been initialized using `url_b`, since it's the count of submissions to log b.
This resolves https://github.com/letsencrypt/boulder/issues/2723
I tested this fix with a branch that ran this test 12 times per build. Prior to this fix multiple builds out of 20 (~4-5) would fail. With this fix, all 20 passed.
In 18f4c5c we introduced a workaround for the CT submission integration
test to allow either exactly the expected number of CT log submissions,
or twice as many, to account for the case where the ocsp-updater and the
CA race. This didn't completely patch over the issue because the number
of submissions can fall between `n` and `2n`.
This commit updates the hack to be even hackier (twice as hacky or your
money back). Now we consider any value *between* `n` and `2n` as a test
pass.
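That is, the acceptance check becomes a simple interval test:
```python
def submissions_ok(expected, observed):
    # Tolerate the ocsp-updater/CA resubmission race: anything from the
    # expected count up to double it counts as a pass.
    return expected <= observed <= 2 * expected
```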
Instead of running it at the current time to clean out leftover cruft, run it with a FAKECLOCK of +1 year so that we catch everything that could get in the way.
Fixes #2148.
Instead of just doing a blanket `DELETE FROM ...` this changes the `expired-authz-purger` to select all of the expired IDs (for both pending and finalized authorizations) then loop over them deleting each and its associated challenges from their respective tables.
Local testing indicates the performance of this is not awful, but we should do a test run on staging to verify. If it ends up taking way too long to run there, the easiest optimization would be to turn the slice of IDs into a channel and run multiple workers looping over the channel deleting stuff, instead of just a single one.
Makes a few small integration test changes in order to facilitate deleting both pending and finalized authorizations.