Removes superfluous usage of `UpdatePendingAuthorization` in the RA to update the key authorization and to test whether the authorization is pending; instead, the WFE uses the result of its initial `GetAuthorization` call.
Fixes #3923.
Removes the checks for a handful of deployed feature flags in preparation for removing the flags entirely. Also moves all of the currently deprecated flags to a separate section of the flags list so they can be more easily removed once purged from production configs.
Fixes #3880.
Required a little bit of rework of the RA issuance flow (adding parsing of the precert to determine the expiration date, and moving final cert parsing before final cert submission) and of the RA tests, but I think it shouldn't create any issues...
Fixes #3197.
In db01b0b we removed email validation from the RA. This was the only
use of the `bdns` package by the RA and so we can go one step further
and delete the remaining setup, configuration and `bdns` fields.
Performing DNS lookups to check the A/AAAA/MX records of a provided contact e-mail address adds variability to the RA's NewRegistration/UpdateRegistration functions and requires that the RA be able to reach out to the EFN. Since this is simply a convenience to prevent some classes of registration errors we can remove it to improve performance and to tighten up our security posture slightly.
Resolves https://github.com/letsencrypt/boulder/issues/3849
If an order is expired its status is invalid and we don't need to fetch any of the order's authorizations. It's important to exit early in this case because expired authorizations may be purged from the DB: fetching the authzs for an expired order may return fewer authz objects than expected, triggering a 500 error response.
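A minimal sketch of the early exit (types and names are illustrative, not Boulder's actual code):

```go
package main

import (
	"fmt"
	"time"
)

// Order is a pared-down stand-in for Boulder's order model.
type Order struct {
	ID      int64
	Expires time.Time
	Status  string
}

// setOrderStatus marks an expired order invalid and returns before any
// authorization lookups happen, since expired authz rows may already
// have been purged from the DB.
func setOrderStatus(order *Order, now time.Time) {
	if order.Expires.Before(now) {
		order.Status = "invalid"
		return // skip fetching the order's authorizations entirely
	}
	// ...otherwise fetch the order's authorizations and derive the
	// status from them...
}

func main() {
	o := &Order{ID: 1, Expires: time.Now().Add(-time.Hour)}
	setOrderStatus(o, time.Now())
	fmt.Println(o.Status) // prints "invalid"
}
```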
Resolves https://github.com/letsencrypt/boulder/issues/3839
Things removed:
* features.EmbedSCTs (and all the associated RA/CA/ocsp-updater code etc)
* ca.enablePrecertificateFlow (and all the associated RA/CA code)
* sa.AddSCTReceipt and sa.GetSCTReceipt RPCs
* publisher.SubmitToCT and publisher.SubmitToSingleCT RPCs
Fixes #3755.
We see a fair number of ACME accounts/registrations with contact
addresses for the RFC2606 Section 3 "Reserved Example Second Level
Domain Names" (`example.com`, `example.net`, `example.org`). These are
not real contact addresses and are likely the result of the user
copy-pasting example configuration. These users will miss out on
expiration emails and other subscriber communications :-(
This commit updates the RA's `validateEmail` function to reject any
contact addresses for reserved example domain names. The corresponding
unit test is updated accordingly.
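For illustration, a self-contained sketch of the rejection logic, assuming a simple map of forbidden domains (names illustrative):

```go
package main

import (
	"fmt"
	"strings"
)

// forbiddenDomains lists the RFC 2606 §3 reserved example domains.
var forbiddenDomains = map[string]bool{
	"example.com": true,
	"example.net": true,
	"example.org": true,
}

func validateEmail(address string) error {
	at := strings.LastIndex(address, "@")
	if at < 0 {
		return fmt.Errorf("%q is not a valid e-mail address", address)
	}
	domain := strings.ToLower(address[at+1:])
	if forbiddenDomains[domain] {
		return fmt.Errorf("invalid contact domain: contact emails @%s are forbidden", domain)
	}
	return nil
}

func main() {
	fmt.Println(validateEmail("admin@example.com"))    // rejected
	fmt.Println(validateEmail("admin@real-site.test")) // nil
}
```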
Resolves https://github.com/letsencrypt/boulder/issues/3719
When performing CAA checking respect the validation-methods parameter (if
present) and restrict the allowed authorization methods to those specified.
This allows a domain to restrict authorization methods that can be used with
Let's Encrypt.
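A sketch of the restriction, assuming the `validationmethods=` parameter syntax from the CAA validation-methods draft (the function shape is illustrative, not Boulder's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// caaParameterPermits reports whether a CAA issue record value permits
// the validation method we used. With no validationmethods parameter,
// every method is permitted.
func caaParameterPermits(issueValue, usedMethod string) bool {
	parts := strings.Split(issueValue, ";")
	for _, p := range parts[1:] { // parts[0] is the issuer domain
		p = strings.TrimSpace(p)
		if !strings.HasPrefix(p, "validationmethods=") {
			continue
		}
		methods := strings.TrimPrefix(p, "validationmethods=")
		for _, m := range strings.Split(methods, ",") {
			if strings.TrimSpace(m) == usedMethod {
				return true
			}
		}
		return false // parameter present, method not listed
	}
	return true
}

func main() {
	v := "letsencrypt.org; validationmethods=dns-01"
	fmt.Println(caaParameterPermits(v, "http-01")) // false
	fmt.Println(caaParameterPermits(v, "dns-01"))  // true
}
```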
This is largely based on PR #3003 (by @lukaslihotzki), which landed and was
later reverted due to issue #3143. The bug that resulted in the previous
code being reverted has been addressed (likely inadvertently) by 76973d0f.
This implementation also includes integration tests for CAA validation-methods.
Fixes issue #3143.
When rechecking CAA, the existing code maps all failures to a CAAError.
This means that any other non-CAA failure (for example, an internal server
error) gets hidden.
Avoid this by reworking recheckCAA to return errors; if we find a
non-CAAError, we return it directly. Revise tests to cover both
situations.
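A self-contained sketch of the reworked flow (the `caaError` type is an illustrative stand-in for Boulder's CAAError):

```go
package main

import (
	"errors"
	"fmt"
)

// caaError marks failures that are genuinely CAA-related.
type caaError struct{ detail string }

func (e caaError) Error() string { return "CAA check failed: " + e.detail }

// recheckCAA returns whatever error actually occurred rather than
// flattening everything into a CAA failure.
func recheckCAA(name string) error {
	switch name {
	case "forbidden.example":
		return caaError{"issuance forbidden for " + name}
	case "flaky.example":
		return errors.New("internal server error during DNS lookup")
	}
	return nil
}

// checkAtIssuance surfaces non-CAA errors with their original type so
// they are not misreported as CAA problems.
func checkAtIssuance(name string) error {
	err := recheckCAA(name)
	if err != nil {
		if _, ok := err.(caaError); !ok {
			return err // e.g. an internal error, returned directly
		}
		return fmt.Errorf("rechecking CAA: %w", err)
	}
	return nil
}

func main() {
	fmt.Println(checkAtIssuance("forbidden.example"))
	fmt.Println(checkAtIssuance("flaky.example"))
}
```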
Updates issue #3143.
Remove various unnecessary uses of fmt.Sprintf - in particular:
- Avoid calls like t.Error(fmt.Sprintf(...)), where t.Errorf can be used directly.
- Use strconv when converting an integer to a string, rather than using
fmt.Sprintf("%d", ...). This is simpler and can also detect type errors at
compile time.
- Instead of using x.Write([]byte(fmt.Sprintf(...))), use fmt.Fprintf(x, ...).
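Runnable illustrations of the three substitutions:

```go
package main

import (
	"fmt"
	"os"
	"strconv"
)

func main() {
	n := 42

	// Before: fmt.Sprintf("%d", n). After: strconv.Itoa is simpler and
	// rejects non-integer arguments at compile time.
	s := strconv.Itoa(n)

	// Before: os.Stdout.Write([]byte(fmt.Sprintf("n=%s\n", s))).
	// After: fmt.Fprintf formats straight into the writer.
	fmt.Fprintf(os.Stdout, "n=%s\n", s)

	// And in tests, t.Error(fmt.Sprintf("got %d", n)) becomes simply
	// t.Errorf("got %d", n).
}
```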
* Randomize order of CT logs when submitting precerts so we maximize the chances we actually exercise all of the logs in a group and not just the first in the list.
* Add metrics for winning logs
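A sketch of the shuffle (type names illustrative); the winning log would then be recorded in a per-log counter metric:

```go
package main

import (
	"fmt"
	"math/rand"
)

type ctLog struct{ URI string }

// shuffledLogs copies a log group and shuffles the copy, so submissions
// exercise every log in the group over time rather than always hitting
// the first entry.
func shuffledLogs(group []ctLog) []ctLog {
	shuffled := make([]ctLog, len(group))
	copy(shuffled, group)
	rand.Shuffle(len(shuffled), func(i, j int) {
		shuffled[i], shuffled[j] = shuffled[j], shuffled[i]
	})
	return shuffled
}

func main() {
	group := []ctLog{{"https://log-a"}, {"https://log-b"}, {"https://log-c"}}
	// Submit in shuffled order; the first log to return an SCT "wins".
	fmt.Println(shuffledLogs(group))
}
```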
In #3614 we adjusted the `SA.NewOrder` function to conditionally call `ssa.statusForOrder` on the new order when `features.OrderReadyStatus` was enabled. Unfortunately this call to `ssa.statusForOrder` happened *before* the `req.BeganProcessing` field was initialized with a pointer to a `false` bool. The `ssa.statusForOrder` function (correctly) assumes that `req.BeganProcessing == nil` is illegal and doesn't correspond to a known status. This results in `NewOrder` requests returning a 500 error
of the form:
> Internal error - Error creating new order - Order XXX is in an invalid state. No state known for this order's authorizations
Our integration tests missed this because we didn't have a test case that issued for a set of names with one account, and then issued again for the same set of names with the same account.
This PR fixes the original bug by moving the `BeganProcessing` initialization before the call to `statusForOrder`. This PR also adds an integration test to catch this sort of bug again in the future.
Prior to the SA fix this test failed with the 500 server internal error observed by the Certbot team. With the SA fix in place the test passes again.
Finally, this PR disables the `OrderReadyStatus` feature flag in `test/config-next/sa.json`. Certbot's ACME implementation breaks when this flag is enabled (See https://github.com/certbot/certbot/issues/5856). Since Certbot runs integration tests against Boulder with config-next we should be courteous and leave this flag disabled until we are closer to being able to turn it on for staging/prod.
The `TotalCertificates` rate limit serves to ensure we don't
accidentally exceed our OCSP signing capacity by issuing too many
certificates within a fixed period. In practice this rate limit has been
fragile and the associated queries have been linked to performance
problems.
Since we now have better means of monitoring our OCSP signing capacity
this commit removes the rate limit and associated code.
* SA: Add Order "Ready" status, feature flag.
This commit adds the new "Ready" status to `core/objects.go` and updates
`sa.statusForOrder` to use it conditionally for orders with all valid
authorizations that haven't been finalized yet. This state is used
conditionally based on the `features.OrderReadyStatus` feature flag
since it will likely break some existing clients that expect status
"Processing" for this state. The SA unit test for `statusForOrder` is
updated with a "ready" status test case.
* RA: Enforce order ready status conditionally.
This commit updates the RA to conditionally expect orders that are being
finalized to be in the "ready" status instead of "pending". This is
conditionally enforced based on the `OrderReadyStatus` feature flag.
Along the way the SA was changed to calculate the order status for the
order returned in `sa.NewOrder` dynamically now that it could be
something other than "pending".
* WFE2: Conditionally enforce order ready status for finalization.
Similar to the RA the WFE2 should conditionally enforce that an order's
status is either "ready" or "pending" based on the "OrderReadyStatus"
feature flag.
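A sketch of that conditional check (names illustrative, not the WFE2's actual code):

```go
package main

import "fmt"

// finalizeAllowed sketches the conditional check: with the flag on, only
// "ready" orders may be finalized; with it off, the legacy "pending"
// state is expected instead.
func finalizeAllowed(status string, orderReadyStatus bool) error {
	want := "pending"
	if orderReadyStatus {
		want = "ready"
	}
	if status != want {
		return fmt.Errorf("order must be %q to be finalized, was %q", want, status)
	}
	return nil
}

func main() {
	fmt.Println(finalizeAllowed("pending", true)) // error with flag on
	fmt.Println(finalizeAllowed("ready", true))   // nil
}
```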
* Integration: Fix `test_order_finalize_early`.
This commit updates the V2 `test_order_finalize_early` test for the
"ready" status. A nice side-effect of the ready state change is that we
no longer invalidate an order when it is finalized too soon because we
can reject the finalization in the WFE. Subsequently the
`test_order_finalize_early` testcase is also smaller.
* Integration: Test classic behaviour w/o feature flag.
In the previous commit I fixed the integration test for the
`config/test-next` run that has the `OrderReadyStatus` feature flag set
but broke it for the `config/test` run without the feature flag.
This commit updates the `test_order_finalize_early` test to work
correctly based on the feature flag status in both cases.
Previously we updated the RA's issueCertificateInner function to prefix errors returned from the CA with meaningful information about which CA RPC caused the failure. Unfortunately, by using fmt.Errorf to do this we discarded the underlying error type. This can cause unexpected server internal errors downstream if (for example) the CA rejects a CSR with a malformed error (see #3632).
This PR updates the issueCertificateInner error message prefixing to maintain the error type when the underlying error is a berrors.BoulderError. An RA unit test with several mock CAs is added to check that the prefixing occurs as expected without loss of error type.
This PR also adds an integration test that ensures we reject a CSR with >100 names with a malformed error. This is not strictly related to this PR, but since I wrote it while debugging the root issue I thought I'd include it. To allow this test to pass, the pendingAuthorizationsPerAccount limit in test/rate-limit-policies.yml and associated tests had to be adjusted.
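A self-contained sketch of the type-preserving prefixing, with a simplified stand-in for `berrors.BoulderError`:

```go
package main

import "fmt"

// ErrorType drives the problem-document mapping downstream.
type ErrorType int

const Malformed ErrorType = iota

// BoulderError is a simplified stand-in for berrors.BoulderError.
type BoulderError struct {
	Type   ErrorType
	Detail string
}

func (e *BoulderError) Error() string { return e.Detail }

// wrapError prefixes an error with the name of the failed CA RPC. When
// the underlying error is typed, the result keeps the same Type so it is
// not downgraded to a server internal error downstream.
func wrapError(err error, prefix string) error {
	if berr, ok := err.(*BoulderError); ok {
		return &BoulderError{Type: berr.Type, Detail: fmt.Sprintf("%s: %s", prefix, berr.Detail)}
	}
	return fmt.Errorf("%s: %s", prefix, err)
}

func main() {
	err := wrapError(&BoulderError{Malformed, "CSR has too many names"}, "issuing precertificate")
	fmt.Println(err, err.(*BoulderError).Type == Malformed) // type preserved
}
```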
Resolves #3632
This commit updates the `boulder-ra` and `boulder-ca` commands to refuse
to start if their configured `MaxNames` is 0 (the default value). This
should always be set to a positive number.
This commit also updates `csr/csr.go` to always apply the max names
check since it will never be 0 after the change above.
Also refactor `FailOnError` to pull out a separate `Fail` function.
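A sketch of the startup guard, using a `Fail` helper of the sort described above (the config shape is illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// Fail prints the message and exits non-zero; FailOnError becomes a thin
// wrapper that calls Fail when err != nil.
func Fail(msg string) {
	fmt.Fprintln(os.Stderr, msg)
	os.Exit(1)
}

type config struct{ MaxNames int }

func main() {
	var c config // zero value: MaxNames == 0
	if c.MaxNames == 0 {
		Fail("Error in config: MaxNames must not be 0")
	}
}
```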
Related to https://github.com/letsencrypt/boulder/issues/3632
This PR updates the RA such that certificateRequestEvent objects created during issuance and written to the audit log as JSON also include a new Authorizations field. This field is a map of the form map[string]certificateRequestAuthz and can be used to map from an identifier name appearing in the associated certificate to a certificateRequestAuthz object. Each of the certificateRequestAuthz objects holds an authorization ID and the type of challenge that made the authorization valid.
Example Audit log output (with the JSON pulled out and pretty-printed):
{
  "ID": "0BjPk94KlxExRRIQ3kslRVSJ68KMaTh416chRvq0wyA",
  "Requester": 666,
  "SerialNumber": "ff699d91cab5bc84f1bc97fc71e4e27abc1a",
  "VerifiedFields": ["subject.commonName", "subjectAltName"],
  "CommonName": "rand.44634cbf.xyz",
  "Names": ["rand.44634cbf.xyz"],
  "NotBefore": "2018-03-28T19:50:07Z",
  "NotAfter": "2018-06-26T19:50:07Z",
  "RequestTime": "2018-03-28T20:50:07.234038859Z",
  "ResponseTime": "2018-03-28T20:50:07.278848954Z",
  "Authorizations": {
    "rand.44634cbf.xyz": {
      "ID": "jGt37Rnvfr0nhZn-wLkxrQxc2HBfV4t6TSraRiGnNBM",
      "ChallengeType": "http-01"
    }
  }
}
Resolves #3253
Prior to this commit an order's expiry was set based on
ra.orderLifetime, while pending and valid authorization expiries were set
based on ra.pendingAuthorizationLifetime and
ra.authorizationLifetime. Since orders reused existing valid/pending
authorizations this could lead to a case where an order has an expiry
beyond the associated authorization expiries. In this case, when an
authorization expires the order becomes inactionable and the extra order
lifetime is not useful.
This commit addresses this problem in two ways:
1. The SA GetAuthorizations function used to find authzs to reuse for
ra.NewOrder is adjusted to only return authorizations at least 24h
away from expiry.
2. Order expiry is now calculated by the RA in newOrder
as the min of the order's own expiry or the soonest
authorization expiry. This properly reflects the order's true
lifetime based on the authorization lifetime.
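A sketch of the new expiry calculation (the function shape is illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// orderExpiry returns the earlier of the order's own configured lifetime
// and the soonest expiry among the reused authorizations, so the order
// never claims to outlive the authzs it depends on.
func orderExpiry(now time.Time, orderLifetime time.Duration, authzExpiries []time.Time) time.Time {
	expiry := now.Add(orderLifetime)
	for _, e := range authzExpiries {
		if e.Before(expiry) {
			expiry = e
		}
	}
	return expiry
}

func main() {
	now := time.Now()
	authzes := []time.Time{now.Add(48 * time.Hour), now.Add(30 * time.Hour)}
	fmt.Println(orderExpiry(now, 7*24*time.Hour, authzes)) // == now + 30h
}
```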
The RA/SA unit tests are updated accordingly.
Resolves #3498
Adds SCT embedding to the certificate issuance flow. When issuance is requested, a precertificate (the requested certificate, but poisoned with the critical CT extension) is issued and submitted to the required CT logs. Once the SCTs for the precertificate have been collected, a new certificate is issued with the poison extension replaced by an SCT list extension containing the retrieved SCTs.
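At the heart of the flow is swapping the poison extension for the SCT list extension between the precertificate and the final certificate; a sketch using the RFC 6962 OIDs (helper types illustrative, not Boulder's actual API):

```go
package issuance

import "encoding/asn1"

var (
	// Critical precertificate poison extension, RFC 6962 §3.1.
	oidPoison = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 11129, 2, 4, 3}
	// Embedded SCT list extension, RFC 6962 §3.3.
	oidSCTList = asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 11129, 2, 4, 2}
)

type extension struct {
	ID       asn1.ObjectIdentifier
	Critical bool
	Value    []byte
}

// embedSCTs swaps the poison extension for an SCT list extension on the
// way from the precertificate template to the final certificate.
func embedSCTs(exts []extension, sctList []byte) []extension {
	out := make([]extension, 0, len(exts))
	for _, e := range exts {
		if e.ID.Equal(oidPoison) {
			out = append(out, extension{ID: oidSCTList, Value: sctList})
			continue
		}
		out = append(out, e)
	}
	return out
}
```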
Fixes #2244, fixes #3492 and fixes #3429.
Requesting a certificate with "*.example.com" and "www.example.com" as
separate SANs doesn't make sense, because "www.example.com" is covered
by the wildcard.
#3524
This commit updates the RA to make the notion of submitting
a KeyAuthorization value as part of the ra.UpdateAuthorization call
optional. If set, the value is checked against the expected key
authorization and an error is returned if it doesn't match. If it isn't set
the RA populates the field with the computed authorization for the VA to
enforce against the value it sees in challenges. This retains the legacy
behaviour of the V1 API. The V2 API will never unmarshal a provided
key authorization.
The ACMEv2/WFEv2 prepChallengeForDisplay function is updated to strip
the ProvidedKeyAuthorization field before sending the challenge object
back to a client. ACMEv1/WFEv1 continue to return the KeyAuthorization
in challenges to avoid breaking clients that are relying on this legacy
behaviour.
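A sketch of the stripping step (the struct is pared down from core.Challenge):

```go
package wfe2

// challenge is a pared-down stand-in for core.Challenge.
type challenge struct {
	Type                     string
	Token                    string
	ProvidedKeyAuthorization string
}

// prepChallengeForDisplay blanks the computed key authorization before a
// challenge is serialized back to a V2 client; the V1 WFE keeps
// returning it for clients that depend on the legacy behaviour.
func prepChallengeForDisplay(ch *challenge) {
	ch.ProvidedKeyAuthorization = ""
}
```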
For deployability ease this commit retains the name of the
core.Challenge.ProvidedKeyAuthorization field even though it should
be called core.Challenge.ComputedKeyAuthorization now that it isn't
set based on the client's provided key authz. This will be easier as
a follow-up change.
Resolves #3514
This commit adds short 15s runs of the load generator against the V1 and
V2 APIs during the three integration test runs (v1 config, v1
config-next, and v2). 15s was selected because 30s caused too much
output and the build log to be truncated.
Presently the latency output is *not* being checked for errors. This was
too flaky in practice.
A fix for a race condition in the load-generator code itself related to
HTTP status code tracking is included in this commit.
The pending authz rate limit also needed to be adjusted to keep the
load-generator from failing requests after hitting 429s.
This race was uncovered by running the load generator as part of our CI.
Also, update ra_test.go. It was previously testing that the returned authz
and the stored authz should be identical, which is not actually a property
of UpdateAuthorization; in general, they will not be identical.
Previously we introduced the concept of a "pending orders per account
ID" rate limit. After struggling with making an implementation of this
rate limit perform well we reevaluated the problem and decided a "new
orders per account per time window" rate limit would be a better fit for
ACMEv2 overall.
This commit introduces the new newOrdersPerAccount rate limit. The RA
now checks this before creating new pending orders in ra.NewOrder. It
does so after order reuse takes place ensuring the rate limit is only
applied in cases when a distinct new pending order row would be created.
To accomplish this a migration for a new orders field (created) and an
index over created and registrationID is added. It would be possible to
use the existing expires field for this like we've done in the past, but that
was primarily to avoid running a migration on a large table in prod. Since
we don't have that problem yet for V2 tables we can Do The Right Thing
and add a column.
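A sketch of the check the RA performs before creating the order, counting via a query over the new created column (names and shapes illustrative):

```go
package ra

import (
	"errors"
	"time"
)

// countOrdersFunc counts orders a registration created inside a window,
// e.g. backed by a query over the new `created` column and its index.
type countOrdersFunc func(regID int64, earliest, latest time.Time) (int, error)

// checkNewOrdersPerAccountLimit refuses to create another order once the
// account has hit the threshold for the configured window.
func checkNewOrdersPerAccountLimit(count countOrdersFunc, regID int64, now time.Time, window time.Duration, threshold int) error {
	n, err := count(regID, now.Add(-window), now)
	if err != nil {
		return err
	}
	if n >= threshold {
		return errors.New("too many new orders recently")
	}
	return nil
}
```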
For deployability the deprecated pendingOrdersPerAccount code & SA
gRPC bits are left around. A follow-up PR will be needed to remove
those (#3502).
Resolves #3410
Add a set of logs which will be submitted to but not relied on for their SCTs;
this allows us to test submissions to a particular log or submit to a log which
is not yet approved by a browser/root program.
Also add a feature which stops cancellation of remaining submissions when racing
to get an SCT from a group of logs.
Additionally add an informational log that always times out in config-next.
Fixes #3464 and fixes #3465.
Removes usage of the `EnforceChallengeDisable` feature. The feature itself is not removed as it is still configured in staging/production; once that is fixed I'll submit another PR removing the actual flag.
This keeps the behavior that when authorizations are retrieved from the SA they have their challenges populated, because that seems to make the most sense to me? It also retains TLS re-validation.
Fixes #3441.
Prior to this commit a logical error in the RA's `NewOrder` caused
a safety check that prevents authorization reuse with a non-wildcard
authz for a wildcard name to not work. This commit adds a test for the
condition that the safety check is designed for and fixes the logical
error. Prior to fixing the logical error the test fails. With the
corrected safety check the test passes.
Prior to this commit when building up the authorizations for a new-order
request we looked for any unexpired pending/valid authorizations owned
by the account and used them for the order. This allows a client to use
the V1 new-authz endpoint in combination with the V2 new-order endpoint
and we do not want to support this behaviour. All V2 authorizations
should be sourced from other V2 orders. This commit implements a new
parameter for the SA's getAuthorizations function that allows filtering
out legacy V1 authorizations by doing a JOIN on the order to
authorizations join table.
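A sketch of the shape of the filtered query (table and column names are illustrative, not the actual schema):

```go
package sa

// authzForV2OrdersOnly sketches the filtered lookup: joining through the
// order-to-authz table keeps only authorizations referenced by some V2
// order, excluding authzs created via the V1 new-authz endpoint.
const authzForV2OrdersOnly = `
SELECT authz.* FROM authz
  JOIN orderToAuthz ON orderToAuthz.authzID = authz.ID
 WHERE authz.registrationID = :regID
   AND authz.expires > :now
   AND authz.status IN ('pending', 'valid')`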
Resolves #3328
This commit resolves the case where an error occurs during finalization.
Prior to this commit if an error (expected or otherwise) occurred after
setting an order to status processing at the start of order
finalization the order would be stuck processing forever.
The SA now has a `SetOrderError` RPC that can be used by the RA to
persist an error onto an order. The order status calculation can use
this error to decide if the order is invalid. The WFE is updated to
write the error to the order JSON when displaying the order information.
Prior to this commit the order protobuf had the error field as
a `[]byte`. It doesn't seem like this is the right decision, we have
a specific protobuf type for ProblemDetails and so this commit switches
the error field to use it. The conversion to/from `[]byte` is done with
the model by the SA.
An integration test is included that prior to this commit left an order
in a stuck processing state. With this commit the integration test
passes as expected.
Resolves https://github.com/letsencrypt/boulder/issues/3403
This PR is a rework of what was originally https://github.com/letsencrypt/boulder/pull/3382, integrating the design feedback proposed by @jsha: https://github.com/letsencrypt/boulder/pull/3382#issuecomment-359912549
This PR removes the stored Order status field and replaces it with a value that is calculated on-the-fly by the SA when fetching an order, based on the order's associated authorizations.
In summary (and order of precedence):
* If any of the order's authorizations are invalid, the order is invalid.
* If any of the order's authorizations are deactivated, the order is deactivated.
* If any of the order's authorizations are pending, the order is pending.
* If all of the order's authorizations are valid, and there is a certificate serial, the order is valid.
* If all of the order's authorizations are valid, and we have begun processing, but there is no certificate serial, the order is processing.
* If all of the order's authorizations are valid, and we haven't begun processing, then the order is pending, awaiting a finalization request.
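A sketch of that precedence walk (fields and status strings are illustrative stand-ins for Boulder's own types):

```go
package sa

import "errors"

// statusForOrder derives an order's status from its authorizations'
// statuses, following the precedence rules above.
func statusForOrder(authzStatuses []string, beganProcessing bool, certSerial string) (string, error) {
	if len(authzStatuses) == 0 {
		return "", errors.New("order has no authorizations")
	}
	counts := make(map[string]int)
	for _, s := range authzStatuses {
		counts[s]++
	}
	switch {
	case counts["invalid"] > 0:
		return "invalid", nil
	case counts["deactivated"] > 0:
		return "deactivated", nil
	case counts["pending"] > 0:
		return "pending", nil
	case counts["valid"] == len(authzStatuses):
		if certSerial != "" {
			return "valid", nil
		}
		if beganProcessing {
			return "processing", nil
		}
		return "pending", nil // awaiting a finalization request
	}
	return "", errors.New("no state known for this order's authorizations")
}
```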
This avoids having to explicitly update the order status when an associated authorization changes status.
The RA's implementation of new-order is updated to only reuse an existing order if the calculated status is pending. This avoids giving back invalid or deactivated orders to clients.
Resolves #3333
Fixes #3368.
Basically just adds a `csr.VerifyCSR` call in `ra.FinalizeOrder` that mirrors what we have in `ra.NewCertificate`; this moves the CN into the SANs as expected, if one is included.
Per
https://community.letsencrypt.org/t/2018-01-11-update-regarding-acme-tls-sni-and-shared-hosting-infrastructure/50188/3,
we are planning to treat prior issuance by an account as reason to whitelist
that account for reissuance via TLS-SNI. By extension, reusing validations that
occurred prior to disclosure of the TLS-SNI issue is reasonably safe, so this
change removes the issuance-time check for whether a challenge has been
disabled. This saves us significant complexity and database load in implementing
TLSSNIRevalidation (https://github.com/letsencrypt/boulder/pull/3361): since
ChallengeTypeEnabled returns false for TLS-SNI, we'd otherwise have to plumb
through data about whether an issuance was based on a revalidation. Instead, we can safely delete
this code.
Note that "EnforceChallengeDisable" is implemented in three places: new-authz,
validation time, and issuance time. We're keeping it in place at new-authz for
now because it's intertwined with the account whitelisting code. We're keeping
it in place at validation time, because there's a small chance that someone
could have created a pending authz for a domain they don't control before the
TLS-SNI issue was announced, and that authz could still be pending, and they
could find out that that domain is hosted on a vulnerable provider, and use the
vulnerability now that they know about it. A tiny chance, but may as well be
careful.
This change adds a feature flag, TLSSNIRevalidation. When it is enabled, Boulder
will create new authorization objects with TLS-SNI challenges if the requesting
account has issued a certificate with the relevant domain name, and was the most
recent account to do so*. This setting overrides the configured list of
challenges in the PolicyAuthority, so even if TLS-SNI is disabled in general, it
will be enabled for revalidation.
Note that this interacts with EnforceChallengeDisable. Because
EnforceChallengeDisable causes additional checks at validation time and at
issuance time, we need to update those two places as well. We'll send a
follow-up PR with that.
*We chose to make this work only for the most recent account to issue, even if
there were overlapping certificates, because it significantly simplifies the
database access patterns and should work for 95+% of cases.
Note that this change will let an account revalidate and reissue for a domain
even if the previous issuance on that account used http-01 or dns-01. This also
simplifies implementation, and fits within the intent of the mitigation plan: If
someone previously issued for a domain using http-01, we have high confidence
that they are actually the owner, and they are not going to "steal" the domain
from themselves using tls-sni-01.
Also note: This change also doesn't work properly with ReusePendingAuthz: true.
Specifically, if you attempted issuance in the last couple days and failed
because there was no tls-sni challenge, you'll still have an http-01 challenge
lying around, and we'll reuse that; then your client will fail due to lack of
tls-sni challenge again.
This change was joint work between @rolandshoemaker and @jsha.
This updates the PA component to allow authorization challenge types that are globally disabled if the account ID owning the authorization is on a configured whitelist for that challenge type.
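A sketch of the whitelist-aware check (the config shape is assumed, not Boulder's exact one):

```go
package policy

// AuthorityImpl is a pared-down sketch of the PA; the whitelist config
// shape here is assumed for illustration.
type AuthorityImpl struct {
	enabledChallenges   map[string]bool
	challengeWhitelists map[string]map[int64]bool
}

// ChallengeTypeEnabled allows a globally disabled challenge type when
// the requesting account ID is on that type's whitelist.
func (pa *AuthorityImpl) ChallengeTypeEnabled(challengeType string, regID int64) bool {
	if pa.enabledChallenges[challengeType] {
		return true
	}
	return pa.challengeWhitelists[challengeType][regID]
}
```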