We intend to delete the v1 API (i.e. `wfe` and its associated codepaths)
in the near future, and as such are not giving it new features or
capabilities. However, before then we intend to allow the v2 API to
provide issuance from both our RSA and our ECDSA intermediates.
The v1 API will not gain that same capability.
The CA doesn't know which frontend originated any given issuance
request, so we can't simply gate the single- or double-issuer behavior
based on that. Instead, this change introduces the ability for the
WFE (and the RA, which sits between the WFE and the CA) to request
issuance from a specific intermediate. If the specified intermediate is
not available in the CA, issuance will fail. If no intermediate is
specified (as is the case in requests coming from wfe2), it falls back
to selecting the issuer based on the algorithm of the public key to
be signed.
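A rough sketch of that selection logic, with illustrative names and stand-in types (Boulder's real types differ):

```go
package ca

import (
    "crypto"
    "crypto/ecdsa"
    "crypto/rsa"
    "errors"
    "fmt"
)

// issuer is a stand-in for the CA's per-intermediate signing state.
type issuer struct{ name string }

// selectIssuer honors an explicitly requested intermediate if the CA
// has it, and otherwise falls back to choosing by the algorithm of the
// public key to be signed (the wfe2 case, where no issuer is named).
func selectIssuer(requested string, key crypto.PublicKey, byName map[string]*issuer, rsaIssuer, ecdsaIssuer *issuer) (*issuer, error) {
    if requested != "" {
        iss, ok := byName[requested]
        if !ok {
            return nil, fmt.Errorf("no issuer found for %q", requested)
        }
        return iss, nil
    }
    switch key.(type) {
    case *rsa.PublicKey:
        return rsaIssuer, nil
    case *ecdsa.PublicKey:
        return ecdsaIssuer, nil
    default:
        return nil, errors.New("unsupported public key algorithm")
    }
}
```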
Fixes #5216
The database stores order expiry values as type DATETIME, which only
supports values with second-level accuracy. (Contrast with type
DATETIME(6), which allows microsecond accuracy.) But the order object
being written to the database does not have its expiry value rewritten
by the insert process. This leads to Boulder returning different
values for `expires` depending on whether the order was created fresh
(nanosecond accuracy) or retrieved from the db (second accuracy).
There appears to be only one codepath which suffers from this
discrepancy. Although Authorization objects also have an `expires`
field, they are never returned to the client immediately upon creation;
they are first exposed to the user as URLs within an Order object, and
so are always retrieved from the database.
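The mismatch is easy to reproduce with Go's time package, since a DATETIME round-trip behaves like truncation to one second:

```go
package main

import (
    "fmt"
    "time"
)

func main() {
    // The expiry as computed in memory, with nanosecond precision.
    fresh := time.Date(2018, 4, 14, 4, 43, 0, 105818284, time.UTC)
    // What a DATETIME column (no fractional seconds) hands back.
    stored := fresh.Truncate(time.Second)
    fmt.Println(fresh.Format(time.RFC3339Nano))  // 2018-04-14T04:43:00.105818284Z
    fmt.Println(stored.Format(time.RFC3339Nano)) // 2018-04-14T04:43:00Z
}
```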
Fixes #5166
This error class is only used in one instance, and when returned to
the user it is transformed into a `probs.Malformed` anyway. It does
more harm than good to keep this one-off BoulderError around, as
it introduces confusion about what sorts of errors we expose to the
client.
Fixes #5167
The RA is responsible for contacting Akamai to purge cached OCSP
responses when a certificate is revoked and fresh OCSP responses need to
be served ASAP. In order to do so, it needs to construct the same OCSP
URLs that clients would construct, and that Akamai would cache. In order
to do that, it needs access to the issuing certificate to compute a hash
across its Subject Info and Public Key.
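Per RFC 6960, both hashes in an OCSP request are computed from fields of the issuer certificate. A minimal sketch (not Boulder's exact code):

```go
package akamai

import (
    "crypto/sha1"
    "crypto/x509"
    "crypto/x509/pkix"
    "encoding/asn1"
)

// issuerHashes computes the issuerNameHash and issuerKeyHash that
// appear in OCSP requests, and therefore in the cached OCSP URLs.
func issuerHashes(issuer *x509.Certificate) (nameHash, keyHash []byte, err error) {
    // issuerNameHash: hash of the issuer's DER-encoded Subject.
    nh := sha1.Sum(issuer.RawSubject)
    // issuerKeyHash: hash of the public key BIT STRING contents,
    // excluding the algorithm identifier.
    var spki struct {
        Algorithm pkix.AlgorithmIdentifier
        PublicKey asn1.BitString
    }
    if _, err := asn1.Unmarshal(issuer.RawSubjectPublicKeyInfo, &spki); err != nil {
        return nil, nil, err
    }
    kh := sha1.Sum(spki.PublicKey.RightAlign())
    return nh[:], kh[:], nil
}
```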
Currently, the RA holds a single issuer certificate in memory, and uses
that cert to compute all OCSP URLs, on the assumption that all certs
we're being asked to revoke were issued by the same issuer.
In order to support issuance from multiple intermediates at the same
time (e.g. RSA and ECDSA), and to support rollover between different
issuers of the same type (we may need to revoke certs issued by two
different issuers for the 90 days in which their end-entity certs
overlap), this commit changes the configuration to provide a list of
issuer certificates instead.
In order to support efficient lookup of issuer certs, this change also
introduces a new concept, the Chain ID. The Chain ID is a truncated hash
across the raw bytes of either the Issuer Info or the Subject Info of a
given cert. As such, it can be used to confirm issuer/subject
relationships between certificates. In the future, this may be a
replacement for our current IssuerID (a truncated hash over the whole
issuer certificate), but for now it is used to map revoked certs to
their issuers inside the RA.
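A hedged sketch of the Chain ID idea (names and truncation width are assumptions, not Boulder's exact definitions):

```go
package issuance

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/binary"
)

// chainID is a truncated hash over raw DER bytes.
func chainID(der []byte) int64 {
    sum := sha256.Sum256(der)
    return int64(binary.BigEndian.Uint64(sum[:8]))
}

// SubjectChainID identifies a certificate in its role as an issuer.
func SubjectChainID(c *x509.Certificate) int64 { return chainID(c.RawSubject) }

// IssuerChainID identifies a certificate's parent: it equals the
// parent's SubjectChainID, so the RA can map a revoked cert to its
// issuer with a map lookup.
func IssuerChainID(c *x509.Certificate) int64 { return chainID(c.RawIssuer) }
```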
Part of #5120
This change adds two new test assertion helpers, `AssertErrorIs`
and `AssertErrorWraps`. The former is a wrapper around `errors.Is`,
and asserts that the error's wrapping chain contains a specific (i.e.
singleton) error. The latter is a wrapper around `errors.As`, and
asserts that the error's wrapping chain contains any error which is
of the given type; it also has the same unwrapping side effect as
`errors.As`, which can be useful for further assertions about the
contents of the error.
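A sketch of the two helpers as described (exact signatures are assumptions):

```go
package test

import (
    "errors"
    "testing"
)

// AssertErrorIs wraps errors.Is: the error's wrapping chain must
// contain the specific target error.
func AssertErrorIs(t *testing.T, err, target error) {
    t.Helper()
    if !errors.Is(err, target) {
        t.Errorf("expected %q to wrap %q", err, target)
    }
}

// AssertErrorWraps wraps errors.As: the chain must contain an error of
// target's type. Like errors.As, it stores the found error in target,
// so callers can make further assertions about its contents.
func AssertErrorWraps(t *testing.T, err error, target interface{}) {
    t.Helper()
    if !errors.As(err, target) {
        t.Errorf("expected %q to wrap an error of type %T", err, target)
    }
}
```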
It also makes two small changes to our `berrors` package, namely
making `berrors.ErrorType` itself an error rather than just an int,
and giving `berrors.BoulderError` an `Unwrap()` method which
exposes that inner `ErrorType`. This allows us to use the two new
helpers above to make assertions about berrors, rather than
having to hand-roll equality assertions about their types.
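In outline, the berrors changes look something like this (a sketch, not the exact code):

```go
package berrors

import "fmt"

type ErrorType int

// ErrorType is itself an error, so it can be a target for errors.Is.
func (e ErrorType) Error() string { return fmt.Sprintf("berrors.ErrorType %d", int(e)) }

type BoulderError struct {
    Type   ErrorType
    Detail string
}

func (e *BoulderError) Error() string { return e.Detail }

// Unwrap exposes the inner ErrorType, so a test can simply do
// AssertErrorIs(t, err, berrors.Malformed).
func (e *BoulderError) Unwrap() error { return e.Type }
```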
Finally, it takes advantage of the two changes above to greatly
simplify many of the assertions in our tests, removing conditional
checks and replacing them with simple assertions.
The only caller of this function is the RA's `revokeCertificate`
method, which already has the hydrated `x509.Certificate`
version of the cert. There's no need to pass the raw version
and re-parse the DER again; just pass a reference to the
existing cert.
This was already partly done: there is an ID() method in issuance. This
change extends that by:
- Defining a type alias indicating something is an IssuerID.
- Defining issuance.Certificate, which also has an ID() method,
so that components that aren't the CA can load certificates and
use the type system to mark them as issuers (and get their IDs);
see the sketch after this list.
- Converting akamai-purger and ca to use the new types.
- Removing idForIssuer from ca.go.
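A rough sketch of the shape described above (the ID computation shown here is an assumption):

```go
package issuance

import (
    "crypto/sha256"
    "crypto/x509"
    "encoding/binary"
)

// IssuerID marks a value as an issuer's identifier.
type IssuerID int64

// Certificate wraps an issuer certificate so that components other
// than the CA can load certs and mark them, via the type system, as
// issuers.
type Certificate struct {
    *x509.Certificate
}

// ID is a truncated hash over the whole certificate, playing the role
// previously filled by ca.go's idForIssuer.
func (c *Certificate) ID() IssuerID {
    sum := sha256.Sum256(c.Raw)
    return IssuerID(binary.BigEndian.Uint64(sum[:8]))
}
```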
These were used during the transition to authzv2. The SA side of these
RPCs already ignores these booleans. This is just cleaning up the
protobufs and call sites.
As part of #5050, I'm updating some of the code in grpc/pb-marshaling.go
to move from nil checks to zero checks. In the process I'm introducing some
new zero checks, on things like challenge type, status, and token. This is
shaking out some places where our mocks have taken shortcuts by not
creating a "full" object including all fields that are normally present.
This PR updates our mocks and tests to provide more realistic objects in
all the places that broke when introducing those zero checks.
One slightly surprising / interesting thing: Since core types like
Order and Registration are still proto2 and have pointer fields,
there are actually some places in this PR where I had to add
a `*` rather than delete an `&`, because I was taking a pointer
field from one of those core types and passing it as a field in
an SA RPC request.
Fixes #5037.
Updates the Registration Authority to use proto3 for its
RPC methods. This turns out to be a fairly minimal change,
as many of the RA's request and response messages are
defined in core.proto, and are therefore still proto2.
Fixes #4955
Any field which can be zero must be allowed to be nil,
so that a proto2 server receiving requests from a proto3
client is willing to process messages with zero-value fields
encoded as missing.
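A minimal illustration with a hypothetical message type: the server treats a nil pointer field the same as an explicit zero.

```go
package grpc

import "errors"

// Request stands in for a proto2-generated message with pointer fields.
type Request struct {
    Id *int64
}

func registrationID(req *Request) (int64, error) {
    if req == nil {
        return 0, errors.New("nil request")
    }
    if req.Id == nil {
        // A proto3 client encodes a zero ID as "absent": accept it as
        // zero rather than rejecting the message.
        return 0, nil
    }
    return *req.Id, nil
}
```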
Part of #4955
As part of the migration to proto3, any fields in requests that may be
zero should also be allowed to be nil. That's because proto3 will
represent those fields as absent when they have their zero value.
This is based on a manual review of the wrappers for the SA, plus
a pair of integration test runs. For the integration test runs I took these
steps:
1. Copy sa/proto to sa/proto2
2. Change sa/proto to use proto3 and regenerate.
3. In sa/*.go and cmd/boulder-sa/main.go, update the imports to use the
proto2 version.
4. Split grpc/sa-wrappers.go into sa-server-wrappers.go and sa-wrappers.go
(containing the client code)
5. In sa-server-wrappers.go, change the import to use sa/proto2.
6. In sa-server-wrappers.go, make a local copy of the core.StorageAuthority
interface that uses the sa/proto2 types. This was necessary as
a temporary kludge because of how the server wrapper internally
uses the core.StorageAuthority interface.
7. Fix all the pointer-vs-value build errors in every other package.
8. Run integration tests.
I also performed those steps with proto2 and proto3 swapped, to confirm the
behavior when a proto2 client talks to a proto3 SA.
This updates va.proto to use proto3 syntax, and updates
all clients of the autogenerated code to use the new types.
In particular, it removes indirection from built-in types
(proto3 uses ints, rather than pointers to ints, for example).
Depends on #5003. Fixes #4956
ra.PerformValidation's goroutine surfaces errors not by returning them,
but by accumulating them into the `prob` variable and saving them to
the database. This makes it possible for processing to continue even
in error cases when it should (mostly) halt. This change fixes a bug
where we would try to access a member of the result returned from
va.PerformValidation, even if that function call had returned an error.
ACME Challenges are well-known strings ("http-01", "dns-01", and
"tls-alpn-01") identifying which kind of challenge should be used
to verify control of a domain. Because they are well-known and
only certain values are valid, it is better to represent them as
something more akin to an enum than as bare strings. This also
improves our ability to ensure that an AcmeChallenge is not
accidentally used as some other kind of string in a different
context. This change also brings them closer in line with the
existing core.AcmeResource and core.OCSPStatus string enums.
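The string-enum pattern looks roughly like this (the constant names are a sketch):

```go
package core

// AcmeChallenge is a string enum, in the style of core.AcmeResource
// and core.OCSPStatus.
type AcmeChallenge string

const (
    ChallengeTypeHTTP01    AcmeChallenge = "http-01"
    ChallengeTypeDNS01     AcmeChallenge = "dns-01"
    ChallengeTypeTLSALPN01 AcmeChallenge = "tls-alpn-01"
)

// IsValid reports whether c is one of the well-known challenge types.
func (c AcmeChallenge) IsValid() bool {
    switch c {
    case ChallengeTypeHTTP01, ChallengeTypeDNS01, ChallengeTypeTLSALPN01:
        return true
    default:
        return false
    }
}
```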
Fixes #5009
Updates the type of the ValidationAuthority's PerformValidation
method to be identical to that of the corresponding auto-generated
grpc method, i.e. directly taking and returning proto message
types, rather than exploded arguments.
This allows all logic to be removed from the VA wrappers, which
will allow them to be fully removed after the migration to proto3.
Also updates all tests and VA clients to adopt the new interface.
Depends on #4983 (do not review first four commits)
Part of #4956
Our proto files had a variety of indentation styles: 2 spaces,
4 spaces, 8 spaces, and tabs; sometimes mixed within the same
file. The proto3 style guide[1] says to use 2-space indents,
so this change standardizes on that.
[1] https://developers.google.com/protocol-buffers/docs/style
We'd like to issue certs with no CN eventually, but it's not
going to happen any time soon. In the meantime, the existing
code never gets exercised and is rather complex, so this
removes it.
This is the only method on the ca which uses a non-proto
type as its request or response value. Changing this to
use a proto removes the last logic from the wrappers,
allowing them to be removed in a future CL. It also makes
the interface more uniform and easier to reason about.
Issue: #4940
We previously used mixed case names for proto imports
(e.g. both `caPB` and `rapb`), sometimes in the same file.
This change standardizes on the all-lowercase spelling,
which was predominant throughout the codebase.
This updates va.proto to use proto3 syntax, and updates
all clients of the autogenerated code to use the new types. In
particular, it removes indirection from built-in types (proto3
uses ints, rather than pointers to ints, for example).
Fixes #4956
This updates the ca.proto to use proto3 syntax, and updates
all clients of the autogenerated code to use the new types. In
particular, it removes indirection from built-in types (proto3
uses ints, rather than pointers to ints, for example).
It also updates a few instances where tests were being
conducted to see if various object fields were nil to instead
check for those fields' new zero-value.
Fixes #4940
This commit consists of three classes of changes:
1) Changing various command main.go files to always behave as they
would have when features.BlockedKeyTable was true. Also changing
one test in the same manner.
2) Removing the BlockedKeyTable flag from configuration in config-next,
because the flag is already live.
3) Moving the BlockedKeyTable flag to the "deprecated" section of
features.go, and regenerating featureflag_strings.go.
A future change will remove the BlockedKeyTable flag (and other
similarly deprecated flags) from features.go entirely.
Fixes #4873
This reverts commit 6454513ded.
We actually need to wait 90 days to ensure the issuerID field of the
certificateStatus table is non-NULL for all extant certificates.
As part of that, add support for issuer IDs in orphan-finder's
and RA's calls to GenerateOCSP.
This factors out the idForIssuer logic from ca/ca.go into a new
issuercerts package.
orphan-finder refactors:
- Add a list of issuers in config.
- Create an orphanFinder struct to hold relevant fields, including the
newly added issuers field.
- Factor out a storeDER function to reduce duplication between the
parse-der and parse-ca-log cases.
- Use test certificates generated specifically for orphan-finder tests.
This was necessary because the issuers of these test certificates have
to be configured for the orphan finder.
This required a refactoring: Move validateEmail from the RA to ValidEmail
in the `policy` package. I also moved `ValidDomain` from a method on
PolicyAuthority to a standalone function so that ValidEmail can call it.
notify-mailer will now log invalid addresses and skip them without
attempting to send mail. Since @example.com addresses are invalid,
I updated the notify-mailer test, which used a lot of such addresses.
Also, now when notify-mailer receives an unrecoverable error sending
mail, it logs the email address and its offset within the list.
When we originally added this package (4 years ago) x/crypto/ocsp didn't
have its own list of revocation reasons, so we added our own. Now it does
have its own list, so just use that list instead of duplicating code for
no real reason.
Also we build a list of the revocation reasons we support so that we can
tell users when they try to use an unsupported one. Instead of building
this string every time, just build it once during package initialization.
Finally return the same error message in wfe that we use in wfe2 when a
user requests an unsupported reason.
There are some changes to the code generated in the latest version, so
this modifies every .pb.go file.
Also, the way protoc-gen-go decides where to put files has changed, so
each generate.go gets the --go_opt=paths=source_relative flag to
tell protoc to continue placing output next to the input.
Remove staticcheck from build.sh; we get it via golangci-lint now.
Pass --no-document to gem install fpm; this is recommended in the fpm docs.
Adds a daemon which monitors the new blockedKeys table, checks for any unexpired, unrevoked certificates associated with the added SPKI hashes, revokes them, and notifies the user who issued the certificates.
Fixes #4772.
https://tools.ietf.org/html/rfc8555#section-7.3
Clients MUST NOT
provide a "mailto" URL in the "contact" field that contains "hfields"
[RFC6068] or more than one "addr-spec" in the "to" component. If a
server encounters a "mailto" contact URL that does not meet these
criteria, then it SHOULD reject it as invalid.
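A hedged sketch of those checks (illustrative, not Boulder's exact implementation): hfields appear after a "?", and multiple addr-specs are comma-separated.

```go
package ra

import (
    "errors"
    "strings"
)

func validateMailto(contact string) error {
    addr := strings.TrimPrefix(contact, "mailto:")
    if addr == contact {
        return errors.New("contact is not a mailto: URL")
    }
    if strings.Contains(addr, "?") {
        // RFC 6068 hfields (e.g. "?subject=...") are forbidden.
        return errors.New("contact mailto: URL must not contain hfields")
    }
    if strings.Contains(addr, ",") {
        // Only one addr-spec is allowed in the "to" component.
        return errors.New("contact mailto: URL must contain exactly one addr-spec")
    }
    return nil
}
```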
The RA should set the expiry of valid authorizations based only on the current time and the configured authorizationLifetime. It should not extend the pending authorization's lifetime by the authorizationLifetime.
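The rule, as a tiny sketch with illustrative names:

```go
package ra

import "time"

// validAuthzExpiry anchors the expiry of a newly-valid authorization
// to the current time. The buggy behavior effectively did
// pendingExpires.Add(authorizationLifetime) instead.
func validAuthzExpiry(now time.Time, authorizationLifetime time.Duration) time.Time {
    return now.Add(authorizationLifetime)
}
```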
Resolves #4617
I didn't gate this with a feature flag. If we think this needs an API announcement and gradual rollout (I don't personally think this change deserves that) then I think we should change the RA config's authorizationLifetimeDays value to 37 days instead of adding a feature flag that we'll have to clean up after the flag date. We can change it back to 30 after the flag date.
In a handful of places I've nuked old stats which are not used in any alerts or dashboards as they either duplicate other stats or don't provide much insight/have never actually been used. If we feel like we need them again in the future it's trivial to add them back.
There aren't many dashboards that rely on old statsd style metrics, but a few will need to be updated when this change is deployed. There are also a few cases where prometheus labels have been changed from camel to snake case, dashboards that use these will also need to be updated. As far as I can tell no alerts are impacted by this change.
Fixes #4591.
In the deep dark history of Boulder we ended up jamming contacts into
a VARCHAR db field. We need to make sure that when contacts are
marshaled the resulting bytes will fit into the column or a 500 will
be returned to the user when the SA RPC fails.
One day we should fix this properly and not return a hacky error message
that's hard for users to understand. Unfortunately that will likely
require a migration or a new DB table. In the shorter term this hack
will prevent 500s which is a clear improvement.
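A sketch of the guard (the column width here is an assumption; it should match the actual VARCHAR capacity):

```go
package ra

import (
    "encoding/json"
    "fmt"
)

const maxContactBytes = 191 // assumed capacity of the contact column

func validateContactsLength(contacts []string) error {
    b, err := json.Marshal(contacts)
    if err != nil {
        return fmt.Errorf("marshaling contacts: %w", err)
    }
    if len(b) > maxContactBytes {
        return fmt.Errorf("contacts are %d bytes when marshaled; max is %d", len(b), maxContactBytes)
    }
    return nil
}
```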
Previously we weren't checking the domain portion of an email contact address
very strictly in the RA. This updates the PA to export a function that
can be used to validate the domain the same way we validate domain
portions of DNS type identifiers for issuance.
This also changes the RA to use the `invalidEmail` error type in more
places.
A new Go integration test is added that checks these errors end-to-end
for both account creation and account update.
We need the RA's `NewOrder` RPC to return a `berrors.Malformed` instance
when there are too many identifiers. A bare error will be turned into
a server internal problem by the WFE2's `web.ProblemDetailsForError`
call while a `berrors.Malformed` will produce the expected malformed
problem.
This commit fixes the error, updates the unit test, and adds an end-to-end
integration test so we don't mess this up again.
LE is popular and aims to popularise certificate issuance. End users who see
error messages can no longer be assumed to be as DNS-experienced as before. The
user-facing error messages in the policy authority file are terse and unobvious
to the point that they are often unlikely to be understood by those they
are intended to inform, who may be "just trying to get an LE cert for their
domain".
We need to apply some fixes for bugs introduced in #4476 before it can be deployed; as such we need to revert #4495, since there needs to be a full deploy cycle between these two changes.
This reverts commit 3ae1ae1.
😭
This change set makes the authz2 storage format the default format. It removes
most of the functionality related to the previous storage format, except for
the SA fallbacks and old gRPC methods which have been left for a follow-up
change in order to make these changes deployable without introducing
incompatibilities.
Fixes #4454.
This also adds the badCSR error type specified by RFC 8555. It is a natural fit for the errors in VerifyCSR that aren't covered by badPublicKey. The web package function for converting a berror to
a problem is updated for the new badCSR error type.
The callers (RA and CA) are updated to return the berrors from VerifyCSR as is instead of unconditionally wrapping them as a berrors.MalformedError instance. Unit/integration tests are updated accordingly.
Resolves #4418
When the `features.PrecertificateRevocation` feature flag is enabled the WFE2
will allow revoking certificates for a submitted precertificate. The legacy WFE1
behaviour remains unchanged (as before, (pre)certificates issued through the V1
API will be revocable with the V2 API).
Previously the WFE2 vetted the certificate from the revocation request by
looking up a final certificate by the serial number in the requested
certificate, and then doing a byte for byte comparison between the stored and
requested certificate.
Rather than adjust this logic to handle looking up and comparing stored
precertificates against requested precertificates (requiring new RPCs and an
additional round-trip) we choose to instead check the signature on the requested
certificate or precertificate and consider it valid for revocation if the
signature validates with one of the WFE2's known issuers. We trust the integrity
of our own signatures.
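The vetting step reduces to a signature check against the configured issuers; a minimal sketch:

```go
package wfe2

import "crypto/x509"

// signedByKnownIssuer reports whether the submitted (pre)certificate's
// signature validates under one of the WFE2's known issuers, making it
// eligible for revocation.
func signedByKnownIssuer(cert *x509.Certificate, issuers []*x509.Certificate) bool {
    for _, issuer := range issuers {
        if err := cert.CheckSignatureFrom(issuer); err == nil {
            return true
        }
    }
    return false
}
```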
An integration test that performs a revocation of a precertificate (in this case
one that never had a final certificate issued due to SCT embedded errors) with
all of the available authentication mechanisms is included.
Resolves https://github.com/letsencrypt/boulder/issues/4414
This change adds two tables and two methods in the SA, to store precertificates
and serial numbers.
In the CA, when the feature flag is turned on, we generate a serial number, store it,
sign a precertificate and OCSP, store them, and then return the precertificate. Storing
the serial as an additional step before signing the certificate adds an extra layer of
insurance against duplicate serials, and also serves as a check on database availability.
Since an error storing the serial prevents going on to sign the precertificate, this decreases
the chance of signing something while the database is down.
Right now, neither table has read operations available in the SA.
To make this work, I needed to remove the check for duplicate certificateStatus entry
when inserting a final certificate and its OCSP response. I also needed to remove
an error that can occur when expiration-mailer processes a precertificate that lacks
a final certificate. That error would otherwise have prevented further processing of
expiration warnings.
Fixes #4412
This change builds on #4417, please review that first for ease of review.
In #4179 we added a different method of counting the certificatesPerName
rate limit that can provide the correct behavior for exact public suffix
matches without the need for a separate RPC call. This cleans up the
separate code paths in the SA and RA that are no longer necessary.
In the RA's recheckCAA function we loop through a list of *core.Authorizations, dispatching each to a goroutine that checks CAA for the authz and writes an error to a results channel.
Later, we iterate the same *core.Authorization list and read errors from the channel. If we get a non-nil error, then the current iteration's *core.Authorization is used as the identifier for the suberror created with the non-nil error.
This is a flawed approach and relies on the scheduling of recheck goroutines matching the iteration of the authorizations. When the goroutines write error results to the channel in an order that doesn't match the loop over the authorizations the RA will construct a suberror with the wrong identifier. This manifests as making the TestRecheckCAAFail unit test appear flaky, because it specifically checks the expected identifiers in the returned subproblems.
The fix involves writing both the checked authorization and the error result to the results channel. Later instead of iterating the authorizations we just read the correct number of results from the channel and use the attached authorization from the result when constructing a suberror.
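A self-contained sketch of the fix, using stand-in types rather than Boulder's:

```go
package main

import (
    "errors"
    "fmt"
)

type authz struct{ identifier string }

// checkAuthzCAA is a stand-in for the real per-authorization CAA recheck.
func checkAuthzCAA(a *authz) error {
    if a.identifier == "bad.example.com" {
        return errors.New("CAA forbids issuance")
    }
    return nil
}

// result pairs the checked authorization with its error, so reading
// results in any order still attributes errors to the right identifier.
type result struct {
    authz *authz
    err   error
}

func main() {
    authzs := []*authz{{"ok.example.com"}, {"bad.example.com"}}
    ch := make(chan result, len(authzs))
    for _, a := range authzs {
        go func(a *authz) {
            ch <- result{a, checkAuthzCAA(a)}
        }(a)
    }
    for range authzs {
        res := <-ch
        if res.err != nil {
            // Build the suberror from res.authz, never from the loop index.
            fmt.Printf("%s: %v\n", res.authz.identifier, res.err)
        }
    }
}
```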
Resolves #4248
Takeaway lessons:
Write unit tests and always verify expected values!
Always investigate flaky unit tests! Sometimes there's a real bug and not just a subpar test :-)
When Boulder's RA rechecks CAA for a set of authorization identifiers it
should use suberrors to make it easy to identify which of a possible 100
identifiers had a CAA issue at order finalization time.
Updates #4193. Resolves #4235
This will allow implementing sub-problems without creating a cyclic
dependency between `core` and `problems`.
The `identifier` package is somewhat small/single-purpose and in the
future we may want to move more "ACME" bits beyond the `identifier`
types into a dedicated package outside of `core`.
The RA now updates a `names_per_cert` Prometheus histogram, sliced by
a "type" label. NewOrder requests will observe the number of identifiers
in the new order with the type label == "requested". Successful order
finalization will observe the number of names in the issued certificate
with the type label == "issued".
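A sketch of the metric (bucket boundaries are illustrative):

```go
package main

import "github.com/prometheus/client_golang/prometheus"

var namesPerCert = prometheus.NewHistogramVec(
    prometheus.HistogramOpts{
        Name:    "names_per_cert",
        Help:    "Number of DNS names in requested and issued certificates.",
        Buckets: []float64{1, 2, 5, 10, 20, 50, 100},
    },
    []string{"type"},
)

func main() {
    prometheus.MustRegister(namesPerCert)
    // In NewOrder, observe the number of requested identifiers:
    namesPerCert.WithLabelValues("requested").Observe(3)
    // On successful finalization, observe the issued cert's names:
    namesPerCert.WithLabelValues("issued").Observe(3)
}
```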
This follows up on some refactoring we had done previously but not
completed. This removes various binary-specific config structs from the
common cmd package, and moves them into their appropriate packages. In
the case of CT configs, they had to be moved into their own package to
avoid a dependency loop between RA and ctpolicy.
In general the RA test has an anti-pattern of too much global state. This piece
in particular got a little outdated over time, and there were some incorrect
comments littered around the code about what SupportedChallenges did or did not
contain. This change removes most cases where it was overridden, and moves the
main definition into initAuthorities.
Previously we'd have to look up authorizations by name, then re-fetch
them by ID for return to the WFE, because some SA calls did not include
challenge objects in the authorizations they return. However, now all SA
calls do include challenge objects, so we can delete this code and save
some lookups.
We introduced a test case called "TestUpdateAuthorizationNewRPC" when we
added the "PerformValidation" RPC at the SA. It mostly duplicated the
TestUpdateAuthorization test case that already existed. They evolved
alongside each other over the years, but we should have cleaned up the "NewRPC"
variant and removed the old one it was copied from a long time ago.
This PR implements new SA methods for handling authz2 style authorizations and updates existing SA methods to count and retrieve them where applicable when the `NewAuthorizationSchema` feature is enabled.
Fixes #4093. Fixes #4082
Updates #4078
Updates #4077
Checking the "certificates per name" rate limit is moderately expensive, particularly
for domains that have lots of certificates on their subdomains. By checking the renewal
exemption first, we can save some database queries in a lot of cases.
Part of #4152
Previously only "pending" orders were returned by `sa.GetOrderForNames` for the RA to reuse in `ra.NewOrder`. When we added the "ready" status to match late RFC 8555 developments we forgot to update `GetOrderForNames` to return "ready" orders. Prior to the "ready" status existing a fully validated order would have been "pending" and reused. This branch updates the reuse logic to restore reuse of validated orders.
Resolves https://github.com/letsencrypt/boulder/issues/4117
Early ACME drafts supported a notion of "combinations" of challenges
that had to be completed together. This was removed from subsequent
drafts. Boulder has only ever supported "combinations" that exactly map
to the list of challenges, 1 for 1.
This removes all the plumbing for combinations, and adds a list of
combinations to the authz JSON right before marshaling it in WFE1.
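The 1:1 view the WFE1 shim reconstructs is simply [[0], [1], ..., [n-1]]:

```go
package wfe

// legacyCombinations rebuilds the "combinations" view for WFE1 just
// before marshaling an authz: one combination per challenge.
func legacyCombinations(numChallenges int) [][]int {
    combos := make([][]int, numChallenges)
    for i := range combos {
        combos[i] = []int{i}
    }
    return combos
}
```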
Precursor to #4116. Since some of our dependencies impose a minimum
version on these two packages higher than what we have in Godeps, we'll
have to bump them anyhow. Bumping them independently of the modules
update should keep things a little simpler.
In order to get protobuf tests to pass, I had to update protoc-gen-go in
boulder-tools. Now we download a prebuilt binary instead of using the
Ubuntu package, which is stuck on 3.0.0. This also meant I needed to
re-generate our pb.go files, since the new version generates somewhat
different output.
This happens to change the tag for pbutil, but it's not a substantive change - they just added a tagged version where there was none.
```
$ go test github.com/miekg/dns/...
ok github.com/miekg/dns 4.675s
ok github.com/miekg/dns/dnsutil 0.003s
ok github.com/golang/protobuf/descriptor (cached)
ok github.com/golang/protobuf/jsonpb (cached)
? github.com/golang/protobuf/jsonpb/jsonpb_test_proto [no test files]
ok github.com/golang/protobuf/proto (cached)
? github.com/golang/protobuf/proto/proto3_proto [no test files]
? github.com/golang/protobuf/proto/test_proto [no test files]
ok github.com/golang/protobuf/protoc-gen-go (cached)
? github.com/golang/protobuf/protoc-gen-go/descriptor [no test files]
ok github.com/golang/protobuf/protoc-gen-go/generator (cached)
ok github.com/golang/protobuf/protoc-gen-go/generator/internal/remap (cached)
? github.com/golang/protobuf/protoc-gen-go/grpc [no test files]
? github.com/golang/protobuf/protoc-gen-go/plugin [no test files]
ok github.com/golang/protobuf/ptypes (cached)
? github.com/golang/protobuf/ptypes/any [no test files]
? github.com/golang/protobuf/ptypes/duration [no test files]
? github.com/golang/protobuf/ptypes/empty [no test files]
? github.com/golang/protobuf/ptypes/struct [no test files]
? github.com/golang/protobuf/ptypes/timestamp [no test files]
? github.com/golang/protobuf/ptypes/wrappers [no test files]
```
Previously we relied on each instance of `features.Set` to have a
corresponding `defer features.Reset()`. If we forget that, we can wind
up with unexpected behavior where features set in one test case leak
into another test case. This led to the bug in
https://github.com/letsencrypt/boulder/issues/4118 going undetected.
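One way to make the reset automatic in tests (a sketch; assumes the features.Set/Reset API and a Go version with t.Cleanup):

```go
package features_test

import (
    "testing"

    "github.com/letsencrypt/boulder/features"
)

// setForTest registers the reset as a cleanup, so flags can't leak
// into later test cases even if a `defer features.Reset()` is missed.
func setForTest(t *testing.T, flags map[features.FeatureFlag]bool) {
    t.Helper()
    if err := features.Set(flags); err != nil {
        t.Fatal(err)
    }
    t.Cleanup(features.Reset)
}
```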
Fixes #4120
* Remove the challenge whitelist
* Reduce the signature for ChallengesFor and ChallengeTypeEnabled
* Some unit tests in the VA were changed from testing TLS-SNI to testing the same behavior
in TLS-ALPN, when that behavior wasn't already tested. For instance, timeouts during connect
are now tested.
Fixes #4109
This changeset implements the logic required for the WFE to retrieve v2 authorizations and their associated challenges while still maintaining the logic to retrieve old authorizations/challenges. Challenge IDs for v2 authorizations are obfuscated using a pretty simple scheme in order to prevent hard-coding of indexes. A `V2` field is added to the `core.Authorization` object and populated using the existing field of the same name from the protobuf for convenience.
v2 authorizations and challenges use a `v2` prefix in all their URLs in order to easily differentiate between v1 and v2 URLs (e.g. `/acme/authz/v2/asd` and `/acme/challenge/v2/asd/123`); once v1 authorizations cease to exist this prefix can be safely removed. As v2 authorizations use int IDs, this change switches from string IDs to int IDs, which mainly affects tests.
Integration tests are put off for #4079 as they really need #4077 and #4078 to be properly effective.
Fixes #4041.
If an order for a given set of names will fail finalization because of certificate rate limits (certs per domain, certs per fqdn set) there isn't any point in allowing an order for those names to be created. We can stop a lot of requests earlier by enforcing the cert rate limits at new order time as well as finalization time. A new RA `EarlyOrderRateLimit` feature flag controls whether this is done or not.
Resolves #3975
The existing (but undeployed) `AllowRenewalFirstRL` feature flag is used to gate whether the SA `CountCertificatesByNames` and `CountCertificatesByExactNames` RPCs will exclude renewals from the returned counts using the `issuedNames` table's `renewal` field.
The previous implementation of `AllowRenewalFirstRL` is deleted. It wasn't performant in specific corner cases.
There's no new integration test in this branch because the existing `test_renewal_exemption` integration test from the first `AllowRenewalFirstRL` implementation provides the required coverage for `config-next` runs.
Resolves #4060. Resolves #4006
Implements a feature that enables immediate revocation instead of marking a certificate revoked and waiting for the OCSP-Updater to generate the OCSP response. This means that as soon as the request returns from the WFE the revoked OCSP response should be available to the user. This feature requires that the RA be configured to use the standalone Akamai purger service.
Fixes #4031.
* in boulder-ra we connected to the publisher and created a publisher gRPC client twice for no apparent reason
* in the SA we ignored errors from `getChallenges` in `GetAuthorizations` which could result in a nil challenge being returned in an authorization
Before fixing `ra.NewOrder` the `TestNewOrderAuthzReuseDisabled` test
from this branch fails as expected based on #4026:
```
=== RUN TestNewOrderAuthzReuseDisabled
--- FAIL: TestNewOrderAuthzReuseDisabled (0.24s)
ra_test.go:2477: "reused-valid-authz" == "reused-valid-authz"
FAIL
FAIL github.com/letsencrypt/boulder/ra 0.270s
```
Afterwards it passes:
```
=== RUN TestNewOrderAuthzReuseDisabled
--- PASS: TestNewOrderAuthzReuseDisabled (0.26s)
PASS
ok github.com/letsencrypt/boulder/ra 1.291s
```
Adds a feature which gates creation of authorizations following the style required for the new schema (and which can be used for gating the rest of our new-schema code later down the road).
There was an internal discussion about an issue this creates regarding a predictable ordering of challenges within an authorization: sequential challenge IDs will always be static for each challenge type. It was suggested we could add some kind of obfuscation to the challenge ID when presented to the user to prevent this. That hasn't been done in this PR, as it would only touch the WFE and would be better suited as its own changeset.
Fixes #3981.
Staging and prod both deployed the PerformValidationRPC feature flag. All running WFE/WFE2 instances are using the more accurately named PerformValidation RPC and we can strip out the old UpdateAuthorization bits. The feature flag for PerformValidationRPC remains until we clean up the staging/prod configs.
Resolves #3947 and completes the last of #3930
The existing RA `UpdateAuthorization` RPC needs replacing for
two reasons:
1. The name isn't accurate - `PerformValidation` better captures
the purpose of the RPC.
2. The `core.Challenge` argument is superfluous since Key
Authorizations are not sent in the initiation POST from the client
anymore. The corresponding unmarshal and verification is now
removed. Notably this means broken clients that were POSTing
the wrong thing and failing pre-validation will now likely fail
post-validation.
To remove `UpdateAuthorization` the new `PerformValidation`
RPC is added alongside the old one. WFE and WFE2 are
updated to use the new RPC when the perform validation
feature flag is enabled. We can remove
`UpdateAuthorization` and its associated wrappers once all
WFE instances have been updated.
Resolves https://github.com/letsencrypt/boulder/issues/3930
Removes superfluous usage of `UpdatePendingAuthorization` in the RA to update the key authorization and test whether the authorization is pending, and instead uses the result of the initial `GetAuthorization` call in the WFE.
Fixes #3923.
At one point, the POST to a challenge object was in theory updating a
resource, and might have a "Type" field. We considered adding a check
that the "Type" field matched the stored "Type" field. In ACMEv2, the POST
body is always simply `{}` so this check is irrelevant. And we're unlikely
to change the behavior in ACMEv1, so let's get rid of the counter and TODO.
Resolves https://github.com/letsencrypt/boulder/issues/3872
**Note to reviewers**: There's an outstanding bug that I've tracked down to the `--load` stage of the integration tests that results in one of the remote VA instances in the `test/config-next` configuration under Go 1.11.1 to fail to cleanly shut down. I'm working on finding the root cause but in the meantime I've disabled `--load` during CI so we can unblock moving forward with getting Go 1.11.1 in dev/CI. Tracking this in https://github.com/letsencrypt/boulder/issues/3889
Removes the checks for a handful of deployed feature flags in preparation for removing the flags entirely. Also moves all of the currently deprecated flags to a separate section of the flags list so they can be more easily removed once purged from production configs.
Fixes#3880.
Required a little bit of rework of the RA issuance flow (to add parsing of the precert to determine the expiration date, and moving final cert parsing before final cert submission) and RA tests, but I think it shouldn't create any issues...
Fixes #3197.
In db01b0b we removed email validation from the RA. This was the only
use of the `bdns` package by the RA and so we can go one step further
and delete the remaining setup, configuration and `bdns` fields.
Performing DNS lookups to check the A/AAAA/MX records of a provided contact e-mail address adds variability to the RA's NewRegistration/UpdateRegistration functions and requires that the RA be able to reach out to the EFN. Since this is simply a convenience to prevent some classes of registration errors we can remove it to improve performance and to tighten up our security posture slightly.
Resolves https://github.com/letsencrypt/boulder/issues/3849
If an order is expired the status is invalid and we don't need to get any of the order's authorizations. It's important to exit early in this case because expired authorizations may be purged from the DB. Fetching the authz's for an expired order may return fewer authz objects than expected, triggering a 500 error response.
Resolves https://github.com/letsencrypt/boulder/issues/3839
Things removed:
* features.EmbedSCTs (and all the associated RA/CA/ocsp-updater code etc)
* ca.enablePrecertificateFlow (and all the associated RA/CA code)
* sa.AddSCTReceipt and sa.GetSCTReceipt RPCs
* publisher.SubmitToCT and publisher.SubmitToSingleCT RPCs
Fixes #3755.
This adds support for the account-uri CAA parameter as specified by
section 3 of https://tools.ietf.org/html/draft-ietf-acme-caa-04, allowing
issuance to be restricted to one or more ACME accounts as specified by CAA
records.
We see a fair number of ACME accounts/registrations with contact
addresses for the RFC2606 Section 3 "Reserved Example Second Level
Domain Names" (`example.com`, `example.net`, `example.org`). These are
not real contact addresses and are likely the result of the user
copy-pasting example configuration. These users will miss out on
expiration emails and other subscriber communications :-(
This commit updates the RA's `validateEmail` function to reject any
contact addresses for reserved example domain names. The corresponding
unit test is updated accordingly.
Resolves https://github.com/letsencrypt/boulder/issues/3719
When performing CAA checking respect the validation-methods parameter (if
present) and restrict the allowed authorization methods to those specified.
This allows a domain to restrict authorization methods that can be used with
Let's Encrypt.
This is largely based on PR #3003 (by @lukaslihotzki), which was landed and
then later reverted due to issue #3143. The bug that resulted in the previous
code being reverted has been addressed (likely inadvertently) by 76973d0f.
This implementation also includes integration tests for CAA validation-methods.
Fixes issue #3143.
When rechecking CAA, the existing code maps all failures to a CAAError.
This means that any other non-CAA failure (for example, an internal server
error) gets hidden.
Avoid this by reworking recheckCAA to return errors; if we find a
non-CAAError, we return it directly. Revise tests to cover both
situations.
Updates issue #3143.
So I took a quick stab at one possible solution for the authz expiration variance discussed over at community.letsencrypt.org/t/inconsistent-datetime-format-in-some-responses/61452
Golang's nanosecond precision results in newly created pending authz having one expires datetime, but subsequent requests have a different expires datetime due to database storage throwing away fractional second information.
"expires": "2018-04-14T04:43:00.105818284Z",
...
"expires": "2018-04-14T04:43:00Z",
I am not a Go expert so there might be some more widely accepted approach to accomplishing the same thing, please let me know if you would prefer a different solution.
Using both a sync.WaitGroup and channel is somewhat unnecessary - instead
synchronize directly on the channel. Additionally, strings in Go are
immutable - as such using string concatenation in a loop results in
reallocation. Use a string slice and strings.Join, which not only avoids
this, but also cuts down on the lines of code needed.
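Both patterns in one small sketch:

```go
package main

import (
    "fmt"
    "strings"
)

// collect synchronizes directly on the channel (one receive per
// goroutine, no WaitGroup) and builds the output with strings.Join
// instead of reallocating via += in a loop.
func collect(inputs []string, work func(string) string) string {
    ch := make(chan string, len(inputs))
    for _, in := range inputs {
        go func(in string) { ch <- work(in) }(in)
    }
    parts := make([]string, 0, len(inputs))
    for range inputs {
        parts = append(parts, <-ch)
    }
    return strings.Join(parts, ", ")
}

func main() {
    fmt.Println(collect([]string{"a", "b"}, strings.ToUpper))
}
```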
Remove various unnecessary uses of fmt.Sprintf - in particular:
- Avoid calls like t.Error(fmt.Sprintf(...)), where t.Errorf can be used directly.
- Use strconv when converting an integer to a string, rather than using
fmt.Sprintf("%d", ...). This is simpler and can also detect type errors at
compile time.
- Instead of using x.Write([]byte(fmt.Sprintf(...))), use fmt.Fprintf(x, ...).
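Before-and-after, in brief:

```go
package example

import (
    "fmt"
    "io"
    "strconv"
    "testing"
)

func cleanups(t *testing.T, w io.Writer, n int) {
    // t.Error(fmt.Sprintf("got %d", n)) becomes:
    t.Errorf("got %d", n)
    // fmt.Sprintf("%d", n) becomes (with compile-time type checking):
    _ = strconv.Itoa(n)
    // w.Write([]byte(fmt.Sprintf("count=%d\n", n))) becomes:
    fmt.Fprintf(w, "count=%d\n", n)
}
```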
A very large number of the logger calls are of the form log.Function(fmt.Sprintf(...)).
Rather than sprinkling fmt.Sprintf at every logger call site, provide formatting versions
of the logger functions and call these directly with the format and arguments.
While here remove some unnecessary trailing newlines and calls to String/Error.
Now that #3638 has been deployed to all of the RA instances there are no
more RPC clients using the SA's `CountCertificatesRange` RPC.
This commit deletes the implementation, the RPC definition & wrappers,
and all the test code/mocks.
* Randomize order of CT logs when submitting precerts so we maximize the chances we actually exercise all of the logs in a group and not just the first in the list.
* Add metrics for winning logs
Submits final certificates to any configured CT logs. This doesn't introduce a feature flag as it is config-gated; any log we want to submit final certificates to needs to have its log description updated to include the `"submitFinalCerts": true` field.
Fixes #3605.
In #3614 we adjusted the `SA.NewOrder` function to conditionally call `ssa.statusForOrder` on the new order when `features.OrderReadyStatus` was enabled. Unfortunately this call to `ssa.statusForOrder` happened *before* the `req.BeganProcessing` field was initialized with a pointer to a `false` bool. The `ssa.statusForOrder` function (correctly) assumes that `req.BeganProcessing == nil` is illegal and doesn't correspond to a known status. This results in `NewOrder` requests returning a 500 error
of the form:
> Internal error - Error creating new order - Order XXX is in an invalid state. No state known for this order's authorizations
Our integration tests missed this because we didn't have a test case that issued for a set of names with one account, and then issued again for the same set of names with the same account.
This PR fixes the original bug by moving the `BeganProcessing` initialization before the call to `statusForOrder`. This PR also adds an integration test to catch this sort of bug again in the future.
Prior to the SA fix this test failed with the 500 server internal error observed by the Certbot team. With the SA fix in place the test passes again.
Finally, this PR disables the `OrderReadyStatus` feature flag in `test/config-next/sa.json`. Certbot's ACME implementation breaks when this flag is enabled (See https://github.com/certbot/certbot/issues/5856). Since Certbot runs integration tests against Boulder with config-next we should be courteous and leave this flag disabled until we are closer to being able to turn it on for staging/prod.
The `TotalCertificates` rate limit serves to ensure we don't
accidentally exceed our OCSP signing capacity by issuing too many
certificates within a fixed period. In practice this rate limit has been
fragile and the associated queries have been linked to performance
problems.
Since we now have better means of monitoring our OCSP signing capacity
this commit removes the rate limit and associated code.
* SA: Add Order "Ready" status, feature flag.
This commit adds the new "Ready" status to `core/objects.go` and updates
`sa.statusForOrder` to use it conditionally for orders with all valid
authorizations that haven't been finalized yet. This state is used
conditionally based on the `features.OrderReadyStatus` feature flag
since it will likely break some existing clients that expect status
"Processing" for this state. The SA unit test for `statusForOrder` is
updated with a "ready" status test case.
* RA: Enforce order ready status conditionally.
This commit updates the RA to conditionally expect orders that are being
finalized to be in the "ready" status instead of "pending". This is
conditionally enforced based on the `OrderReadyStatus` feature flag.
Along the way the SA was changed to calculate the order status for the
order returned in `sa.NewOrder` dynamically now that it could be
something other than "pending".
* WFE2: Conditionally enforce order ready status for finalization.
Similar to the RA the WFE2 should conditionally enforce that an order's
status is either "ready" or "pending" based on the "OrderReadyStatus"
feature flag.
* Integration: Fix `test_order_finalize_early`.
This commit updates the V2 `test_order_finalize_early` test for the
"ready" status. A nice side-effect of the ready state change is that we
no longer invalidate an order when it is finalized too soon because we
can reject the finalization in the WFE. Subsequently the
`test_order_finalize_early` testcase is also smaller.
* Integration: Test classic behaviour w/o feature flag.
In the previous commit I fixed the integration test for the
`config/test-next` run that has the `OrderReadyStatus` feature flag set
but broke it for the `config/test` run without the feature flag.
This commit updates the `test_order_finalize_early` test to work
correctly based on the feature flag status in both cases.
Previously we updated the RA's issueCertificateInner function to prefix errors returned from the CA with meaningful information about which CA RPC caused the failure. Unfortunately, by using fmt.Errorf to do this we're discarding the underlying error type. This can cause unexpected server internal errors downstream if, for example, the CA rejects a CSR with a malformed error (see #3632).
This PR updates the issueCertificateInner error message prefixing to maintain the error type if the underlying error is a berrors.BoulderError. A RA unit test with several mock CAs is added to test the prefixing occurs as expected without loss of error type.
This PR also adds an integration test that ensures we reject a CSR with >100 names with a malformed error. This is not strictly related to this PR but since I wrote it while debugging the root issue I thought I'd include it. To allow this test to pass the pendingAuthorizationsPerAccount in test/rate-limit-policies.yml and associated tests had to be adjusted.
Resolves #3632
This commit updates the `boulder-ra` and `boulder-ca` commands to refuse
to start if their configured `MaxNames` is 0 (the default value). This
should always be set to a positive number.
This commit also updates `csr/csr.go` to always apply the max names
check since it will never be 0 after the change above.
Also refactor `FailOnError` to pull out a separate `Fail` function.
Related to https://github.com/letsencrypt/boulder/issues/3632
This PR updates the RA such that certificateRequestEvent objects created during issuance and written to the audit log as JSON also include a new Authorizations field. This field is a map of the form map[string]certificateRequestAuthz and can be used to map from an identifier name appearing in the associated certificate to a certificateRequestAuthz object. Each of the certificateRequestAuthz objects holds an authorization ID and the type of challenge that made the authorization valid.
Example Audit log output (with the JSON pulled out and pretty-printed):
```
{
  "ID": "0BjPk94KlxExRRIQ3kslRVSJ68KMaTh416chRvq0wyA",
  "Requester": 666,
  "SerialNumber": "ff699d91cab5bc84f1bc97fc71e4e27abc1a",
  "VerifiedFields": ["subject.commonName", "subjectAltName"],
  "CommonName": "rand.44634cbf.xyz",
  "Names": ["rand.44634cbf.xyz"],
  "NotBefore": "2018-03-28T19:50:07Z",
  "NotAfter": "2018-06-26T19:50:07Z",
  "RequestTime": "2018-03-28T20:50:07.234038859Z",
  "ResponseTime": "2018-03-28T20:50:07.278848954Z",
  "Authorizations": {
    "rand.44634cbf.xyz": {
      "ID": "jGt37Rnvfr0nhZn-wLkxrQxc2HBfV4t6TSraRiGnNBM",
      "ChallengeType": "http-01"
    }
  }
}
```
Resolves #3253
Also, instead of repeating the same bucket definitions everywhere, just use a single top-level var in the metrics package in order to discourage copy/pasting.
Fixes #3607.
This commit updates CTPolicy & the RA to return a distinct error when
the RA is unable to fetch the required SCTs for a certificate when
processing an issuance. This error type is plumbed up to the WFE/WFE2
where the `web/probs.go` code converts it into a server internal error
with a suitable user-facing message.
Prior to this commit an order's expiry was set based on
ra.orderLifetime while pending, and valid authorization expiry was set
based on ra.pendingAuthorizationLifetime and
ra.authorizationLifetime. Since orders reused existing valid/pending
authorizations this can lead to a case where an order has an expiry
beyond the associated authorization expiries. In this case when an
authorization expired the order becomes inactionable and the extra order
lifetime is not useful.
This commit addresses this problem in two ways:
1. The SA GetAuthorizations function used to find authzs to reuse for
ra.NewOrder is adjusted to only return authorizations at min 24hr
away from expiry.
2. Order expiry is now calculated by the RA in newOrder
as the min of the order's own expiry or the soonest
authorization expiry. This properly reflects the order's true
lifetime based on the authorization lifetime.
The RA/SA unit tests are updated accordingly.
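The new calculation, as a sketch with illustrative names:

```go
package ra

import "time"

// orderExpiry returns the min of the order's own configured lifetime
// and the soonest expiry among its (possibly reused) authorizations.
func orderExpiry(now time.Time, orderLifetime time.Duration, authzExpiries []time.Time) time.Time {
    expires := now.Add(orderLifetime)
    for _, ae := range authzExpiries {
        if ae.Before(expires) {
            expires = ae
        }
    }
    return expires
}
```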
Resolves #3498
Adds SCT embedding to the certificate issuance flow. When an issuance is requested, a precertificate (the requested certificate, but poisoned with the critical CT extension) is issued and submitted to the required CT logs. Once the SCTs for the precertificate have been collected, a new certificate is issued with the poison extension replaced by an SCT list extension containing the retrieved SCTs.
Fixes #2244, fixes #3492, and fixes #3429.
Requesting a certificate with "*.example.com" and "www.example.com" as
separate SANs doesn't make sense, because "www.example.com" is covered
by the wildcard.
#3524
Right now we check safe browsing at new-authz time, which introduces a possible
external dependency when calling new-authz. This is usually fine, since most safe
browsing checks can be satisfied locally, but when requests have to go external,
it can create variance in new-authz timing.
Fixes #3491.
The logEvent setup we had in issueCertificate depended on all error-handling
code setting logEvent.Error in addition to returning the error.
Since this is not the common pattern in Go, there were a few places
where logEvent.Error wasn't set, making it hard to find the root cause
of errors in the RA logs (though the errors would get propagated to the
WFE for logging). This change wraps issueCertificate such that all
errors returned get logged.
This commit updates the RA to make the notion of submitting
a KeyAuthorization value as part of the ra.UpdateAuthorization call
optional. If set, the value is checked against the expected key authorization,
and an error is returned if the provided value isn't correct. If it isn't set,
the RA populates the field with the computed authorization for the VA to
enforce against the value it sees in challenges. This retains the legacy
behaviour of the V1 API. The V2 API will never unmarshal a provided
key authorization.
The ACMEv2/WFEv2 prepChallengeForDisplay function is updated to strip
the ProvidedKeyAuthorization field before sending the challenge object
back to a client. ACMEv1/WFEv1 continue to return the KeyAuthorization
in challenges to avoid breaking clients that are relying on this legacy
behaviour.
For deployability ease this commit retains the name of the
core.Challenge.ProvidedKeyAuthorization field even though it should
be called core.Challenge.ComputedKeyAuthorization now that it isn't
set based on the client's provided key authz. This will be easier as
a follow-up change.
Resolves #3514
This commit adds short 15s runs of the load generator against the V1 and
V2 APIs during the three integration test runs (v1 config, v1
config-next, and v2). 15s was selected because 30s caused too much
output and the build log to be truncated.
Presently the latency output is *not* being checked for errors. This was
too flaky in practice.
A fix for a race condition in the load-generator code itself related to
HTTP status code tracking is included in this commit.
The pending authz rate limit also needed to be adjusted to keep the
load-generator from failing requests after hitting 429s.
This race was uncovered by running the load generator as part of our CI.
Also, update ra_test.go. It was previously testing that the returned authz
and the stored authz should be identical, which is not actually a property
of UpdateAuthorization; in general, they will not be identical.