This updates va.proto to use proto3 syntax, and updates
all clients of the autogenerated code to use the new types.
In particular, it removes indirection from built-in types
(proto3 uses ints, rather than pointers to ints, for example).
Depends on #5003. Fixes #4956
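To illustrate the indirection being removed, here is a minimal sketch (with made-up field names, not the actual va.proto messages) of how the generated Go code changes:

```go
// Hypothetical generated types, for illustration only.
type OldValidationResult struct {
	Latency *int64 // proto2-style: optional scalars are pointers
}

type NewValidationResult struct {
	Latency int64 // proto3-style: plain value, zero means "unset"
}

func latencyOld(r *OldValidationResult) int64 {
	if r.Latency == nil { // callers had to nil-check and dereference
		return 0
	}
	return *r.Latency
}

func latencyNew(r *NewValidationResult) int64 {
	return r.Latency // callers read the field directly
}
```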
ra.PerformValidation's goroutine surfaces errors not by returning them,
but by accumulating them into the `prob` variable and saving them to
the database. This makes it possible for processing to continue even
in error cases when it should (mostly) halt. This change fixes a bug
where we would try to access a member of the result returned from
va.PerformValidation, even if that function call had returned an error.
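The fix boils down to checking the returned error before touching the result. A rough sketch of the pattern (helper names are illustrative, and the vapb client type is assumed from the related proto changes):

```go
// Rough sketch of the pattern; the helpers named here are illustrative,
// not real RA functions.
func performAndRecord(ctx context.Context, va vapb.VAClient, req *vapb.PerformValidationRequest) error {
	res, err := va.PerformValidation(ctx, req)
	if err != nil {
		// Previously a field of res was read even on this path, which
		// could dereference a nil result; bail out before touching res.
		return err
	}
	// Only the success path inspects the result.
	return saveProblemToDB(problemFromResult(res))
}
```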
ACME Challenges are well-known strings ("http-01", "dns-01", and
"tls-alpn-01") identifying which kind of challenge should be used
to verify control of a domain. Because they are well-known and
only certain values are valid, it is better to represent them as
something more akin to an enum than as bare strings. This also
improves our ability to ensure that an AcmeChallenge is not
accidentally used as some other kind of string in a different
context. This change also brings them closer in line with the
existing core.AcmeResource and core.OCSPStatus string enums.
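Roughly, the new type looks like the following sketch (the constant names mirror the pattern of Boulder's other string enums, but may not match the real ones exactly):

```go
// AcmeChallenge is a string enum for the well-known ACME challenge types.
type AcmeChallenge string

const (
	ChallengeTypeHTTP01    AcmeChallenge = "http-01"
	ChallengeTypeDNS01     AcmeChallenge = "dns-01"
	ChallengeTypeTLSALPN01 AcmeChallenge = "tls-alpn-01"
)

// IsValid reports whether the challenge type is one of the well-known values.
func (c AcmeChallenge) IsValid() bool {
	switch c {
	case ChallengeTypeHTTP01, ChallengeTypeDNS01, ChallengeTypeTLSALPN01:
		return true
	default:
		return false
	}
}
```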
Fixes #5009
Updates the type of the ValidationAuthority's PerformValidation
method to be identical to that of the corresponding auto-generated
gRPC method, i.e. directly taking and returning proto message
types, rather than exploded arguments.
This allows all logic to be removed from the VA wrappers, which
will allow them to be fully removed after the migration to proto3.
Also updates all tests and VA clients to adopt the new interface.
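A sketch of the signature change (the old form is approximated from the description, not quoted verbatim):

```go
// Before: exploded arguments and return values (approximated).
type validationAuthorityBefore interface {
	PerformValidation(ctx context.Context, domain string, challenge core.Challenge, authz core.Authorization) ([]core.ValidationRecord, error)
}

// After: identical shape to the auto-generated gRPC client method.
type validationAuthorityAfter interface {
	PerformValidation(ctx context.Context, req *vapb.PerformValidationRequest) (*vapb.ValidationResult, error)
}
```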
Depends on #4983 (do not review first four commits)
Part of #4956
We'd like to issue certs with no CN eventually, but it's not
going to happen any time soon. In the meantime, the existing
code never gets exercised and is rather complex, so this
removes it.
This is the only method on the ca which uses a non-proto
type as its request or response value. Changing this to
use a proto removes the last logic from the wrappers,
allowing them to be removed in a future CL. It also makes
the interface more uniform and easier to reason about.
Issue: #4940
We previously used mixed case names for proto imports
(e.g. both `caPB` and `rapb`), sometimes in the same file.
This change standardizes on the all-lowercase spelling,
which was predominant throughout the codebase.
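For example (import paths shown for illustration):

```go
import (
	capb "github.com/letsencrypt/boulder/ca/proto"
	rapb "github.com/letsencrypt/boulder/ra/proto"
	vapb "github.com/letsencrypt/boulder/va/proto"
)
```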
This updates ca.proto to use proto3 syntax, and updates
all clients of the autogenerated code to use the new types. In
particular, it removes indirection from built-in types (proto3
uses ints, rather than pointers to ints, for example).
It also updates a few places where code checked whether various
object fields were nil to instead check for those fields' new
zero values.
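For instance, a check that used to be a nil test on a pointer field becomes a zero-value test (field names here are illustrative, not necessarily those of the real ca.proto messages):

```go
// Before (proto2-style pointer scalar):
//     if req.RegistrationID != nil { ... }

// After (proto3 plain scalar and bytes field): compare against the zero value.
func requestIsPopulated(req *capb.IssueCertificateRequest) bool {
	return req.RegistrationID != 0 && len(req.Csr) != 0
}
```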
Fixes #4940
This commit consists of three classes of changes:
1) Changing various command main.go files to always behave as they
would have when features.BlockedKeyTable was true. Also changing
one test in the same manner.
2) Removing the BlockedKeyTable flag from configuration in config-next,
because the flag is already live.
3) Moving the BlockedKeyTable flag to the "deprecated" section of
features.go, and regenerating featureflag_strings.go.
A future change will remove the BlockedKeyTable flag (and other
similarly deprecated flags) from features.go entirely.
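A sketch of change (1); `checkBlockedKeys` stands in for whatever work the flag used to gate and is not a real Boulder function:

```go
func setupBlockedKeyChecks() {
	// Before:
	//     if features.Enabled(features.BlockedKeyTable) {
	//         checkBlockedKeys()
	//     }
	// After: the blocked-keys table is always consulted.
	checkBlockedKeys()
}
```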
Fixes #4873
This reverts commit 6454513ded.
We actually need to wait 90 days to ensure the issuerID field of the
certificateStatus table is non-nil for all extant certificates.
As part of that, add support for issuer IDs in orphan-finder's
and RA's calls to GenerateOCSP.
This factors out the idForIssuer logic from ca/ca.go into a new
issuercerts package.
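One plausible shape for the extracted helper, sketched from the description; the hashing and truncation details are assumptions, not the exact idForIssuer implementation:

```go
package issuercerts

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/binary"
)

// IDForIssuer returns a stable integer identifier derived from an issuer
// certificate, suitable for storing in the certificateStatus issuerID column.
func IDForIssuer(cert *x509.Certificate) int64 {
	h := sha256.Sum256(cert.Raw)
	// Truncate the hash so the ID fits in an int64 column.
	return int64(binary.BigEndian.Uint64(h[:8]))
}
```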
orphan-finder refactors:
- Add a list of issuers in config.
- Create an orphanFinder struct to hold relevant fields, including
  the newly added issuers field.
- Factor out a storeDER function to reduce duplication between the
  parse-der and parse-ca-log cases.
- Use test certificates generated specifically for orphan-finder
  tests. This was necessary because the issuers of these test
  certificates have to be configured for the orphan finder.
This required a refactoring: moving validateEmail from the RA to
ValidEmail in the `policy` package. I also moved `ValidDomain` from
a method on PolicyAuthority to a standalone function so that
ValidEmail can call it.
notify-mailer will now log invalid addresses and skip them without
attempting to send mail. Since @example.com addresses are invalid,
I updated the notify-mailer test, which used a lot of such addresses.
Also, when notify-mailer now receives an unrecoverable error sending
mail, it logs the email address and its offset within the list.
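Roughly what the send loop looks like after this change (the logger and the sendToAddress helper are stand-ins for the real notify-mailer code):

```go
func sendAll(addresses []string) error {
	for i, addr := range addresses {
		if err := policy.ValidEmail(addr); err != nil {
			log.Infof("Skipping invalid address %q at offset %d: %s", addr, i, err)
			continue
		}
		if err := sendToAddress(addr); err != nil {
			// Unrecoverable send errors log the address and its offset so a
			// rerun can pick up from the right place.
			log.Errf("Sending to %q (offset %d): %s", addr, i, err)
			return err
		}
	}
	return nil
}
```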
When we originally added this package (4 years ago), x/crypto/ocsp didn't
have its own list of revocation reasons, so we added our own. Now it does
have its own list, so just use that list instead of duplicating code for
no real reason.
Also, we build a list of the revocation reasons we support so that we can
tell users when they try to use an unsupported one. Instead of building
this string every time, we now build it once during package initialization.
Finally, we return the same error message in wfe that we use in wfe2 when a
user requests an unsupported reason.
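A sketch of the package-initialization approach (the particular set of allowed reasons here is illustrative):

```go
import (
	"fmt"
	"sort"
	"strings"

	"golang.org/x/crypto/ocsp"
)

// UserAllowedReasons is the subset of x/crypto/ocsp reason codes that users
// may request; illustrative, not necessarily the real set.
var UserAllowedReasons = map[int]string{
	ocsp.Unspecified:          "unspecified",
	ocsp.KeyCompromise:        "keyCompromise",
	ocsp.Superseded:           "superseded",
	ocsp.CessationOfOperation: "cessationOfOperation",
}

// userAllowedReasonsMessage is built once, at package initialization, instead
// of on every rejected request.
var userAllowedReasonsMessage string

func init() {
	var parts []string
	for code, name := range UserAllowedReasons {
		parts = append(parts, fmt.Sprintf("%s (%d)", name, code))
	}
	sort.Strings(parts)
	userAllowedReasonsMessage = strings.Join(parts, ", ")
}
```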
Adds a daemon which monitors the new blockedKeys table, checks for any unexpired, unrevoked certificates associated with the added SPKI hashes, revokes them, and notifies the user that issued the certificates.
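Very roughly, the daemon's main loop looks like the following; every helper named here is hypothetical and stands in for real SA queries, the revocation RPC, and the mailer:

```go
func (bkr *badKeyRevoker) run(interval time.Duration) {
	for {
		hashes, err := bkr.uncheckedBlockedKeys() // hypothetical: new rows in blockedKeys
		if err != nil {
			bkr.log.Errf("querying blockedKeys: %s", err)
		}
		for _, spkiHash := range hashes {
			certs, err := bkr.unrevokedCertsForKey(spkiHash) // hypothetical
			if err != nil {
				continue
			}
			for _, cert := range certs {
				bkr.revokeForKeyCompromise(cert) // hypothetical
			}
			bkr.notifyAccounts(certs) // hypothetical: email the issuing accounts
		}
		time.Sleep(interval)
	}
}
```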
Fixes #4772.
https://tools.ietf.org/html/rfc8555#section-7.3
Clients MUST NOT
provide a "mailto" URL in the "contact" field that contains "hfields"
[RFC6068] or more than one "addr-spec" in the "to" component. If a
server encounters a "mailto" contact URL that does not meet these
criteria, then it SHOULD reject it as invalid.
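One way to enforce those two requirements with `net/url`; this is a sketch of the check, not necessarily the exact WFE implementation:

```go
import (
	"errors"
	"net/url"
	"strings"
)

func validateMailtoContact(contact string) error {
	u, err := url.Parse(contact)
	if err != nil || u.Scheme != "mailto" {
		return errors.New("contact must be a valid mailto: URL")
	}
	if u.RawQuery != "" {
		// e.g. "mailto:admin@example.com?subject=hi" carries hfields.
		return errors.New("contact mailto: URL must not contain hfields")
	}
	if strings.Contains(u.Opaque, ",") {
		// e.g. "mailto:a@example.com,b@example.com" has two addr-specs.
		return errors.New("contact mailto: URL must contain exactly one addr-spec")
	}
	return nil
}
```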
The RA should set the expiry of valid authorizations based only on the current time and the configured authorizationLifetime. It should not extend the pending authorization's lifetime by the authorizationLifetime.
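In other words, roughly (assuming the RA's clock field and configured lifetime):

```go
// Intended: measured from the current time.
expires := ra.clk.Now().Add(ra.authorizationLifetime)

// Not: measured from the pending authorization's existing expiry.
// expires := authz.Expires.Add(ra.authorizationLifetime)
```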
Resolves #4617
I didn't gate this with a feature flag. If we think this needs an API announcement and gradual rollout (I don't personally think this change deserves that) then I think we should change the RA config's authorizationLifetimeDays value to 37 days instead of adding a feature flag that we'll have to clean up after the flag date. We can change it back to 30 after the flag date.
In a handful of places I've nuked old stats which are not used in any alerts or dashboards, as they either duplicate other stats or don't provide much insight and have never actually been used. If we feel like we need them again in the future it's trivial to add them back.
There aren't many dashboards that rely on old statsd-style metrics, but a few will need to be updated when this change is deployed. There are also a few cases where Prometheus labels have been changed from camel to snake case; dashboards that use these will also need to be updated. As far as I can tell no alerts are impacted by this change.
Fixes #4591.
In the deep dark history of Boulder we ended up jamming contacts into
a VARCHAR db field. We need to make sure that when contacts are
marshaled the resulting bytes will fit into the column or a 500 will
be returned to the user when the SA RPC fails.
One day we should fix this properly and not return a hacky error message
that's hard for users to understand. Unfortunately that will likely
require a migration or a new DB table. In the shorter term this hack
will prevent 500s which is a clear improvement.
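A sketch of the guard; the column width used here is an assumption for illustration, not the real schema value:

```go
import (
	"encoding/json"
	"fmt"
)

const maxContactBytes = 191 // assumed VARCHAR width, for illustration

func checkContactsFit(contacts []string) error {
	b, err := json.Marshal(contacts)
	if err != nil {
		return err
	}
	if len(b) > maxContactBytes {
		// Return a clear user-facing error instead of letting the SA RPC
		// fail and surface as a 500.
		return fmt.Errorf("too many or too long contact(s); marshaled contacts exceed %d bytes", maxContactBytes)
	}
	return nil
}
```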
Previously, we weren't checking the domain portion of an email contact address
very strictly in the RA. This updates the PA to export a function that
can be used to validate the domain the same way we validate domain
portions of DNS type identifiers for issuance.
This also changes the RA to use the `invalidEmail` error type in more
places.
A new Go integration test is added that checks these errors end-to-end
for both account creation and account update.
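The exported check looks roughly like this (error wrapping simplified; the real code uses the `invalidEmail` berror type mentioned above):

```go
import (
	"fmt"
	"net/mail"
	"strings"
)

// ValidEmail returns an error if the address cannot be used as an ACME
// contact, validating the domain portion the same way as DNS identifiers.
func ValidEmail(address string) error {
	email, err := mail.ParseAddress(address)
	if err != nil {
		return fmt.Errorf("%q is not a valid e-mail address", address)
	}
	parts := strings.Split(email.Address, "@")
	domain := strings.ToLower(parts[len(parts)-1])
	if err := ValidDomain(domain); err != nil {
		return fmt.Errorf("contact email %q has invalid domain: %s", address, err)
	}
	return nil
}
```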
We need the RA's `NewOrder` RPC to return a `berrors.Malformed` instance
when there are too many identifiers. A bare error will be turned into
a server internal problem by the WFE2's `web.ProblemDetailsForError`
call while a `berrors.Malformed` will produce the expected malformed
problem.
This commit fixes the error, updates the unit test, and adds an end-to-end
integration test so we don't mess this up again.
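Sketch of the fix inside NewOrder (the limit field and message are illustrative):

```go
if len(names) > ra.maxNames {
	// A bare error here would be turned into a serverInternal problem by
	// web.ProblemDetailsForError; a berrors.Malformed instance maps to the
	// intended malformed problem instead.
	return nil, berrors.MalformedError("Order cannot contain more than %d DNS names", ra.maxNames)
}
```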
We need to apply some fixes for bugs introduced in #4476 before it can be deployed. As such, we need to revert #4495, since there needs to be a full deploy cycle between these two changes.
This reverts commit 3ae1ae1.
😭
This change set makes the authz2 storage format the default format. It removes
most of the functionality related to the previous storage format, except for
the SA fallbacks and old gRPC methods which have been left for a follow-up
change in order to make these changes deployable without introducing
incompatibilities.
Fixes #4454.
This also adds the badCSR error type specified by RFC 8555. It is a natural fit for the errors in VerifyCSR that aren't covered by badPublicKey. The web package function for converting a berror to
a problem is updated for the new badCSR error type.
The callers (RA and CA) are updated to return the berrors from VerifyCSR as is instead of unconditionally wrapping them as a berrors.MalformedError instance. Unit/integration tests are updated accordingly.
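The conversion in the web package gains a case along these lines (a sketch mirroring the pattern of the existing cases, not the literal code):

```go
func problemForVerifyCSRError(err error) *probs.ProblemDetails {
	if berrors.Is(err, berrors.BadCSR) {
		return probs.BadCSR(err.Error())
	}
	// Other berror types keep their existing mappings, e.g. malformed.
	return probs.Malformed(err.Error())
}
```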
Resolves #4418
In #4179 we added a different method of counting the certificatesPerName
rate limit that can provide the correct behavior for exact public suffix
matches without the need for a separate RPC call. This cleans up the
separate code paths in the SA and RA that are no longer necessary.
In the RA's recheckCAA function we loop through a list of *core.Authorizations, dispatching each to a goroutine that checks CAA for the authz and writes an error to a results channel.
Later, we iterate the same *core.Authorization list and read errors from the channel. If we get a non-nil error, then the current iteration's *core.Authorization is used as the identifier for the suberror created with the non-nil error.
This is a flawed approach that relies on the scheduling of recheck goroutines matching the iteration order of the authorizations. When the goroutines write error results to the channel in an order that doesn't match the loop over the authorizations, the RA will construct a suberror with the wrong identifier. This manifested as apparent flakiness in the TestRecheckCAAFail unit test, because it specifically checks the expected identifiers in the returned subproblems.
The fix involves writing both the checked authorization and the error result to the results channel. Later instead of iterating the authorizations we just read the correct number of results from the channel and use the attached authorization from the result when constructing a suberror.
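The shape of the fix, sketched (checkAuthzCAA and the suberror construction are simplified stand-ins for the real RA code):

```go
type authzCAAResult struct {
	authz *core.Authorization
	err   error
}

func recheckCAA(ctx context.Context, authzs []*core.Authorization) []authzCAAResult {
	results := make(chan authzCAAResult, len(authzs))
	for _, authz := range authzs {
		go func(authz *core.Authorization) {
			results <- authzCAAResult{authz: authz, err: checkAuthzCAA(ctx, authz)}
		}(authz)
	}
	// Read exactly len(authzs) results; each error stays attached to the
	// authorization that was actually checked, regardless of completion order.
	var failed []authzCAAResult
	for range authzs {
		res := <-results
		if res.err != nil {
			failed = append(failed, res) // suberror identifiers come from res.authz
		}
	}
	return failed
}
```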
Resolves #4248
Takeaway lessons:
Write unit tests and always verify expected values!
Always investigate flaky unit tests! Sometimes there's a real bug and not just a subpar test :-)
When Boulder's RA rechecks CAA for a set of authorization identifiers it
should use suberrors to make it easy to identify which of a possible 100
identifiers had a CAA issue at order finalization time.
Updates #4193. Resolves #4235
This will allow implementing sub-problems without creating a cyclic
dependency between `core` and `problems`.
The `identifier` package is somewhat small/single-purpose and in the
future we may want to move more "ACME" bits beyond the `identifier`
types into a dedicated package outside of `core`.