Commit Graph

384 Commits

Author SHA1 Message Date
Roland Bracewell Shoemaker 2adf5a54ab Move CN to SAN in v2 API (#3394)
Fixes #3368.

Basically just adds a `csr.VerifyCSR` call in `ra.FinalizeOrder` that mirrors what we have in `ra.NewCertificate`; this moves the CN to the SANs as expected if one is included.
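
For illustration, a minimal sketch of the kind of CN-to-SAN normalization described here (the package, helper name, and signature are assumptions, not Boulder's actual `csr` API):

```
// Sketch only: promote a CSR's Subject CN into the SAN list when it is
// missing. Names here are illustrative, not Boulder's csr package API.
package csrutil

import (
	"crypto/x509"
	"strings"
)

func namesWithCN(req *x509.CertificateRequest) []string {
	names := req.DNSNames
	cn := strings.ToLower(req.Subject.CommonName)
	if cn == "" {
		return names
	}
	for _, n := range names {
		if strings.ToLower(n) == cn {
			return names
		}
	}
	return append(names, cn)
}
```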
2018-01-29 13:21:12 -08:00
Roland Bracewell Shoemaker cdab3a2ef8 Improve wildcard error (#3398) 2018-01-29 10:49:31 -08:00
Roland Bracewell Shoemaker 3e33d56d03 Remove test/config-next gating from unittests (#3395)
The migrations are all applied and the tests can run unconditionally.
2018-01-25 09:08:07 -05:00
Roland Bracewell Shoemaker 7e4d44e172 Don't mask sa.GetValidAuthorization error in ra.NewAuthorization (#3381) 2018-01-18 15:53:14 -05:00
Jacob Hoffman-Andrews cfc7823cdd
Remove EnforceChallengeDisable check at issuance. (#3362)
Per
https://community.letsencrypt.org/t/2018-01-11-update-regarding-acme-tls-sni-and-shared-hosting-infrastructure/50188/3,
we are planning to treat prior issuance by an account as reason to whitelist
that account for reissuance via TLS-SNI. By extension, reusing validations that
occurred prior to disclosure of the TLS-SNI issue is reasonably safe, so this
change removes the issuance-time check for whether a challenge has been
disabled. This saves us significant complexity and database load in implementing
TLSSNIRevalidation (https://github.com/letsencrypt/boulder/pull/3361): since
ChallengeTypeEnabled returns false, we would otherwise have to plumb through data about
whether an issuance was based on a revalidation. Instead, we can safely delete
this code.

Note that "EnforceChallengeDisable" is implemented in three places: new-authz,
validation time, and issuance time. We're keeping it in place at new-authz for
now because it's intertwined with the account whitelisting code. We're keeping
it in place at validation time, because there's a small chance that someone
could have created a pending authz for a domain they don't control before the
TLS-SNI issue was announced, and that authz could still be pending, and they
could find out that that domain is hosted on a vulnerable provider, and use the
vulnerability now that they know about it. A tiny chance, but may as well be
careful.
2018-01-12 11:35:23 -08:00
Jacob Hoffman-Andrews 8153b919be
Implement TLSSNIRevalidation (#3361)
This change adds a feature flag, TLSSNIRevalidation. When it is enabled, Boulder
will create new authorization objects with TLS-SNI challenges if the requesting
account has issued a certificate with the relevant domain name, and was the most
recent account to do so*. This setting overrides the configured list of
challenges in the PolicyAuthority, so even if TLS-SNI is disabled in general, it
will be enabled for revalidation.
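
A rough sketch of the decision this flag gates (the SA lookup interface and method name are assumptions for illustration, not a real Boulder RPC):

```
// Sketch of the TLSSNIRevalidation decision: offer TLS-SNI challenges only
// when the requesting account was the most recent to issue for the name.
// The issuanceChecker interface and its method name are assumptions.
package ra

import "context"

type issuanceChecker interface {
	MostRecentIssuerForName(ctx context.Context, domain string) (int64, error)
}

func tlsSNIAllowedForRevalidation(ctx context.Context, sa issuanceChecker, regID int64, domain string) (bool, error) {
	prev, err := sa.MostRecentIssuerForName(ctx, domain)
	if err != nil {
		return false, err
	}
	return prev == regID, nil
}
```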

Note that this interacts with EnforceChallengeDisable. Because
EnforceChallengeDisable causes additional checks at validation time and at
issuance time, we need to update those two places as well. We'll send a
follow-up PR with that.

*We chose to make this work only for the most recent account to issue, even if
there were overlapping certificates, because it significantly simplifies the
database access patterns and should work for 95+% of cases.

Note that this change will let an account revalidate and reissue for a domain
even if the previous issuance on that account used http-01 or dns-01. This also
simplifies implementation, and fits within the intent of the mitigation plan: If
someone previously issued for a domain using http-01, we have high confidence
that they are actually the owner, and they are not going to "steal" the domain
from themselves using tls-sni-01.

Also note: this change doesn't work properly with ReusePendingAuthz: true.
Specifically, if you attempted issuance in the last couple days and failed
because there was no tls-sni challenge, you'll still have an http-01 challenge
lying around, and we'll reuse that; then your client will fail due to lack of
tls-sni challenge again.

This change was joint work between @rolandshoemaker and @jsha.
2018-01-12 11:00:06 -08:00
Maciej Dębski 44984cd84a Implement regID whitelist for allowed challenge types. (#3352)
This updates the PA component to allow authorization challenge types that are globally disabled if the account ID owning the authorization is on a configured whitelist for that challenge type.
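
A minimal sketch of the check this adds to the PA (the whitelist layout is an assumption, not the actual config format):

```
// Sketch: a challenge type is usable if it is globally enabled, or if the
// account ID is on the per-type whitelist. Map layout is illustrative only.
package pa

type challengeWhitelist map[string]map[int64]bool // challenge type -> allowed regIDs

func challengeTypeAllowed(enabled map[string]bool, wl challengeWhitelist, challengeType string, regID int64) bool {
	if enabled[challengeType] {
		return true
	}
	return wl[challengeType][regID]
}
```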
2018-01-10 13:44:53 -05:00
Roland Shoemaker 400ffede3d More fixes 2018-01-09 20:48:16 -08:00
Roland Shoemaker 1a3a76438c Fix tests and GetOrderAuthorizations 2018-01-09 20:38:52 -08:00
Roland Shoemaker dcd2b438f4 Fix previous impl, add valid authz reuse fix and existing authz validation fix 2018-01-09 19:53:48 -08:00
Roland Shoemaker 5ca646c5dd Disallow the use of valid authorizations that used currently disabled challenges for issuance 2018-01-09 18:52:29 -08:00
Daniel McCarney 7bb16ff21e ACMEv2: Add pending order reuse (#3290)
This commit adds pending order reuse. Subsequent to this commit multiple
add-order requests from the same account ID for the same set of order
names will result in only one order being created. Orders are only
reused while they are not expired. Finalized orders will not be reused
for subsequent new-order requests allowing for duplicate order issuance.

Note that this is a second level of reuse, building on the pending
authorization reuse that's done between separate orders already.

To efficiently find an appropriate order ID given a set of names,
a registration ID, and the current time a new orderFqdnSets table is
added with appropriate indexes and foreign keys.
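
A sketch of the reuse lookup this table enables; the table and column names follow the description above and are assumptions, not the actual migration:

```
// Sketch: find an unexpired order for this registration and exact FQDN set,
// keyed by a hash of the set. Table/column names are assumptions.
package sa

import (
	"database/sql"
	"time"
)

func findReusableOrder(db *sql.DB, setHash []byte, regID int64, now time.Time) (int64, error) {
	var orderID int64
	err := db.QueryRow(
		`SELECT orderID FROM orderFqdnSets
		 WHERE setHash = ? AND registrationID = ? AND expires > ?
		 LIMIT 1`,
		setHash, regID, now,
	).Scan(&orderID)
	return orderID, err
}
```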

Resolves #3258
2018-01-02 13:27:16 -08:00
Jacob Hoffman-Andrews e60251d8ca Add documentation link to rate limit errors. (#3286)
Resolves #2951
2017-12-19 15:46:18 -08:00
Daniel McCarney de5fbbdb67
Implement CAA issueWild enforcement for wildcard names (#3266)
This commit implements RFC 6844's description of the "CAA issuewild
property" for CAA records.

We check CAA in two places: at the time of validation, and at the time
of issuance when an authorization is more than 8hours old. Both
locations have been updated to properly enforce issuewild when checking
CAA for a domain corresponding to a wildcard name in a certificate
order.
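
A simplified sketch of the issuewild selection (the record type is a stand-in for the parsed CAA RRs):

```
// Sketch of RFC 6844 issuewild handling: wildcard names use "issuewild"
// records when any are present, otherwise "issue" records apply;
// non-wildcard names only consider "issue".
package caa

import "strings"

type caaRecord struct {
	Tag   string
	Value string
}

func relevantRecords(name string, set []caaRecord) []caaRecord {
	wildcard := strings.HasPrefix(name, "*.")
	var issue, issueWild []caaRecord
	for _, r := range set {
		switch r.Tag {
		case "issue":
			issue = append(issue, r)
		case "issuewild":
			issueWild = append(issueWild, r)
		}
	}
	if wildcard && len(issueWild) > 0 {
		return issueWild
	}
	return issue
}
```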

Resolves https://github.com/letsencrypt/boulder/issues/3211
2017-12-13 12:09:33 -05:00
Daniel McCarney 0684d5fc73
Add pending orders rate limit to new-order. (#3257)
This commit adds a new rate limit to restrict the number of outstanding
pending orders per account. If the threshold for this rate limit is
crossed subsequent new-order requests will return a 429 response.

Note: Since the rate limit object itself defines an `Enabled()`
test based on whether or not it has been configured, there is **not**
a feature flag for this change.

Resolves https://github.com/letsencrypt/boulder/issues/3246
2017-12-04 16:36:48 -05:00
Daniel McCarney 1c99f91733 Policy based issuance for wildcard identifiers (Round two) (#3252)
This PR implements issuance for wildcard names in the V2 order flow. By policy, pending authorizations for wildcard names only receive a DNS-01 challenge for the base domain. We do not re-use authorizations for the base domain that do not come from a previous wildcard issuance (e.g. a normal authorization for example.com turned valid by way of a DNS-01 challenge will not be reused for a *.example.com order).

The wildcard prefix is stripped off of the authorization identifier value in two places:

* When presenting the authorization to the user - ACME forbids having a wildcard character in an authorization identifier.
* When performing validation - We validate the base domain name without the *. prefix.
This PR is largely a rewrite/extension of #3231. Instead of using a pseudo-challenge-type (DNS-01-Wildcard) to indicate that an authorization & identifier correspond to the base name of a wildcard order name, we allow the identifier to take the wildcard order name with the *. prefix.
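
The prefix handling itself is small; a sketch of the stripping applied in both places (helper and package names are illustrative):

```
// Sketch: authorizations for a wildcard order name are presented and
// validated against the base domain, with the "*." prefix stripped.
package wfe

import "strings"

func baseDomain(identifier string) string {
	return strings.TrimPrefix(identifier, "*.")
}
```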
2017-12-04 12:18:10 -08:00
Daniel McCarney 171da33513
Don't 500 on missing pendingAuthz (#3248)
As described in #3201, a concurrent challenge POST would result in 500 errors if the pending authz row was deleted by a promotion to the authz table underneath another request.

This PR adjusts the SA & RA so that if a pending authz is promoted to a final authz between updates a not found error will be returned instead of a server internal error.

Resolves https://github.com/letsencrypt/boulder/issues/3201
2017-11-22 15:48:42 -05:00
Daniel McCarney 2f263f8ed5 ACME v2 Finalize order support (#3169)
This PR implements order finalization for the ACME v2 API.

In broad strokes this means:

* Removing the CSR from order objects & the new-order flow
* Adding identifiers to the order object & new-order
* Providing a finalization URL as part of orders returned by new-order
* Adding support to the WFE's Order endpoint to receive finalization POST requests with a CSR
* Updating the RA to accept finalization requests and to ensure orders are fully validated before issuance can proceed
* Updating the SA to allow finding order authorizations & updating orders.
* Updating the CA to accept an Order ID to log when issuing a certificate corresponding to an order object

Resolves #3123
2017-11-01 12:39:44 -07:00
JP Phillips e83480f86b Return accurate error description for invalid authz limit (#3156)
Fixes #3144
2017-10-11 22:21:51 -07:00
Daniel McCarney 1794c56eb8 Revert "Add CAA parameter to restrict challenge type (#3003)" (#3145)
This reverts commit 23e2c4a836.
2017-10-04 12:00:44 -07:00
Jacob Hoffman-Andrews dbfb48226d Add parallelism to SA CountCertificatesByNames. (#3133)
Since we can make up to 100 SQL queries from this method (based on the 100-SAN
limit), sometimes it is too slow and we get a timeout for large certificates. By
running some of those queries in parallel, we can speed things up and stop
getting timeouts.
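
A sketch of the fan-out, with countOne standing in for the single-name SQL query and an illustrative concurrency cap:

```
// Sketch: run the per-name count queries on a bounded number of goroutines
// instead of sequentially. countOne stands in for the single-name query.
package sa

import (
	"context"
	"sync"
)

func countCertificatesByNames(ctx context.Context, names []string,
	countOne func(context.Context, string) (int, error)) (map[string]int, error) {

	const parallelism = 10
	var (
		mu       sync.Mutex
		wg       sync.WaitGroup
		sem      = make(chan struct{}, parallelism)
		counts   = make(map[string]int, len(names))
		firstErr error
	)
	for _, name := range names {
		wg.Add(1)
		sem <- struct{}{}
		go func(name string) {
			defer wg.Done()
			defer func() { <-sem }()
			n, err := countOne(ctx, name)
			mu.Lock()
			defer mu.Unlock()
			if err != nil {
				if firstErr == nil {
					firstErr = err
				}
				return
			}
			counts[name] = n
		}(name)
	}
	wg.Wait()
	return counts, firstErr
}
```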
2017-10-02 15:45:08 -04:00
lukaslihotzki 23e2c4a836 Add CAA parameter to restrict challenge type (#3003)
This commit adds CAA `issue` parameter parsing and the `challenge` parameter to permit a single challenge type only. By setting `challenge=dns-01`, the nameserver keeps control over every issued certificate.
2017-10-02 11:59:47 -07:00
Roland Bracewell Shoemaker b7bca87134 Batch fetching of existing authorizations and creation of pending authorizations (#3058)
For the new-order endpoint only. This does some refactoring of the order of operations in `ra.NewAuthorization` as well, in order to reduce the duplication of code relating to creating pending authorizations; existing tests still seem to work as intended... A close eye should be given to this since we don't have integration tests yet that test it end to end. This also changes the inner type of `grpc.StorageAuthorityServerWrapper` to `core.StorageAuthority` so that we can avoid a circular import that is created by needing to import `grpc.AuthzToPB` and `grpc.PBToAuthz` in `sa/sa.go`.

This is a big change but should considerably improve the performance of the new-order flow.

Fixes #2955.
2017-09-25 09:10:59 -07:00
Daniel McCarney f0a9e9aa0e Fix duplicate cert limit off-by-one. (#3117)
The RA's `checkCertificatesPerFQDNSetLimit` function was using `>` where
it should have been using `>=` when evaluating an FQDN set count against
the rate limit threshold. This resulted in an off-by-one error where we
allowed one more duplicate certificate than intended.
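
In short, the corrected comparison looks like this (simplified sketch):

```
// Sketch: with a limit of N duplicate certificates per window, issuance must
// be refused once the existing count reaches N.
package ra

func exceedsDuplicateLimit(existingCount, limit int) bool {
	return existingCount >= limit // was `>`, which allowed one extra certificate
}
```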

This commit fixes the off-by-one error and adds a short unit test. The
unit test failed the
`TestCheckExactCertificateLimit/FQDN_set_issuances_equal_to_limit`
subtest before the fix was applied and passes afterwards.
2017-09-22 14:45:48 -07:00
Kleber Correia 72b1c69761 Remove RecheckCAA feature flag (#3103)
Updates #2692.
2017-09-18 11:59:58 -07:00
Jacob Hoffman-Andrews 9ab2ff4e03 Add CAA-specific error. (#3051)
Previously, CAA problems were lumped in under "ConnectionProblem" or
"Unauthorized". This should make things clearer and easier to differentiate.

Fixes #3043
2017-09-14 14:11:41 -07:00
Jacob Hoffman-Andrews 3b6a5ff63c Fix a bad error return in RA (#3061)
RA.NewRegistration checked for an error return from
SA.NewRegistration, but failed to properly propagate
the error. It was just setting the err variable and falling
through to the successful return case. This fixes that
issue and adds a unittest.
2017-09-08 13:15:09 -07:00
Daniel McCarney d18e1dbcff Add WrongAuthorizationState error code for UpdateAuthorization (#3053)
This commit adds a new boulder error type WrongAuthorizationState.
This error type is returned by the SA when UpdateAuthorization is
provided an authz that isn't pending. The RA and WFE are updated
accordingly such that this error percolates back through the API to the
user as a Malformed problem with a sensible description. Previously this
behaviour resulted in a ServerInternal error.

Resolves #3032
2017-09-07 11:22:02 -07:00
Kleber Correia 02864c11bf Remove AllowAccountDeactivation flag (#2927)
Part of #2712
2017-09-06 11:11:40 -07:00
Kleber Correia 710c814720 Remove AllowKeyRollover flag (#3037) 2017-09-06 08:43:15 -04:00
Jacob Hoffman-Andrews 45f42f6583 Switch recheckCAA error to Unauthorized. (#3040)
ConnectionFailure is only used during validation, and so isn't handled by WFE's
problemDetailsFromBoulderError. This led to returning ServerInternal instead of
the intended error code, and hiding the error detail. Unauthorized is probably
a better error type for now anyhow, but long-term we should switch to a specific
CAA error type.

This PR will allow clients to see the detailed list of problem domains when
new-cert returns an error due to CAA rechecking.
2017-09-05 10:53:47 -07:00
Jacob Hoffman-Andrews b0c7bc1bee Recheck CAA for authorizations older than 8 hours (#3014)
Fixes #2889.

VA now implements two gRPC services: VA and CAA. These both run on the same port, but this allows implementation of the IsCAAValid RPC to skip using the gRPC wrappers, and makes it easier to potentially separate the service into its own package in the future.

RA.NewCertificate now checks the expiration times of authorizations, and will call out to VA to recheck CAA for those authorizations that were not validated recently enough.
2017-08-28 16:40:57 -07:00
Roland Bracewell Shoemaker 90ba766af9 Add NewOrder RPCs + methods to SA and RA (#2907)
Fixes #2875, #2900 and #2901.
2017-08-11 14:24:25 -04:00
Jacob Hoffman-Andrews 8bc1db742c Improve recycling of pending authzs (#2896)
The existing ReusePendingAuthz implementation had some bugs:

It would recycle deactivated authorizations, which then couldn't be fulfilled. (#2840)
Since it was implemented in the SA, it wouldn't get called until after the RA checked the Pending Authorizations rate limit, which means it wouldn't fulfill its intended purpose of making accounts less likely to get stuck in a Pending Authorizations limited state. (#2831)
This factors out the reuse functionality, which used to be inside an "if" statement in the SA. Now the SA has an explicit GetPendingAuthorization RPC, which gets called from the RA before calling NewPendingAuthorization. This happens to obsolete #2807, by putting the recycling logic for both valid and pending authorizations in the RA.
2017-07-26 14:00:30 -07:00
Daniel McCarney 2a84bc2495 Replace go-jose v1 with go-jose v2. (#2899)
This commit replaces the Boulder dependency on
gopkg.in/square/go-jose.v1 with gopkg.in/square/go-jose.v2. This is
necessary both to stay in front of bitrot and because the ACME v2 work
will require a feature from go-jose.v2 for JWS validation.

The largest part of this diff is cosmetic changes:

* Changing import paths
* jose.JsonWebKey -> jose.JSONWebKey
* jose.JsonWebSignature -> jose.JSONWebSignature
* jose.JoseHeader -> jose.Header
Some more significant changes were caused by updates in the API
for creating new jose.Signer instances. Previously we constructed
these with jose.NewSigner(algorithm, key). Now these are created with
jose.NewSigner(jose.SigningKey{},jose.SignerOptions{}). At present all
signers specify EmbedJWK: true but this will likely change with
follow-up ACME V2 work.
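
A minimal, standalone example of the v2 signer construction (not Boulder code; just the upstream go-jose.v2 API described above):

```
// Minimal example of constructing a go-jose v2 signer with EmbedJWK set,
// as described above, and signing a small payload.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	jose "gopkg.in/square/go-jose.v2"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	signer, err := jose.NewSigner(
		jose.SigningKey{Algorithm: jose.RS256, Key: key},
		&jose.SignerOptions{EmbedJWK: true},
	)
	if err != nil {
		panic(err)
	}
	jws, err := signer.Sign([]byte(`{"resource":"new-reg"}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(jws.FullSerialize())
}
```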

Another change was the removal of the jose.LoadPrivateKey function
that the wfe tests relied on. The jose v2 API removed these functions,
moving them to a cmd's main package where we can't easily import them.
This function was reimplemented in the WFE's test code & updated to fail
fast rather than return errors.

Per CONTRIBUTING.md I have verified the go-jose.v2 tests at the imported
commit pass:

ok      gopkg.in/square/go-jose.v2      14.771s
ok      gopkg.in/square/go-jose.v2/cipher       0.025s
?       gopkg.in/square/go-jose.v2/jose-util    [no test files]
ok      gopkg.in/square/go-jose.v2/json 1.230s
ok      gopkg.in/square/go-jose.v2/jwt  0.073s

Resolves #2880
2017-07-26 10:55:14 -07:00
Jacob Hoffman-Andrews ef81f13f7b Remove "DONE" logging in unittest. (#2895)
This is not necessary because the framework prints a done message for us.
2017-07-25 13:37:58 -04:00
Roland Bracewell Shoemaker 05d869b005 Rename DNSResolver -> DNSClient (#2878)
Fixes #639.

This resolves something that has bugged me for two+ years: our DNSResolverImpl is not a DNS resolver, it is a DNS client. This change just makes that obvious.
2017-07-18 08:37:45 -04:00
Jacob Hoffman-Andrews d47d3c5066 Recycle pending authorizations (#2797)
If the feature flag "ReusePendingAuthz" is enabled, a request to create a new authorization object from an account that already has a pending authorization object for the same identifier will return the already-existing authorization object. This should make it less common for people to get stuck in the "too many pending authorizations" state, and reduce DB storage growth.

Fixes #2768
2017-06-19 13:35:36 -04:00
Daniel McCarney fbd87b1757 Splits CountRegistrationsByIP to exact-match and by /48. (#2782)
Prior to this PR the SA's `CountRegistrationsByIP` treated IPv6
differently than IPv4 by counting registrations within a /48 for IPv6 as
opposed to exact matches for IPv4. This PR updates
`CountRegistrationsByIP` to treat IPv4 and IPv6 the
same, always matching exactly. The existing RegistrationsPerIP rate
limit policy will be applied against this exact matching count.

A new `CountRegistrationsByIPRange` function is added to the SA that
performs the historic matching process, e.g. for IPv4 it counts exactly
the same as `CountRegistrationsByIP`, but for IPv6 it counts within
a /48.
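
A sketch of the address handling described above (exact matching for IPv4, the containing /48 for IPv6):

```
// Sketch: the "range" used for IPv6 registration counting is the containing
// /48; IPv4 addresses are matched exactly (/32).
package sa

import "net"

func registrationIPRange(ip net.IP) *net.IPNet {
	if ip4 := ip.To4(); ip4 != nil {
		return &net.IPNet{IP: ip4, Mask: net.CIDRMask(32, 32)}
	}
	mask := net.CIDRMask(48, 128)
	return &net.IPNet{IP: ip.Mask(mask), Mask: mask}
}
```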

A new `RegistrationsPerIPRange` rate limit policy is added to allow
configuring the threshold/window for the fuzzy /48 matching registration
limit. Stats for the "Exceeded" and "Pass" events for this rate limit are
separated into a separate `RegistrationsByIPRange` stats scope under
the `RateLimit` scope to allow us to track it separately from the exact
registrations per IP rate limit.

Resolves https://github.com/letsencrypt/boulder/issues/2738
2017-05-30 15:12:20 -07:00
Daniel McCarney 7393db6d59 Fixes RPC bug for CountCertificatesExact feature flag. (#2759)
With the CountCertificatesExact feature flag enabled if the RA's
checkCertificatesPerNameLimit was called with names only containing
domains exactly matching a public suffix entry then the legacy
ra.enforceNameCounts function will be called with an empty tldNames
argument. This in turn will cause the RA->SA RPC to fail with an
"incomplete gRPC request message error".

This commit fixes this bug by only calling ra.enforceNameCounts when
len(tldNames) > 0.

Resolves #2758
2017-05-12 15:21:00 -04:00
Daniel McCarney b9369a4814 Uses `UniqueLowerNames` for domain/suffix rl funcs. (#2725)
Both `ra.domainsForRateLimiting` and `ra.suffixesForRateLimiting` were
doing their own "unique"ing with a `map[string]struct{}` when they could
have used `core.UniqueLowerNames`. This commit updates them both to do
so and adjusts the unit tests to expect the new sorted order return.
2017-05-04 12:37:05 +01:00
Daniel McCarney 1ed34a4a5d Fixes cert count rate limit for exact PSL matches. (#2703)
Prior to this PR if a domain was an exact match to a public suffix
list entry the certificates per name rate limit was applied based on the
count of certificates issued for that exact name and all of its
subdomains.

This PR introduces an exception such that exact public suffix
matches correctly have the certificate per name rate limit applied based
on only exact name matches.

In order to accomplish this a new RPC is added to the SA
`CountCertificatesByExactNames`. This operates similar to the existing
`CountCertificatesByNames` but does *not* include subdomains in the
count, only exact matches to the names provided. The usage of this new
RPC is feature flag gated behind the "CountCertificatesExact" feature flag.

The RA unit tests are updated to test the new code paths both with and
without the feature flag enabled.

Resolves #2681
2017-05-02 13:43:35 -07:00
David Calavera cc5ee3906b Refactor IsSane and IsSane* to return useful errors. (#2685)
This changes the return values from boolean to error.

It makes `checkConsistency` an internal function and removes the
optional argument in favor of making checks explicit where they are
used.

It also renames those functions to CheckConsistency* to not
give the impression of still returning boolean values.

Signed-off-by: David Calavera <david.calavera@gmail.com>
2017-04-19 12:08:47 -04:00
Roland Bracewell Shoemaker fd561ef842 Block issuance on first OCSP response generation (#2633)
Generate the first OCSP response in ca.IssueCertificate instead of ocsp-updater.newCertificateTick
if features.GenerateOCSPEarly is enabled. Adds a new field to the sa.AddCertificate RPC for
the OCSP response and only adds it to the certificate status + sets ocspLastUpdated if it is a
non-empty slice. ocsp-updater.newCertificateTick stays the same so we can catch certificates
that were successfully signed + stored but an OCSP response couldn't be generated (for whatever
reason).

Fixes #2477.
2017-04-04 11:28:09 -07:00
Roland Bracewell Shoemaker e2b2511898 Overhaul internal error usage (#2583)
This patch removes all usages of `core.XXXError` and almost all usages of `probs` outside of the WFE and VA and replaces them with a unified internal error type. Since the VA uses `probs.ProblemDetails` quite extensively in challenges, and currently stores them in the DB, I've saved that for a follow-up change (it'll also require a migration). Since `ProblemDetails` should only ever be exposed to end-users, all of its related logic should be moved into the `WFE`, but since it still needs to be exposed to the VA and SA I've left it in place for now.

The new internal `errors` package offers the same convenience functions as `probs` does as well as a new simpler type testing method. A few small changes have also been made to error messages, mainly adding the library and function name to internal server errors for easier debugging (i.e. where a number of functions return the exact same errors and there is no other way to distinguish which method threw the error).

Also adds proper encoding of internal errors transferred over gRPC (the current encoding scheme is kept for `core` and `probs` errors since it'll be ideally be removed after we deploy this and follow-up changes) using `grpc/metadata` instead of the gRPC status codes.

Fixes #2507. Updates #2254 and #2505.
2017-03-22 23:27:31 -07:00
Jacob Hoffman-Andrews 6c93b41f20 Add a limit on failed authorizations (#2513)
Fixes #976.

This implements a new rate limit, InvalidAuthorizationsPerAccount. If a given account fails authorization for a given hostname too many times within the window, subsequent new-authz attempts for that account and hostname will fail early with a rateLimited error. This mitigates the misconfigured clients that constantly retry authorization even though they always fail (e.g., because the hostname no longer resolves).

For the new rate limit, I added a new SA RPC, CountInvalidAuthorizations. I chose to implement this only in gRPC, not in AMQP-RPC, so checking the rate limit is gated on gRPC. See #2406 for some description of the how and why. I also chose to directly use the gRPC interfaces rather than wrapping them in core.StorageAuthority, as a step towards what we will want to do once we've moved fully to gRPC.

Because authorizations don't have a created time, we need to look at the expires time instead. Invalid authorizations retain the expiration they were given when they were created as pending authorizations, so we use now + pendingAuthorizationLifetime as one side of the window for rate limiting, and look backwards from there. Note that this means you could maliciously bypass this rate limit by stacking up pending authorizations over time, then failing them all at once.
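
A sketch of the window computation just described:

```
// Sketch: because invalid authzs keep the expiry assigned when they were
// pending, the counting window is anchored at now + pendingAuthorizationLifetime
// and looks backwards by the rate limit's window.
package ra

import "time"

func invalidAuthzWindow(now time.Time, pendingAuthorizationLifetime, limitWindow time.Duration) (earliest, latest time.Time) {
	latest = now.Add(pendingAuthorizationLifetime)
	earliest = latest.Add(-limitWindow)
	return earliest, latest
}
```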

Similarly, since this limit is by (account, hostname) rather than just (hostname), you can bypass it by creating multiple accounts. It would be more natural and robust to limit by hostname, like our certificate limits. However, we currently only have two indexes on the authz table: the primary key, and

(`registrationID`,`identifier`,`status`,`expires`)

Since this limit is intended mainly to combat misconfigured clients, I think this is sufficient for now.

Corresponding PR for website: letsencrypt/website#125
2017-01-23 11:22:51 -08:00
Josh Soref 8adf9d41cf Spelling (#2500)
Various spelling fixes.
2017-01-16 10:44:52 -05:00
Daniel 5f7ec2217e
Spelling fixes 2016-11-30 13:17:56 -05:00
Roland Bracewell Shoemaker c5f99453a9 Switch CT submission RPC from CA -> RA (#2304)
With the current gRPC design the CA talks directly to the Publisher when calling SubmitToCT, which crosses security boundaries (secure internal segment -> internet-facing segment). This is dangerous if (however unlikely) the Publisher is compromised and there is a gRPC exploit that allows memory corruption on the caller end of an RPC, which could expose sensitive information or cause arbitrary issuance.

Instead we move the RPC call to the RA which is in a less sensitive network segment. Switching the call site from the CA -> RA is gated on adding the gRPC PublisherService object to the RA config.

Fixes #2202.
2016-11-08 11:39:02 -08:00
Daniel McCarney a6f2b0fafb Updates `go-jose` dep to v1.1.0 (#2314)
This commit updates the `go-jose` dependency to [v1.1.0](https://github.com/square/go-jose/releases/tag/v1.1.0) (Commit: aa2e30fdd1fe9dd3394119af66451ae790d50e0d). Since the import path changed from `github.com/square/...` to `gopkg.in/square/go-jose.v1/` this means removing the old dep and adding the new one.

The upstream go-jose library added a `[]*x509.Certificate` member to the `JsonWebKey` struct that prevents us from using a direct equality test against two `JsonWebKey` instances. Instead we now must compare the inner `Key` members.

The `TestRegistrationContactUpdate` function from `ra_test.go` was updated to populate the `Key` members used in testing instead of only using KeyID's to allow the updated comparisons to work as intended.

The `Key` field of the `Registration` object was switched from `jose.JsonWebKey` to `*jose.JsonWebKey ` to make it easier to represent a registration w/o a Key versus using a value with a nil `JsonWebKey.Key`.

I verified the upstream unit tests pass per contributing.md:
```
daniel@XXXXX:~/go/src/gopkg.in/square/go-jose.v1$ git show
commit aa2e30fdd1fe9dd3394119af66451ae790d50e0d
Merge: 139276c e18a743
Author: Cedric Staub <cs@squareup.com>
Date:   Thu Sep 22 17:08:11 2016 -0700

    Merge branch 'master' into v1
    
    * master:
      Better docs explaining embedded JWKs
      Reject invalid embedded public keys
      Improve multi-recipient/multi-sig handling

daniel@XXXXX:~/go/src/gopkg.in/square/go-jose.v1$ go test ./...
ok  	gopkg.in/square/go-jose.v1	17.599s
ok  	gopkg.in/square/go-jose.v1/cipher	0.007s
?   	gopkg.in/square/go-jose.v1/jose-util	[no test files]
ok  	gopkg.in/square/go-jose.v1/json	1.238s
```
2016-11-08 13:56:50 -05:00
Daniel McCarney 840badb498 Reverts pending auth/authz table merge. (#2297)
This PR reverts 27d531101f and undoes the merge of the `pendingAuthorizations` and `authz` tables. This change had unintended performance impacts on the `CountPendingAuthorizations` query that exacerbated load issues and needed to be addressed.
2016-10-31 10:31:19 -07:00
Daniel McCarney eb67ad4f88 Allow `validateEmail` to timeout w/o error. (#2288)
This PR reworks the validateEmail() function from the RA to allow timeouts during DNS validation of MX/A/AAAA records for an email to be non-fatal and match our intention to verify emails best-effort.

Notes:

bdns/problem.go - DNSError.Timeout() was changed to also treat context cancellation and context timeout as DNS timeouts. This matches what DNSError.Error() was already doing to set the error message and lets external callers of Timeout avoid duplicating that work.

bdns/mocks.go - the LookupMX mock was changed to support always.error and always.timeout in a manner similar to the LookupHost mock. Otherwise the TestValidateEmail unit test for the RA would fail when the MX lookup completed before the Host lookup because the error wouldn't be correct (empty DNS records vs a timeout or network error).

test/config/ra.json, test/config-next/ra.json - the dnsTries and dnsTimeout values were updated such that dnsTries * dnsTimeout was <= the WFE->RA RPC timeout (currently 15s in the test configs). This allows the dns lookups to all timeout without the overall RPC timing out.

Resolves #2260.
2016-10-27 11:56:12 -07:00
Daniel McCarney 39c7ce7b69 Properly abort `NewAuthorization` when SA RPC fails. (#2286)
The RA performs an RPC to the SA's `GetValidAuthorizations` function when attempting to find existing valid authorizations to reuse. Prior to this commit, if the RPC fails (e.g. due to a timeout) the calling code
logs the failure as a warning but fails to return the error and cease processing. This results in a nil panic when we later try to index `auths`.

This commit inserts the missing `return` to ensure we don't process further, thereby resolving #2274.

A test for this fix is provided with `TestReuseAuthorizationFaultySA`. Without f52f340 applied this test recreates the panic observed in #2274 and produces:

```
go test -p 1 -v -race --test.run TestReuseAuthorizationFaultySA github.com/letsencrypt/boulder/ra
=== RUN   TestReuseAuthorizationFaultySA
--- FAIL: TestReuseAuthorizationFaultySA (0.04s)
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xb code=0x1 addr=0x20 pc=0x4be2b8]

```

With f52f340 it passes. Yay!
2016-10-27 08:58:26 -07:00
Roland Bracewell Shoemaker ce679bad41 Implement key rollover (#2231)
Fixes #503.

Functionality is gated by the feature flag `AllowKeyRollover`. Since this functionality is only specified in ACME draft-03 and we mostly implement the draft-02 style, this takes some liberties in the implementation, which are described in the updated divergences doc. The `key-change` resource is used to side-step the draft-03 `url` requirement.
2016-10-27 10:22:09 -04:00
Jacob Hoffman-Andrews 1958dc9065 Update total issued count asynchronously. (#2246)
Previously the lock on total issued count would exacerbate problems when the
count query was slow, which it often is.

Fixes #1809.
2016-10-20 14:17:34 -07:00
Daniel McCarney 27d531101f Store new authorizations in the `authz` table (#2219)
To remove challenges whose expired/pending authzs are deleted, we want to introduce a foreign key relationship on the challenges table's authorizationID field with an instruction to cascade on delete (#2155). As pointed out in a comment, this is made difficult by the current usage of a separate pendingAuthorizations table for pending authorizations.

To be able to remove the pendingAuthorizations table entirely (#2163) we need to first stop using it. This PR introduces the code changes required to achieve this.

Notes:

The SA's NewPendingAuthorization function was updated to store all new pending auths in the authz table and to ensure the status is StatusPending.
The SA's GetAuthorization, UpdatePendingAuthorization, FinalizeAuthorization, and RevokeAuthorizationsByDomain functions were updated to properly handle the fact that a pending authz could be in either the pendingAuthorizations table, or the authz table, and to do the right thing accordingly.
Several places in the RA unit tests created a pending authorization with a status "Valid", then finalized it later. This broke when NewPendingAuthorization was changed to enforce Pending status before creating the authz row since the FinalizeAuthorization code expected to only finalize Valid rows. To fix this some of the RA unit tests were changed to explicitly set status to Valid before calling FinalizeAuthorization. This matches the true intention of the tests to quickly create a pending & then finalized authorization.
The expired-authz-purger utility was updated to purge from both the pendingAuthorizations and authz table as required.
The return values of RevokeAuthorizationsByDomain have changed slightly. Previously it returned a 2 element array where the first element was the number of pending authorizations revoked and the second element was the number of finalized authorizations revoked. This is changed so that now it is the number of rows from the pendingAuthorizations and authz tables respectively. E.g. the second count for the authz table may now include non-finalized authzs in its count of affected rows. The admin-revoker is the only place that used this SA method and it was updated appropriately to describe the "rows" change.
The "purger" database user needs to have a new GRANT SELECT, DELETE for the authz table in addition to its existing GRANT for the pendingAuthorizations table.
This resolves #2162
2016-10-18 09:39:59 -07:00
Roland Bracewell Shoemaker c6e3ef660c Re-apply 2138 with proper gating (#2199)
Re-applies #2138 using the new style of feature-flag gated migrations. Account deactivation is gated behind `features.AllowAccountDeactivation`.
2016-09-29 17:16:03 -04:00
Roland Bracewell Shoemaker 239bf9ae0a Very basic feature flag impl (#1705)
Updates #1699.

Adds a new package, `features`, which exposes methods to set and check if various internal features are enabled. The implementation uses global state to store the features so that services embedded in another service do not each require their own features map in order to check if something is enabled.

Requires a `boulder-tools` image update to include `golang.org/x/tools/cmd/stringer`.
2016-09-20 16:29:01 -07:00
Roland Bracewell Shoemaker 2c966c61b2 Revert "Allow account deactivation (#2138)" (#2188)
This reverts commit 6f3d078414, reversing
changes made to c8f1fb3e2f.
2016-09-19 11:20:41 -07:00
Jacob Hoffman-Andrews 6f3d078414 Allow account deactivation (#2138)
Fixes #2011.
2016-09-07 19:36:54 -04:00
Roland Bracewell Shoemaker c8f1fb3e2f Remove direct usages of go-statsd-client in favor of using metrics.Scope (#2136)
Fixes #2118, fixes #2082.
2016-09-07 19:35:13 -04:00
Roland Shoemaker dbf9afa7d6 Review fixes pt. 1 2016-08-25 16:28:58 -07:00
Roland Shoemaker c7e5ed1262 RA/WFE tests 2016-08-24 12:36:41 -07:00
Roland Bracewell Shoemaker 51ee04e6a9 Allow authorization deactivation (#2116)
Implements `valid` and `pending` authz deactivation.
2016-08-23 16:25:06 -04:00
Roland Bracewell Shoemaker 91bfd05127 Revert #2088 (#2137)
* Remove oldx509 usage

* Un-vendor old crypto/x509, crypto/x509/pkix, and encoding/asn1
2016-08-23 14:01:37 -04:00
Ben Irving 8ed5b1e6a1 Replace *AcmeURL with string (#2117)
Removes core.AcmeURL from boulder and uses string instead.

Fixes #1996
2016-08-11 13:27:19 -07:00
Ben Irving 8b622c805a Move MergeUpdate out of core (#2114)
Fixes #2022
2016-08-08 17:12:52 -07:00
Jacob Hoffman-Andrews 474b76ad95 Import forked x509 for parsing of CSRs with empty integers (#2088)
Part of #2080.

This change vendors `crypto/x509`, `crypto/x509/pkix`, and `encoding/asn1` from  1d5f6a765d. That commit is a direct child of the Go 1.5.4 release tag, so it contains the same code as the current Go version we are using. In that commit I rewrote imports in those packages so they depend on each other internally rather than calling out to the standard library, which would cause type disagreements.

I changed the imports in each place where we're parsing CSRs, and imported under a different name `oldx509`, both to avoid collisions and make it clear what's going on. Places that only use `x509` to parse certificates are not changed, and will use the current standard library.

This will unblock us from moving to Go 1.6, and subsequently Go 1.7.
2016-07-28 10:38:33 -04:00
Patrick Figel 8cd74bf766 Make (pending)AuthorizationLifetime configurable (#2028)
Introduces the `authorizationLifetimeDays` and `pendingAuthorizationLifetimeDays` configuration options for `RA`.

If the values are missing from configuration, the code defaults back to the current values (300/7 days).

fixes #2024
2016-07-12 15:18:22 -04:00
Daniel McCarney 7e946eaacc Registration update optimizations (#2001)
This PR adds two optimizations to fix the optimistic lock errors observed in #1986.

First, the WFE now returns early for registration POSTs (before invoking the RA and SA) when the POST body is the trivial update (`{"resource":"reg"}`). This prevents any DB operations from being performed when there is no work to be done.

Second, the RA now tracks whether an update actually changes the base registration's `Contact` slice or `Agreement` string. If the proposed update doesn't change either of these fields then the RA will return early before handing the update to the SA.
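
A simplified sketch of the RA-side no-op detection (field types are stand-ins for the real registration struct):

```
// Sketch: skip the SA update when the proposed update changes neither
// Contact nor Agreement. Types are simplified stand-ins.
package ra

type regUpdate struct {
	Contact   []string
	Agreement string
}

func updateChangesReg(base, update regUpdate) bool {
	if update.Agreement != "" && update.Agreement != base.Agreement {
		return true
	}
	if update.Contact == nil {
		return false
	}
	if len(update.Contact) != len(base.Contact) {
		return true
	}
	for i := range update.Contact {
		if update.Contact[i] != base.Contact[i] {
			return true
		}
	}
	return false
}
```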

Both changes save database operations from being performed needlessly and will help avoid the optimistic lock errors we observed when a problematic client was POSTing the trivial update repeatedly in a short period.

The fix was verified as follows: I checked out master and artificially introduced lock contention into the SA by adding a 2s sleep into `UpdateRegistration` between fetching the `existingRegModel` to get the `LockCol` value and calling `ssa.dbMap.Update`. With the sleep in place & two certbot clients posting matching registration updates the lock contention error is produced as expected. After checking out the `empty-reg-updates` branch, re-adding the sleep to the SA, and performing the same two client reg updates no error is produced.
2016-07-07 13:40:55 -04:00
Roland Bracewell Shoemaker 97bc63c484 Fix RA test race (#1995)
The RA test uses a `DummyValidationAuthority` which is called from `ra.UpdateAuthorization` in a goroutine. Both the tester and the spawned goroutine want to mutate/read fields on the `DummyValidationAuthority`, which causes a race. This PR fixes the race by blocking on a `chan struct{}` until the `DummyValidationAuthority` has mutated the required state.

Fixes #1980.

https://github.com/letsencrypt/boulder/pull/1995
2016-06-30 15:14:38 -07:00
Simone Carletti 7172e49650 Replace x/net/publicsuffix with weppos/publicsuffix-go (#1969)
This PR replaces the `x/net/publicsuffix` package with `weppos/publicsuffix-go`.

The conversations that led to this decision are #1479 and #1374. To summarize the discussion, the main issue with `x/net/publicsuffix` is that the package compiles the list into the Go source code and doesn't provide a way to easily pull updates (e.g. by re-parsing the original PSL) unless the entire package is recompiled.

The PSL update frequency is almost daily, which makes it very hard to recompile the official Golang package to stay up-to-date with all the changes. Moreover, Golang maintainers expressed some concerns about rebuilding and committing changes with a frequency that would keep the package in sync with the original PSL. See https://github.com/letsencrypt/boulder/issues/1374#issuecomment-182429297

`weppos/publicsuffix-go` contains a compiled version of the list that is updated weekly (or more frequently). Moreover, the package can read and parse a PSL from a String or a File, which will effectively decouple the Boulder source code from the list itself. The main benefit is that it will be possible to update the definition by simply downloading the latest list and restarting the application (assuming the list is persisted in memory).
2016-06-30 15:03:14 -07:00
Maxime Boisvert 821d572366 Adds RejectedIdentifierError on Blacklisted error. (#1944)
Boulder uses MalformedRequestError as a universal error. This pull request adds the RejectedIdentifierError and uses it for blacklist errors.

For a client implementation, it is easier and cleaner to use an exception than to parse the error message.

1336c42813/policy/pa.go (L131)

Fixes #1938

PR in acme : ietf-wg-acme/acme#142
2016-06-29 13:53:38 -07:00
Daniel McCarney 3a6a254c5c Expose last issuance timestamp via expvar (#1982)
This PR adds an expvar.Int published under the key "lastIssuance" that contains the timestamp of the last successful certificate issuance. This allows easy creation of a script that monitors the RA debug server (port 8002) to ensure that there has been a successful issuance within a set period (e.g. last five minutes).

The underlying expvar.Int code uses the atomic package to ensure safe updates/reads across multiple goroutines.
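
For reference, the pattern looks roughly like this (sketch; the key name matches the description above):

```
// Sketch: an expvar.Int published as "lastIssuance" holding the Unix
// timestamp of the last successful issuance; expvar handles atomic updates.
package ra

import (
	"expvar"
	"time"
)

var lastIssuance = expvar.NewInt("lastIssuance")

func noteIssuance(now time.Time) {
	lastIssuance.Set(now.Unix())
}
```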

This resolves #1945 and was selected in place of the more complex circular bucket design. While the timestamp approach doesn't provide the issuance volume as readily, it is less complex and meets the immediate need of a reliable external monitoring process hook.

https://github.com/letsencrypt/boulder/pull/1982
2016-06-27 10:38:35 -07:00
Simone Carletti 7f7183fab2 Increase timeout to fix test failures on Mac OSX (#1966)
It looks like the filesystem on Mac OSX El Capitan
(and potentially previous versions) requires at least
one second before you get a different timestamp.

Any value lower than one second caused:

- a deadlock error in a test in the reloader module
- a rate limit error in the ra module

See GH-1955.
2016-06-23 14:13:05 -07:00
Ben Irving 77e64fef79 Disallow non-ASCII email addresses (#1953)
This PR adds a check in the registration authority for non-ASCII encoded characters in an email address. This is due to a 'funky email implementation'.

Fixes #1350
2016-06-21 17:53:38 -07:00
Jacob Hoffman-Andrews 4e0f96d924 Remove last vestiges of challenge.AccountKey. (#1949)
This is a followup from https://github.com/letsencrypt/boulder/pull/1942. That PR stopped setting challenge.AccountKey. This one removes it entirely.

Fixes #1948
2016-06-21 16:25:58 -07:00
Jacob Hoffman-Andrews 0535ac78d7 Stop setting AccountKey in challenges (#1942)
In https://github.com/letsencrypt/boulder/pull/774 we introduced an account key stored with the challenge. This was a stopgap fix to the now-defunct SimpleHTTP and DNS challenges in the face of https://mailarchive.ietf.org/arch/msg/acme/F71iz6qq1o_QPVhJCV4dqWf-4Yc. However, we no longer offer or implement those challenges, so the extra field is unnecessary. It also takes up a huge amount of space in the challenges table, which is our biggest table. SimpleHTTP and DNS challenges were removed in https://github.com/letsencrypt/boulder/pull/1247.

We can provide a follow-up migration to delete the column later, once we have a plan for large migrations without downtime.

Fixes #1909
2016-06-20 14:26:53 -07:00
Daniel McCarney 778c9bba86 Fix FQDNSet rate limit exemption. (#1935)
As reported in #1925 the Certificates per Domain rate limit was being
incorrectly enforced on certificate renewals for FQDN sets that have
been previously issued. This is counter to the described rate
limit policies[0] that detail a separate rate limit for certificates
issued for the "exact same set of Fully Qualified Domain Names".

The bug was caused by the result of `domainsForRateLimiting` overwriting
the original `names []string` provided to the RA's
`checkCertificatesPerNameLimit` function. This meant instead of looking
for an existing FQDN set for the full set of domain names being
requested we checked for an FQDN set for just the eTLD+1's of the
domains. (e.g "www.example.com, foo.example.com, bar.example.com" vs
"example.com").

This commit preserves the original `names` values for doing an FQDN set
lookup and uses the `tldNames` from `domainsForRateLimiting` elsewhere.
This fixes #1925.

A test is added to ensure that `checkCertificatesPerNameLimit` does the
correct thing both with and without an existing FQDN set.

[0] https://community.letsencrypt.org/t/rate-limits-for-lets-encrypt/6769
2016-06-16 13:50:39 -07:00
Daniel McCarney cd2d1c4f6b Allow removing registration contact. (#1923)
The RA UpdateRegistration function merges a base registration object with an update by calling Registration.MergeUpdate. Prior to this commit MergeUpdate only allowed the updated registration object to overwrite the Contact field of the existing registration if the updated reg. defined at least one AcmeURL. This prevented clients from being able to outright remove the contact associated with an existing registration.

This commit removes the len() check on the input.Contact in MergeUpdate to allow the r.Contact field to be overwritten by a []*core.AcmeURL(nil) Contact field. Subsequently clients can now send an empty contacts list in the update registration POST in order to remove their reg contact.

Fixes #1846

* Allow removing registration contact.
* Adds a test for `MergeUpdate` contact removal.
* Change `Registration.Contact` type to `*[]*core.AcmeURL`.
* End validateContacts early for empty contacts
* Test removing reg. contact more thoroughly.
2016-06-13 11:02:29 -07:00
Daniel McCarney 9abc212448 Reuse valid authz for subsequent new authz requests (#1921)
Presently clients may request a new AuthZ be created for a domain that they have already proved authorization over. This results in unnecessary bloat in the authorizations table and duplicated effort.

This commit alters the `NewAuthorization` function of the RA such that before going through the work of creating a new AuthZ it checks whether there already exists a valid AuthZ for the domain/regID that expires in more than 24 hours from the current date. If there is, then we short circuit creation and return the existing AuthZ. When this case occurs the `RA.ReusedValidAuthz` counter is incremented to provide visibility.
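
A simplified sketch of the reuse test (the struct is a stand-in for the fields the real authorization object provides; the 24-hour threshold comes from the description above):

```
// Sketch: an existing authorization is reused only if it is valid and
// expires more than 24 hours from now; otherwise a new one is created.
package ra

import "time"

type authz struct {
	Status  string
	Expires time.Time
}

func reusable(a *authz, now time.Time) bool {
	return a != nil && a.Status == "valid" && a.Expires.After(now.Add(24*time.Hour))
}
```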

Since clients requesting a new AuthZ and getting an AuthZ back expect to turn around and post updates to the corresponding challenges we also return early in `UpdateAuthorization` when asked to update an AuthZ that is already valid. When this case occurs the `RA.ReusedValidAuthzChallenge` counter is incremented.

All of the above behaviour is gated by a new RA config flag `reuseValidAuthz`. In the default case (false) the RA does **not** reuse any AuthZ's and instead maintains the historic behaviour: always creating a new AuthZ when requested, regardless of whether there are already valid AuthZ's that could be reused. In the true case (enabled only in `boulder-config-next.json`) the AuthZ reuse described above is enabled.

Resolves #1854
2016-06-10 16:44:16 -04:00
Ben Irving 438580f206 Remove last of UseNewVARPC (#1914)
`UseNewVARPC` is no longer necessary and is safe to be removed. We default to using the newer VA RPC code.
2016-06-09 10:12:46 -04:00
Daniel McCarney 4c289f2a8f Reload ratelimit policy automatically at runtime (#1894)
Resolves #1810 by automatically updating the RA ratelimit.RateLimitConfig whenever the backing config file is changed. Much like the Policy Authority uses a reloader instance to support updating the Hostname policy on the fly, this PR changes the Registration Authority to use a reloader for the rate limit policy file.

Access to the ra.rlPolicies member is protected with a RWMutex now that there is a potential for the values to be reloaded while a reader is active.

A test is introduced to ensure that writing a new policy YAML to the policy config file results in new values being set in the RA's rlPolicies instance.

https://github.com/letsencrypt/boulder/pull/1894
2016-06-08 12:11:46 -07:00
Jacob Hoffman-Andrews 92df4d0fc2 Rename authorities to shorter names. (#1878)
Fixes #1875.
2016-06-03 13:35:28 -07:00