This PR is a rework of what was originally https://github.com/letsencrypt/boulder/pull/3382, integrating the design feedback proposed by @jsha: https://github.com/letsencrypt/boulder/pull/3382#issuecomment-359912549
This PR removes the stored Order status field and replaces it with a value that is calculated on-the-fly by the SA when fetching an order, based on the order's associated authorizations.
In summary (and order of precedence):
* If any of the order's authorizations are invalid, the order is invalid.
* If any of the order's authorizations are deactivated, the order is deactivated.
* If any of the order's authorizations are pending, the order is pending.
* If all of the order's authorizations are valid, and there is a certificate serial, the order is valid.
* If all of the order's authorizations are valid, and we have begun processing, but there is no certificate serial, the order is processing.
* If all of the order's authorizations are valid, and we haven't begun processing, then the order is pending, awaiting a finalization request.
This avoids having to explicitly update the order status when an associated authorization changes status.
The RA's implementation of new-order is updated to only reuse an existing order if the calculated status is pending. This avoids giving back invalid or deactivated orders to clients.
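For illustration, a minimal sketch of the precedence rules above, using hypothetical types and names (the real calculation lives in the SA and operates on its own models):

```go
// status is a stand-in for Boulder's core status constants.
type status string

const (
	statusInvalid     status = "invalid"
	statusDeactivated status = "deactivated"
	statusPending     status = "pending"
	statusValid       status = "valid"
	statusProcessing  status = "processing"
)

// orderStatus applies the precedence rules to the statuses of an
// order's authorizations plus the order's own processing state.
func orderStatus(authzStatuses []status, beganProcessing bool, certSerial string) status {
	var anyInvalid, anyDeactivated, anyPending bool
	for _, s := range authzStatuses {
		switch s {
		case statusInvalid:
			anyInvalid = true
		case statusDeactivated:
			anyDeactivated = true
		case statusPending:
			anyPending = true
		}
	}
	switch {
	case anyInvalid:
		return statusInvalid
	case anyDeactivated:
		return statusDeactivated
	case anyPending:
		return statusPending
	case certSerial != "":
		return statusValid
	case beganProcessing:
		return statusProcessing
	default:
		// All authorizations valid, not yet processing: the order is
		// pending, awaiting a finalization request.
		return statusPending
	}
}
```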
Resolves #3333
This change adds a feature flag, TLSSNIRevalidation. When it is enabled, Boulder
will create new authorization objects with TLS-SNI challenges if the requesting
account has issued a certificate with the relevant domain name, and was the most
recent account to do so*. This setting overrides the configured list of
challenges in the PolicyAuthority, so even if TLS-SNI is disabled in general, it
will be enabled for revalidation.
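Roughly, the decision looks like the following sketch. The helper names (`previousCertificateExists` in particular) are illustrative stand-ins for the real PA/SA plumbing, not Boulder's actual API:

```go
// challengesForIdentifier returns the challenge types to offer for a
// domain under the behavior described above.
func challengesForIdentifier(domain string, regID int64, tlsSNIRevalidation bool) []string {
	challenges := []string{"http-01", "dns-01"} // the PA's configured list (illustrative)
	if tlsSNIRevalidation && previousCertificateExists(domain, regID) {
		// Revalidation: offer TLS-SNI even though it is globally disabled.
		challenges = append(challenges, "tls-sni-01")
	}
	return challenges
}

// previousCertificateExists is a stand-in for the SA query described
// above: was this account the most recent to issue for the domain?
func previousCertificateExists(domain string, regID int64) bool {
	// ... database lookup elided ...
	return false
}
```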
Note that this interacts with EnforceChallengeDisable. Because
EnforceChallengeDisable causes additional checks at validation time and at
issuance time, we need to update those two places as well. We'll send a
follow-up PR with that.
*We chose to make this work only for the most recent account to issue, even if
there were overlapping certificates, because it significantly simplifies the
database access patterns and should work for 95+% of cases.
Note that this change will let an account revalidate and reissue for a domain
even if the previous issuance on that account used http-01 or dns-01. This also
simplifies implementation, and fits within the intent of the mitigation plan: If
someone previously issued for a domain using http-01, we have high confidence
that they are actually the owner, and they are not going to "steal" the domain
from themselves using tls-sni-01.
Also note: this change doesn't work properly with ReusePendingAuthz: true.
Specifically, if you attempted issuance in the last couple days and failed
because there was no tls-sni challenge, you'll still have an http-01 challenge
lying around, and we'll reuse that; then your client will fail due to lack of
tls-sni challenge again.
This change was joint work between @rolandshoemaker and @jsha.
This updates the PA component to allow authorization challenge types that are globally disabled if the account ID owning the authorization is on a configured whitelist for that challenge type.
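A sketch of the check, with illustrative names rather than the PA's actual API:

```go
// challengeTypeAllowed reports whether a challenge type may be used by
// an account: either the type is globally enabled, or the account is
// on the configured whitelist for that type.
func challengeTypeAllowed(
	challengeType string,
	accountID int64,
	enabled map[string]bool,
	whitelist map[string]map[int64]bool,
) bool {
	if enabled[challengeType] {
		return true
	}
	return whitelist[challengeType][accountID]
}
```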
This commit adds pending order reuse. Subsequent to this commit multiple
add-order requests from the same account ID for the same set of order
names will result in only one order being created. Orders are only
reused while they are not expired. Finalized orders will not be reused
for subsequent new-order requests, allowing duplicate order issuance once an earlier order has been finalized.
Note that this is a second level of reuse, building on the pending
authorization reuse that's done between separate orders already.
To efficiently find an appropriate order ID given a set of names,
a registration ID, and the current time, a new orderFqdnSets table is
added with appropriate indexes and foreign keys.
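The lookup the new table enables looks roughly like this sketch, under an assumed schema (the column names are illustrative):

```go
import (
	"database/sql"
	"time"
)

// findReusableOrder returns the ID of an unexpired order placed by the
// same registration for the same FQDN set, if one exists.
func findReusableOrder(db *sql.DB, setHash []byte, regID int64, now time.Time) (int64, error) {
	var orderID int64
	err := db.QueryRow(
		`SELECT orderID FROM orderFqdnSets
		 WHERE setHash = ? AND registrationID = ? AND expires > ?
		 LIMIT 1`,
		setHash, regID, now,
	).Scan(&orderID)
	return orderID, err
}
```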
Resolves #3258
This commit adds a new rate limit to restrict the number of outstanding
pending orders per account. If the threshold for this rate limit is
crossed, subsequent new-order requests will return a 429 response.
Note: Since the rate limit object itself defines an `Enabled()` test
based on whether or not it has been configured, there is **not**
a feature flag for this change.
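The convention is roughly the following sketch (field names are assumptions; the point is that configuration itself acts as the on/off switch):

```go
import "time"

// RateLimitPolicy is a sketch of a configured rate limit.
type RateLimitPolicy struct {
	Threshold int64
	Window    time.Duration
}

// Enabled reports whether the limit was configured at all: an
// unconfigured policy has a zero threshold and is skipped.
func (r RateLimitPolicy) Enabled() bool {
	return r.Threshold != 0
}
```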
Resolves https://github.com/letsencrypt/boulder/issues/3246
This PR implements issuance for wildcard names in the V2 order flow. By policy, pending authorizations for wildcard names only receive a DNS-01 challenge for the base domain. We do not re-use authorizations for the base domain that do not come from a previous wildcard issuance (e.g. a normal authorization for example.com turned valid by way of a DNS-01 challenge will not be reused for a *.example.com order).
The wildcard prefix is stripped off of the authorization identifier value in two places:
* When presenting the authorization to the user - ACME forbids having a wildcard character in an authorization identifier.
* When performing validation - We validate the base domain name without the *. prefix.
This PR is largely a rewrite/extension of #3231. Instead of using a pseudo-challenge-type (DNS-01-Wildcard) to indicate an authorization & identifier correspond to the base name of a wildcard order name we instead allow the identifier to take the wildcard order name with the *. prefix.
This PR implements order finalization for the ACME v2 API.
In broad strokes this means:
* Removing the CSR from order objects & the new-order flow
* Adding identifiers to the order object & new-order
* Providing a finalization URL as part of orders returned by new-order
* Adding support to the WFE's Order endpoint to receive finalization POST requests with a CSR
* Updating the RA to accept finalization requests and to ensure orders are fully validated before issuance can proceed
* Updating the SA to allow finding order authorizations & updating orders.
* Updating the CA to accept an Order ID to log when issuing a certificate corresponding to an order object
Resolves #3123
For the new-order endpoint only. This also does some refactoring of the order of operations in `ra.NewAuthorization` in order to reduce the duplication of code relating to creating pending authorizations. Existing tests still seem to work as intended, but a close eye should be given to this since we don't have integration tests yet that test it end to end. This also changes the inner type of `grpc.StorageAuthorityServerWrapper` to `core.StorageAuthority` so that we can avoid a circular import that is created by needing to import `grpc.AuthzToPB` and `grpc.PBToAuthz` in `sa/sa.go`.
This is a big change but should considerably improve the performance of the new-order flow.
Fixes #2955.
* Remove all of the errors under core. Their purpose is now served by the internal `errors` package, and they were almost entirely unused. The remaining uses were switched to `errors`.
* Remove errors.NotSupportedError. It was used in only one place (ca.go), and that usage is more appropriately a ServerInternal error.
Stub out IssueCertificateForPrecertificate() enough so that we can continue with the PRs that implement & test it in parallel with PRs that implement and test the calling side (via mock implementations of the CA side).
* CA: Stub IssuePrecertificate gPRC method.
* CA: Implement IssuePrecertificate.
* CA: Test Precertificate flow in TestIssueCertificate().
* Move verification of certificate storage.
* Add IssuePrecertificate tests.
* Add CT precertificate poison extension to CFSSL whitelist.
CFSSL won't allow us to add an extension to a certificate unless that
extension is in the whitelist.
According to its documentation, "Extensions requested in the CSR are
ignored, except for those processed by ParseCertificateRequest (mainly
subjectAltName)." Still, at least we need to add tests to make sure a
poison extension in a CSR isn't copied into the final certificate.
This allows us to avoid making invasive changes to CFSSL.
* CA: Test precertificate issuance in TestInvalidCSRs().
* CA: Only support IssuePrecertificate() if it is explicitly enabled.
* CA: Test that we produce CT poison extensions in the valid form.
The poison extension must be critical in order to work correctly. It probably wouldn't
matter as much what the value is, but the spec requires the value to be ASN.1 NULL, so
verify that it is.
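For reference, the extension in its valid form (RFC 6962: OID 1.3.6.1.4.1.11129.2.4.3, critical, value ASN.1 NULL):

```go
import (
	"crypto/x509/pkix"
	"encoding/asn1"
)

// ctPoisonExtension is the CT precertificate poison extension. The
// value bytes 0x05 0x00 are the DER encoding of ASN.1 NULL.
var ctPoisonExtension = pkix.Extension{
	Id:       asn1.ObjectIdentifier{1, 3, 6, 1, 4, 1, 11129, 2, 4, 3},
	Critical: true,
	Value:    []byte{0x05, 0x00},
}
```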
Switch certificates and certificateStatus to use autoincrement primary keys to avoid performance problems with clustered indexes (fixes #2754).
Remove empty externalCerts and identifierData tables (fixes #2881).
Make progress towards deleting unnecessary LockCol and subscriberApproved fields (#856, #873) by making them NULLable and not including them in INSERTs and UPDATEs.
The existing ReusePendingAuthz implementation had some bugs:
* It would recycle deactivated authorizations, which then couldn't be fulfilled. (#2840)
* Since it was implemented in the SA, it wouldn't get called until after the RA checks the Pending Authorizations rate limit, which meant it wouldn't fulfill its intended purpose of making accounts less likely to get stuck in a Pending Authorizations limited state. (#2831)
This factors out the reuse functionality, which used to be inside an "if" statement in the SA. Now the SA has an explicit GetPendingAuthorization RPC, which gets called from the RA before calling NewPendingAuthorization. This happens to obsolete #2807, by putting the recycling logic for both valid and pending authorizations in the RA.
This commit replaces the Boulder dependency on
gopkg.in/square/go-jose.v1 with gopkg.in/square/go-jose.v2. This is
necessary both to stay in front of bitrot and because the ACME v2 work
will require a feature from go-jose.v2 for JWS validation.
The largest part of this diff is cosmetic changes:
* Changing import paths
* jose.JsonWebKey -> jose.JSONWebKey
* jose.JsonWebSignature -> jose.JSONWebSignature
* jose.JoseHeader -> jose.Header
Some more significant changes were caused by updates in the API for
creating new jose.Signer instances. Previously we constructed these
with jose.NewSigner(algorithm, key). Now they are created with
jose.NewSigner(jose.SigningKey{}, jose.SignerOptions{}). At present all
signers specify EmbedJWK: true, but this will likely change with
follow-up ACME V2 work.
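For example, the v2 construction looks like this (the helper name is ours; the API is go-jose.v2's):

```go
import (
	"crypto/rsa"

	jose "gopkg.in/square/go-jose.v2"
)

// newEmbedJWKSigner builds a signer that embeds the public JWK in the
// protected header of each JWS it produces.
func newEmbedJWKSigner(key *rsa.PrivateKey) (jose.Signer, error) {
	return jose.NewSigner(
		jose.SigningKey{Algorithm: jose.RS256, Key: key},
		&jose.SignerOptions{EmbedJWK: true},
	)
}
```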
Another change was the removal of the jose.LoadPrivateKey function
that the wfe tests relied on. The jose v2 API removed these functions,
moving them to a cmd's main package where we can't easily import them.
This function was reimplemented in the WFE's test code & updated to fail
fast rather than return errors.
Per CONTRIBUTING.md I have verified the go-jose.v2 tests at the imported
commit pass:
```
ok gopkg.in/square/go-jose.v2 14.771s
ok gopkg.in/square/go-jose.v2/cipher 0.025s
? gopkg.in/square/go-jose.v2/jose-util [no test files]
ok gopkg.in/square/go-jose.v2/json 1.230s
ok gopkg.in/square/go-jose.v2/jwt 0.073s
```
Resolves #2880
This is a step towards the long-term goal of eliminating wrappers and a step
towards the short-term goal of making it easier to refactor ca/ca_test.go to
add testing of precertificate-based issuance.
Prior to this PR the SA's `CountRegistrationsByIP` treated IPv6
differently than IPv4 by counting registrations within a /48 for IPv6 as
opposed to exact matches for IPv4. This PR updates
`CountRegistrationsByIP` to treat IPv4 and IPv6 the
same, always matching exactly. The existing RegistrationsPerIP rate
limit policy will be applied against this exact matching count.
A new `CountRegistrationsByIPRange` function is added to the SA that
performs the historic matching process, e.g. for IPv4 it counts exactly
the same as `CountRegistrationsByIP`, but for IPv6 it counts within
a /48.
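The range computation amounts to the following sketch (the helper name is illustrative):

```go
import "net"

// registrationIPRange returns the network a registration IP is counted
// under for the fuzzy limit: an exact /32 for IPv4, the containing /48
// for IPv6.
func registrationIPRange(ip net.IP) *net.IPNet {
	if v4 := ip.To4(); v4 != nil {
		return &net.IPNet{IP: v4, Mask: net.CIDRMask(32, 32)}
	}
	mask := net.CIDRMask(48, 128)
	return &net.IPNet{IP: ip.Mask(mask), Mask: mask}
}
```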
A new `RegistrationsPerIPRange` rate limit policy is added to allow
configuring the threshold/window for the fuzzy /48 matching registration
limit. Stats for the "Exceeded" and "Pass" events for this rate limit are
tracked in a separate `RegistrationsByIPRange` stats scope under
the `RateLimit` scope, to allow us to track them separately from the exact
registrations per IP rate limit.
Resolves https://github.com/letsencrypt/boulder/issues/2738
This PR introduces a new feature flag "IPv6First".
When the "IPv6First" feature is enabled the VA's HTTP dialer and TLS SNI
(01 and 02) certificate fetch requests will attempt to automatically
retry when the initial connection was to IPv6 and there is an IPv4
address available to retry with.
This resolves https://github.com/letsencrypt/boulder/issues/2623
This PR removes two berrors that aren't used anywhere in the codebase:
* TooManyRequests, a holdover from AMQP that is no longer used.
* UnsupportedIdentifier, used just for rejecting IDNs, which we no longer do.
In addition, the SignatureValidation error was only used by the WFE, so it is moved there and unexported.
Note for reviewers: To remove berrors.UnsupportedIdentifierError I replaced the errIDNNotSupported error in policy/pa.go with a berrors.MalformedError with the same name. This allows removing UnsupportedIdentifierError ahead of #2712 which removes the IDNASupport feature flag. This seemed OK to me, but I can restore UnsupportedIdentifierError and clean it up after 2712 if that's preferred.
Resolves #2709
Prior to this PR, if a domain was an exact match to a public suffix
list entry, the certificates per name rate limit was applied based on the
count of certificates issued for that exact name and all of its
subdomains.
This PR introduces an exception such that exact public suffix
matches correctly have the certificate per name rate limit applied based
on only exact name matches.
In order to accomplish this a new RPC is added to the SA
`CountCertificatesByExactNames`. This operates similarly to the existing
`CountCertificatesByNames` but does *not* include subdomains in the
count, only exact matches to the names provided. The usage of this new
RPC is feature flag gated behind the "CountCertificatesExact" feature flag.
The RA unit tests are updated to test the new code paths both with and
without the feature flag enabled.
Resolves #2681
This change changes the return values from boolean to error.
It makes `checkConsistency` an internal function and removes the
optional argument in favor of making checks explicit where they are
used.
It also renames those functions to CheckConsistency* to not
give the impression of still returning boolean values.
Signed-off-by: David Calavera <david.calavera@gmail.com>
Generate first OCSP response in ca.IssueCertificate instead of ocsp-updater.newCertificateTick
if features.GenerateOCSPEarly is enabled. Adds a new field to the sa.AddCertificate RPC for
the OCSP response and only adds it to the certificate status + sets ocspLastUpdated if it is a
non-empty slice. ocsp-updater.newCertificateTick stays the same so we can catch certificates
that were successfully signed + stored but an OCSP response couldn't be generated (for whatever
reason).
Fixes #2477.
This patch removes all usages of the `core.XXXError` and almost all usages of `probs` outside of the WFE and VA and replaces them with a unified internal error type. Since the VA uses `probs.ProblemDetails` quite extensively in challenges, and currently stores them in the DB, I've saved that work for a follow-up change (it'll also require a migration). Since `ProblemDetails` should only ever be exposed to end-users, all of its related logic should be moved into the `WFE`, but since it still needs to be exposed to the VA and SA I've left it in place for now.
The new internal `errors` package offers the same convenience functions as `probs` does as well as a new simpler type testing method. A few small changes have also been made to error messages, mainly adding the library and function name to internal server errors for easier debugging (i.e. where a number of functions return the exact same errors and there is no other way to distinguish which method threw the error).
Also adds proper encoding of internal errors transferred over gRPC (the current encoding scheme is kept for `core` and `probs` errors since it'll ideally be removed after we deploy this and follow-up changes) using `grpc/metadata` instead of the gRPC status codes.
Fixes #2507. Updates #2254 and #2505.
I think these are all the necessary changes to implement TLS-SNI-02 validation, according to section 7.3 of draft-05:
https://tools.ietf.org/html/draft-ietf-acme-acme-05#section-7.3
I don't have much experience with this code, so I'd really appreciate your feedback.
Signed-off-by: David Calavera <david.calavera@gmail.com>
This PR introduces the ability for the ocsp-updater to only resubmit certificates to logs that we are missing SCTs from. Prior to this commit when a certificate was missing one or more SCTs we would submit it to every log, causing unnecessary overhead for us and the log operator.
To accomplish this a new RPC endpoint is added to the Publisher service "SubmitToSingleCT". Unlike the existing "SubmitToCT" this RPC endpoint accepts a log URI and public key in addition to the certificate DER bytes. The certificate is submitted directly to that log, and a cache of constructed resources is maintained so that subsequent submissions to the same log can reuse the stat name, verifier, and submission client.
Resolves #1679
Adds a gRPC server to the SA and SA gRPC Clients to the WFE, RA, CA, Publisher, OCSP updater, orphan finder, admin revoker, and expiration mailer.
Also adds a CA gRPC client to the OCSP Updater which was missed in #2193.
Fixes #2347.
The only consumer of `core/reverse-name.go` and the `core.ReverseName`
function was the SA. In keeping with minimizing the junk that collects
up in `core` this commit moves `core.ReverseName` into the SA as the
unexported `reverseName` function. There was no unit test for
`ReverseName` so this commit also adds a unit test to `sa_test.go`.
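A sketch of what the helper does (the real implementation is now the SA's):

```go
import "strings"

// reverseName turns "www.example.com" into "com.example.www", so that
// names under the same domain sort together and suffix queries become
// efficient index range scans.
func reverseName(domain string) string {
	labels := strings.Split(domain, ".")
	for i, j := 0, len(labels)-1; i < j; i, j = i+1, j-1 {
		labels[i], labels[j] = labels[j], labels[i]
	}
	return strings.Join(labels, ".")
}
```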
This commit updates the `go-jose` dependency to [v1.1.0](https://github.com/square/go-jose/releases/tag/v1.1.0) (Commit: aa2e30fdd1fe9dd3394119af66451ae790d50e0d). Since the import path changed from `github.com/square/...` to `gopkg.in/square/go-jose.v1/` this means removing the old dep and adding the new one.
The upstream go-jose library added a `[]*x509.Certificate` member to the `JsonWebKey` struct that prevents us from using a direct equality test against two `JsonWebKey` instances. Instead we now must compare the inner `Key` members.
The `TestRegistrationContactUpdate` function from `ra_test.go` was updated to populate the `Key` members used in testing instead of only using KeyID's to allow the updated comparisons to work as intended.
The `Key` field of the `Registration` object was switched from `jose.JsonWebKey` to `*jose.JsonWebKey` to make it easier to represent a registration without a Key versus using a value with a nil `JsonWebKey.Key`.
I verified the upstream unit tests pass per contributing.md:
```
daniel@XXXXX:~/go/src/gopkg.in/square/go-jose.v1$ git show
commit aa2e30fdd1fe9dd3394119af66451ae790d50e0d
Merge: 139276c e18a743
Author: Cedric Staub <cs@squareup.com>
Date: Thu Sep 22 17:08:11 2016 -0700
Merge branch 'master' into v1
* master:
Better docs explaining embedded JWKs
Reject invalid embedded public keys
Improve multi-recipient/multi-sig handling
daniel@XXXXX:~/go/src/gopkg.in/square/go-jose.v1$ go test ./...
ok gopkg.in/square/go-jose.v1 17.599s
ok gopkg.in/square/go-jose.v1/cipher 0.007s
? gopkg.in/square/go-jose.v1/jose-util [no test files]
ok gopkg.in/square/go-jose.v1/json 1.238s
```
Protobuf files need to be regenerated because (I think) Golang 1.7.3 uses a somewhat different method of ordering fields in a struct when marshaling to bytes.
Fixes #503.
Functionality is gated by the feature flag `AllowKeyRollover`. Since this functionality is only specified in ACME draft-03 and we mostly implement the draft-02 style this takes some liberties in the implementation, which are described in the updated divergences doc. The `key-change` resource is used to side-step draft-03 `url` requirement.
This PR adds a migration to create two new fields on the `certificateStatus` table: `notAfter` and `isExpired`. The rationale for these fields is explained in #1864. Usage of these fields is gated behind `features.CertStatusOptimizationsMigrated` per [CONTRIBUTING.md](https://github.com/letsencrypt/boulder/blob/master/CONTRIBUTING.md#gating-migrations). This flag should be set to true **only** when the `20160817143417_CertStatusOptimizations.sql` migration has been applied.
Points of difference from #2132 (the initial preparatory "all-in-one go" PR):
**Note 1**: Updating the `isExpired` field in the OCSP updater can not be done yet, the `notAfter` field needs to be fully populated first - otherwise a separate query or a messy `JOIN` would have to be used to determine if a certStatus `isExpired` by using the `certificates` table's `expires` field.
**Note 2**: Similarly we can't remove the `JOIN` on `certificates` from the `findStaleOCSPResponse` query yet until all DB rows have `notAfter` populated. This will happen in a separate **Part Two** PR.
Fixes #140.
This patch allows users to specify the following revocation reasons, based on my interpretation of the meaning of the codes, though this could use confirmation from others (see the sketch after this list).
* unspecified (0)
* keyCompromise (1)
* affiliationChanged (3)
* superseded (4)
* cessationOfOperation (5)
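As a sketch, the accepted subset as a Go lookup table (the codes and names are the RFC 5280 CRLReason values listed above):

```go
// allowedRevocationReasons maps the accepted RFC 5280 CRLReason codes
// to their names.
var allowedRevocationReasons = map[int]string{
	0: "unspecified",
	1: "keyCompromise",
	3: "affiliationChanged",
	4: "superseded",
	5: "cessationOfOperation",
}
```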
Part of #2080.
This change vendors `crypto/x509`, `crypto/x509/pkix`, and `encoding/asn1` from 1d5f6a765d. That commit is a direct child of the Go 1.5.4 release tag, so it contains the same code as the current Go version we are using. In that commit I rewrote imports in those packages so they depend on each other internally rather than calling out to the standard library, which would cause type disagreements.
I changed the imports in each place where we're parsing CSRs, and imported under a different name `oldx509`, both to avoid collisions and make it clear what's going on. Places that only use `x509` to parse certificates are not changed, and will use the current standard library.
This will unblock us from moving to Go 1.6, and subsequently Go 1.7.
This PR adds two optimizations to fix the optimistic lock errors observed in #1986.
First, the WFE now returns early for registration POSTs (before invoking the RA and SA) when the POST body is the trivial update (`{"resource":"reg"}`). This prevents any DB operations from being performed when there is no work to be done.
Second, the RA now tracks whether an update actually changes the base registration's `Contact` slice, or `Agreement` string. If the proposed update doesn't change either of these fields then the RA will return early before handing the update to the SA.
Both changes save database operations from being performed needlessly and will help avoid the optimistic lock errors we observed when a problematic client was POSTing the trivial update repeatedly in a short period.
The fix was verified as follows: I checked out master and artificially introduced lock contention into the SA by adding a 2s sleep into `UpdateRegistration` between fetching the `existingRegModel` to get the `LockCol` value and calling `ssa.dbMap.Update`. With the sleep in place & two certbot clients posting matching registration updates the lock contention error is produced as expected. After checking out the `empty-reg-updates` branch, re-adding the sleep to the SA, and performing the same two client reg updates no error is produced.
The `letsencrypt/boulder-tools` image was recently updated, pulling in version
0.8.0 of certbot. That version stores the output of `certonly` requests in a
different path. In test.sh, we check out a specific tagged release of certbot in
order to get its integration tests. Prior to this commit, we were using
certbot 0.8.0 with the integration tests from version 0.6.0 of certbot,
which looked for `certonly` output in the wrong place, and failed.
This commit changes test.sh to check out the 0.8.0 branch, and also removes a
temporary shim we used to make the `certbot` command call out to the
`letsencrypt` command.
Also, since the latest version of `letsencrypt/boulder-tools` includes an updated
`protoc-gen-go`, this change also updates the support packages to match.
Adds a test for CSRs generated using a pre-1.0.2 version of OpenSSL and a buggy client, which fail to parse with Golang 1.6+.
This test checks the values of the bytes at the 8th and 9th offsets, which in a properly formatted CSR should be the version integer declaration bytes; if the malformed values are present it will return an error informing the user that they are using an old version of OpenSSL and/or a client which doesn't explicitly set the CSR version.
Fixes #1902.
The `regID` parameter in the PA's `WillingToIssue` function was originally used for whitelisting purposes, but is not used any longer. This PR removes it.
This PR adds a check in the registration authority for non-ASCII encoded characters in an email address. This is due to a 'funky email implementation'.
Fixes #1350
In https://github.com/letsencrypt/boulder/pull/774 we introduced an account key stored with the challenge. This was a stopgap fix to the now-defunct SimpleHTTP and DNS challenges in the face of https://mailarchive.ietf.org/arch/msg/acme/F71iz6qq1o_QPVhJCV4dqWf-4Yc. However, we no longer offer or implement those challenges, so the extra field is unnecessary. It also takes up a huge amount of space in the challenges table, which is our biggest table. SimpleHTTP and DNS challenges were removed in https://github.com/letsencrypt/boulder/pull/1247.
We can provide a follow-up migration to delete the column later, once we have a plan for large migrations without downtime.
Fixes #1909
The RA UpdateRegistration function merges a base registration object with an update by calling Registration.MergeUpdate. Prior to this commit MergeUpdate only allowed the updated registration object to overwrite the Contact field of the existing registration if the updated reg. defined at least one AcmeURL. This prevented clients from being able to outright remove the contact associated with an existing registration.
This commit removes the len() check on the input.Contact in MergeUpdate to allow the r.Contact field to be overwritten by a []*core.AcmeURL(nil) Contact field. Subsequently clients can now send an empty contacts list in the update registration POST in order to remove their reg contact.
Fixes #1846
* Allow removing registration contact.
* Adds a test for `MergeUpdate` contact removal.
* Change `Registration.Contact` type to `*[]*core.AcmeURL`.
* End validateContacts early for empty contacts
* Test removing reg. contact more thoroughly.
Several of the `ProblemType`s had convenience functions to instantiate `ProblemDetails`s using their type and a detail message. Where these existed I did a quick scan of the codebase to convert places where callers were explicitly constructing the `ProblemDetails` to use the convenience function.
For the `ProblemType`s that did not have such a function, I created one and then converted callers to use it.
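The pattern looks roughly like this (types shaped like, but not identical to, Boulder's probs package):

```go
import "net/http"

type ProblemType string

const MalformedProblem = ProblemType("urn:acme:error:malformed")

type ProblemDetails struct {
	Type       ProblemType
	Detail     string
	HTTPStatus int
}

// Malformed is a convenience constructor: callers pass only a detail
// message instead of building the struct by hand at each call site.
func Malformed(detail string) *ProblemDetails {
	return &ProblemDetails{
		Type:       MalformedProblem,
		Detail:     detail,
		HTTPStatus: http.StatusBadRequest,
	}
}
```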
Solves #1837.
* Split CSR testing and name hoisting into own functions, verify CSR in RA & CA
* Move tests around and various other fixes
* 1.5.3 doesn't have the needed stringer
* Move functions to their own lib
* Remove unused imports
* Move MaxCNLength and BadSignatureAlgorithms to csr package
* Always normalizeCSR in VerifyCSR and de-export it
* Update comments
When a CAA request to Unbound times out, fall back to checking CAA via Google Public DNS' HTTPS API, through multiple proxies so as to hit geographically distributed paths. All successful multipath responses must be identical in order to succeed, and at most one can fail.
Fixes #1618
* Fix all errcheck errors
* Add errcheck to test.sh
* Add a new sa.Rollback method to make handling errors in rollbacks easier.
This also causes a behavior change in the VA. If an HTTP connection is
abruptly closed after serving the headers for a non-200 response, the
reported error will be the read failure instead of the non-200.
- Remove error signatures from log methods. This means fewer places where errcheck will show ignored errors.
- Pull in latest cfssl to be compatible with errorless log messages.
- Reduce the number of message priorities we support to just those we actually use.
- AuditNotice -> AuditInfo
- Remove InfoObject (only one use, switched to Info)
- Remove EmergencyExit and related functions in favor of panic
- Remove SyslogWriter / AuditLogger separate types in favor of a single interface, Logger, that has all the logging methods on it.
- Merge mock log into logger. This allows us to unexport the internals but still override them in the mock.
- Shorten names to be compatible with Go style: New, Set, Get, Logger, NewMock, etc.
- Use a shorter log format for stdout logs.
- Remove "... Starting" log messages. We have better information in the "Versions" message logged at startup.
Motivation: The AuditLogger / SyslogWriter distinction was confusing and exposed internals only necessary for tests. Some components accepted one type and some accepted the other. This made it hard to consistently use mock loggers in tests. Also, the unnecessarily fat interface for AuditLogger made it hard to meaningfully mock out.
The RPC call returns only the updated Challenge object instead of the
whole authz, because the VA shouldn't be allowed to update any of the
other fields of the Authorization object.
In ra.checkCertificatesPerName allow a bypass of the rate limit
if the exact name set has previously been issued for. This should
make a few current scenarios people have been running into slightly
less painful.
Adds a new rate limit, certificatesPerFQDNSet, which counts certificates
with the same set of FQDNs using a table containing the hash of the dNSNames
mapped to a certificate serial. A new method is added to the SA in AddCertificate
to add this hash to the fqdnSets table, which is gated by a config bool.
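A sketch of the set hash, assuming the natural normalization (lowercase, de-duplicate, and sort before hashing, so equal sets always hash equally):

```go
import (
	"crypto/sha256"
	"sort"
	"strings"
)

// hashNames produces a stable digest for a set of FQDNs, independent
// of the order and case in which the names were submitted.
func hashNames(names []string) []byte {
	seen := make(map[string]bool)
	var unique []string
	for _, n := range names {
		n = strings.ToLower(n)
		if !seen[n] {
			seen[n] = true
			unique = append(unique, n)
		}
	}
	sort.Strings(unique)
	hash := sha256.Sum256([]byte(strings.Join(unique, ",")))
	return hash[:]
}
```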
A server *MAY* return an authority section; in particular, on NXDOMAIN
the server will return an SOA record in the authority section in order to
provide the NXDOMAIN TTL value.
Otherwise there is no need for such a section.
The DNS client should check the AA (authoritative answer) header flag to
determine whether the response is authoritative, rather than checking for
the presence of an authority section.
B64enc and B64dec can be replaced by base64.RawURLEncoding.
Thumbprint is now implemented in go-jose, and we have the relevant version
imported already, so we can use that.
SyntaxError isn't used anywhere and can be deleted.
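The stdlib replacement, for reference:

```go
import "encoding/base64"

// RawURLEncoding is URL-safe base64 with no padding, which is what
// B64enc/B64dec provided.
func b64RoundTrip(data []byte) ([]byte, error) {
	encoded := base64.RawURLEncoding.EncodeToString(data)
	return base64.RawURLEncoding.DecodeString(encoded)
}
```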
Adds a dns-01 type validation to test.js and reworks dns-test-srv to allow changing TXT record values.
Also makes some changes to how integration-test.py works in order to reduce complexity now that the
ct-test-srv is working again.
This uses ProblemDetails throughout the wfe. This is the last step in
allowing the backend services to pass ProblemDetails from RPCs through
to the user.
Updates #1153.
Fixes #1161.
Moves the DNS code from core to dns and renames the dns package to bdns
to be clearer.
Fixes #1260 and will be good to have while we add retries and such.
This is a change to ValidationRecord. This case is unlikely to be
triggered by code, but was allowing tests to pass in a branch that deleted
the simpleHttp and dvsni challenge types and is a good check to have in
place.
Updates #894
Generally Dial will be very fast because our resolver is local, so there's no
need to override its default of 2s. However, since our resolver recurses more or
less every time, getting the answer back is very slow. So we want to be able to
set a high ReadTimeout.
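Illustrative values (miekg/dns exposes separate dial and read timeouts, which is what makes this split possible):

```go
import (
	"time"

	"github.com/miekg/dns"
)

// dnsClient keeps a short dial timeout but allows slow recursive
// answers; the exact values here are illustrative, not Boulder's.
var dnsClient = &dns.Client{
	DialTimeout: 2 * time.Second,  // resolver is local, so dialing is fast
	ReadTimeout: 10 * time.Second, // recursion makes answers slow
}
```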
Since Boulder always requests DNSSEC records, in practice DNS responses often
exceed the IP MTU.
Boulder installations expect to have a local DNS resolver, and all modern DNS
resolvers support TCP connections. Since miekg/dns does not perform an
"attempt udp, timeout, retry via tcp" approach, it's simpler and more reliable
to always use TCP for internal DNS resolution. This makes failures more
obvious as well.
Also change the integration test DNS server to TCP.
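A sketch of a TCP-only query as described above (the resolver address is illustrative):

```go
import "github.com/miekg/dns"

// lookupA queries for A records over TCP, requesting DNSSEC records
// via EDNS0 as Boulder always does.
func lookupA(name string) (*dns.Msg, error) {
	m := new(dns.Msg)
	m.SetQuestion(dns.Fqdn(name), dns.TypeA)
	m.SetEdns0(4096, true) // DO bit: request DNSSEC records
	c := &dns.Client{Net: "tcp"}
	in, _, err := c.Exchange(m, "127.0.0.1:53")
	return in, err
}
```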
This allows us to make Google Safe Browsing calls through the VA.
If the RA config's boolean UseIsSafeDomain is true, the RA will make the RPC
call to the VA during its NewAuthorization.
If the VA config's GoogleSafeBrowsingConfig struct is not nil, the VA
will check the Google Safe Browsing API in
VA.IsSafeDomain. If the GoogleSafeBrowsingConfig struct is nil, it will
always return true.
In order to actually make requests, the VA's GoogleSafeBrowsingConfig
will need to have a directory on disk where it can store the local GSB hashes
it checks first, and a working Google API key for the GSB API.
Fixes #1058
If a ServiceUnavailableError is returned from GenerateOCSP, back off before
attempting to retry the call so as not to overwhelm the CA with calls that
may instantly fail.
* Moves revocation from the CA to the OCSP-Updater; the RA will mark certificates as
revoked, then wait for the OCSP-Updater to create a new (final) revoked response
* Merges the ocspResponses table with the certificateStatus table and only uses UPDATEs
to update the OCSP response (vs INSERT-only, since this happens quite often and would
lead to an extremely large table)
This change lowercases domains before they are stored in the database
and makes policy.WillingToIssue reject any domains with uppercase
letters.
Fixes #927.
B64enc makes some nasty allocations through its use of strings.Replace
in unpad. This changes that strings.Replace into a simple for-loop.
B64enc gets used in many places, including the rpc library on every
request and response. While we should probably not use it in the rpc
library (#909), there are enough other places it's used (now or in the near
future) that make this valuable.
Was a performance problem found during early load-testing (#20) of the
CA. More to come.
Currently, the whitelisted registration ID is one that is impossible for the
database to return. Once the partner's registration is in place, we can
deploy a change to it.
Fixes #810
instead of submitted key. This minimizes the chances of unexpected JWK fields in
the submitted key altering its interpretation without altering the lookup in the
registrations table.
In the process, fix handling of NoSuchRegistration responses.
Fixes https://github.com/letsencrypt/boulder/issues/865.
* Rewrite JSONDuration as ConfigDuration that can handle both JSON and YAML unmarshaling
* Factor out RPC certificate count request struct
* Return 429 to WFE on rate limit exceeded
* Fix wonky RateLimitPolicy comment
To keep the change small, I have not yet completely removed the
GetCertificateByShortSerial method from interfaces and the RPC. I will do that
in a follow-up change.
Adds a new service, Publisher, which exists to submit issued certificates to various Certificate Transparency logs. Once submitted the Publisher will also parse and store the returned SCT (Signed Certificate Timestamp) receipts that are used to prove inclusion in a specific log in the SA database. A SA migration adds the new SCT receipt table.
The Publisher only exposes one method, SubmitToCT, which is called in a goroutine by ca.IssueCertificate so as not to block any other issuance operations. This method will iterate through all of the configured logs attempting to submit the certificate, and any required intermediate certificates, to them. If a submission to a log fails it will be retried the pre-configured number of times and will either use a back-off set in a Retry-After header or a pre-configured back-off between submission attempts.
This changeset is the first of a number of changes ending with serving SCT receipts in OCSP responses and purposefully leaves out the following pieces for follow-up PRs.
* A fake CT server for integration testing
* A external tool to search the database for certificates lacking a full set of SCT receipts
* A method to construct X.509 v3 extensions containing receipts for the OCSP responder
* Returned SCT signature verification (beyond just checking that the signature is of the correct type so we aren't just serving arbitrary binary blobs to clients)
Resolves #95.
The WFE test relies on a pre-generated cert. Since there are some sanity checks
on the dates in certs, we were getting errors during the test.
One quick fix is to have those sanity checks rely on RA's clock object, which
can be replaced with a fake for testing. In order to do that, I had to move the
sanity check (MatchesCSR) into the registration authority package, where it
makes more sense anyhow.
I also removed a handful of equality testing functions in objects.go that were
only used by MatchesCSR and whose purpose is better served by reflect.DeepEqual.
This was to avoid having to also move those equality testing functions into the
registration authority.
Challenge URIs should be determined by the WFE at fetch time, rather than stored
alongside the challenge in the DB. This simplifies a lot of the logic, and
allows us to remove a code path in NewAuthorization where we create an
authorization, then immediately save it with modifications to the challenges.
This change also gives challenges their own endpoint, which contains the
challenge id rather than the challenge's offset within its parent authorization.
This is also a first step towards replacing UpdateAuthorization with
UpdateChallenge: https://github.com/letsencrypt/boulder/issues/760.
Add an easy script to build and run the Docker instance.
Update some out-of-date information in the README.
Add goose to the Docker image.
Remove unnecessary go install step from Dockerfile.
Allow dns-test-srv to return a hardcoded address other than localhost. This was
preventing a Dockerized Boulder from answering requests from a letsencrypt
client on the host.
Change allowLoopbackAddresses to allowRestrictedAddresses and make it cover all
the private IPv4 ranges. The host IP in Docker is commonly in the 172.* range.
Fix a couple of references to lets-encrypt-preview.
This was inspired by investigation into https://github.com/letsencrypt/boulder/issues/756.
To try and reproduce, I tried running Boulder inside a container, and found some
broken things.
This has required some substantive changes to the tests. Where
previously the foreign key constraints did not exist in the tests, now
that we use the actual production schema, they do. This has mostly led
to having to create real Registrations in the sa, ca, and ra tests. Long
term, it would be nice to fake this out better instead of needing a real
sa in the ca and ra tests.
The "goose" being referred to is <https://bitbucket.org/liamstask/goose>.
Database migrations are stored in a _db directory inside the relevant
owner service (namely, ca/_db, and sa/_db, today).
An example of migrating up with goose:
```
goose -path ./sa/_db -env test up
```
An example of creating a new migration with goose:
```
goose -path ./sa/_db -env test create NameOfNewMigration sql
```
Notice the "sql" at the end. It would be easier for us to manage sql
migrations. I would like us to stick to only them. In case we do use Go
migrations in the future, the underscore at the beginning of "_db" will
at least prevent build errors when using "..." with goose-created Go
files. Goose-created Go migrations do not compile with the go tool but
only with goose.
Fixes #111
Unblocks #623
Refactor DNS problem details use
Actually store and log resolved addresses
Less convoluted get addresses function/usage
Store redirects, reconstruct transport on redirect, add redirect + lookup tests
Add another test
Review fixes
Initial bulk of review fixes (cleanups inc)
Comment cleanup
Add some more tests
Cleanups
Give addrFilter a type and add the config wiring
Expose filters
LookupHost cleanups
Remove Resolved Addresses and Redirect chain from replies to client without breaking RPC layer
Switch address/redirect logging method, add redirect loop checking + test
Review fixes + remove IPv6
Remove AddressFilter remnant + constant-ize the VA timeout
Review fixes pt. 1
Initialize validation record
Don't blank out validation records
Add validation record sanity checking
Switch to shared struct
Check port is in valid range
Review fixes