Fixes #140.
This patch allows users to specify the following revocation reasons, based on my interpretation of the meaning of the codes; confirmation from others would be welcome.
* unspecified (0)
* keyCompromise (1)
* affiliationChanged (3)
* superseded (4)
* cessationOfOperation (5)
Part of #2080.
This change vendors `crypto/x509`, `crypto/x509/pkix`, and `encoding/asn1` from 1d5f6a765d. That commit is a direct child of the Go 1.5.4 release tag, so it contains the same code as the current Go version we are using. In that commit I rewrote imports in those packages so they depend on each other internally rather than calling out to the standard library, which would cause type disagreements.
I changed the imports in each place where we parse CSRs, importing under a different name, `oldx509`, both to avoid collisions and to make it clear what's going on. Places that only use `x509` to parse certificates are not changed, and will use the current standard library.
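For illustration, the aliased-import pattern looks roughly like this (the vendored import path shown here is hypothetical, not the exact vendored location):

```go
package wfe

import (
	"crypto/x509" // current standard library, still used for certificate parsing

	// Vendored copy used only where we parse CSRs. The alias avoids a name
	// collision and makes it obvious which parser is in play.
	oldx509 "github.com/letsencrypt/boulder/Godeps/_workspace/src/oldcrypto/x509"
)

// parseCSR uses the vendored package so CSR parsing behaves as it did under
// the Go version the vendored code was taken from.
func parseCSR(der []byte) (*oldx509.CertificateRequest, error) {
	return oldx509.ParseCertificateRequest(der)
}

// parseCert keeps using the standard library, as described above.
func parseCert(der []byte) (*x509.Certificate, error) {
	return x509.ParseCertificate(der)
}
```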
This will unblock us from moving to Go 1.6, and subsequently Go 1.7.
This PR adds two optimizations to fix the optimistic lock errors observed in #1986.
First, the WFE now returns early for registration POSTs (before invoking the RA and SA) when the POST body is the trivial update (`{"resource":"reg"}`). This prevents any DB operations from being performed when there is no work to be done.
Second, the RA now tracks whether an update actually changes the base registration's `Contact` slice or `Agreement` string. If the proposed update doesn't change either of these fields, the RA returns early before handing the update to the SA.
Both changes avoid needless database operations and will help prevent the optimistic lock errors we observed when a problematic client was POSTing the trivial update repeatedly in a short period.
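A rough sketch of the RA-side check (types and field names are trimmed and illustrative, not the exact Boulder code):

```go
package main

import (
	"fmt"
	"reflect"
)

// Registration mirrors just the two fields the RA compares; the real
// core.Registration has more.
type Registration struct {
	Contact   []string // stand-in for []*core.AcmeURL
	Agreement string
}

// changed reports whether an update would actually modify the base
// registration's Contact or Agreement, mirroring the RA's early return.
func changed(base, update Registration) bool {
	if update.Agreement != "" && update.Agreement != base.Agreement {
		return true
	}
	return update.Contact != nil && !reflect.DeepEqual(base.Contact, update.Contact)
}

func main() {
	base := Registration{Contact: []string{"mailto:admin@example.com"}, Agreement: "v1"}
	trivial := Registration{} // the {"resource":"reg"} case: nothing to merge
	fmt.Println(changed(base, trivial)) // false -> skip the SA update entirely
}
```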
The fix was verified as follows: I checked out master and artificially introduced lock contention into the SA by adding a 2s sleep into `UpdateRegistration` between fetching the `existingRegModel` (to get the `LockCol` value) and calling `ssa.dbMap.Update`. With the sleep in place and two certbot clients POSTing matching registration updates, the lock contention error was produced as expected. After checking out the `empty-reg-updates` branch, re-adding the sleep to the SA, and performing the same two-client reg updates, no error was produced.
The `letsencrypt/boulder-tools` image was recently updated, pulling in version
0.8.0 of certbot. That version stores the output of `certonly` requests in a
different path. In test.sh, we check out a specific tagged release of certbot in
order to get its integration tests. Prior to this commit, we were using
certbot 0.8.0 with the integration tests from version 0.6.0 of certbot,
which looked for `certonly` output in the wrong place, and failed.
This commit changes test.sh to check out the 0.8.0 branch, and also removes a
temporary shim we used to make the `certbot` command call out to the
`letsencrypt` command.
Also, since the latest version of `letsencrypt/boulder-tools` includes an updated
`protoc-gen-go`, this change updates the support packages to match.
Adds a test for CSRs generated by a pre-1.0.2 version of OpenSSL together with a buggy client; such CSRs fail to parse with Go 1.6+.
The test checks the values of the bytes at offsets 8 and 9, which in a properly formatted CSR should be the version integer declaration bytes. If the malformed values are present, we return an error informing the user that they are using an old version of OpenSSL and/or a client that doesn't explicitly set the CSR version.
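A rough sketch of a check along those lines; the exact byte values the test expects are an assumption here, not taken from the patch:

```go
package csr

// looksLikeOldOpenSSLCSR inspects the bytes at offsets 8 and 9, which in a
// well-formed CSR begin the version INTEGER declaration. The specific byte
// values used for the malformed case are illustrative assumptions.
func looksLikeOldOpenSSLCSR(der []byte) bool {
	if len(der) < 10 {
		return false
	}
	// Assumed malformed encoding: a zero-length INTEGER where the version
	// declaration (tag 0x02, length 0x01) should be.
	return der[8] == 0x02 && der[9] == 0x00
}
```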
Fixes #1902.
The `regID` parameter in the PA's `WillingToIssue` function was originally used for whitelisting purposes, but is no longer used. This PR removes it.
This PR adds a check in the registration authority for non-ASCII characters in an email address. This is due to a 'funky email implementation'.
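A minimal sketch of the kind of check this adds (helper name is hypothetical):

```go
package ra

import "unicode"

// hasNonASCII reports whether an email address contains characters outside
// the ASCII range. A hypothetical helper illustrating the check described
// above, not the exact code added to the RA.
func hasNonASCII(email string) bool {
	for _, r := range email {
		if r > unicode.MaxASCII {
			return true
		}
	}
	return false
}
```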
Fixes #1350
In https://github.com/letsencrypt/boulder/pull/774 we introduced an account key stored with the challenge. This was a stopgap fix for the now-defunct SimpleHTTP and DNS challenges in the face of https://mailarchive.ietf.org/arch/msg/acme/F71iz6qq1o_QPVhJCV4dqWf-4Yc. However, we no longer offer or implement those challenges, so the extra field is unnecessary. It also takes up a huge amount of space in the challenges table, which is our biggest table. SimpleHTTP and DNS challenges were removed in https://github.com/letsencrypt/boulder/pull/1247.
We can provide a follow-up migration to delete the column later, once we have a plan for large migrations without downtime.
Fixes #1909
The RA UpdateRegistration function merges a base registration object with an update by calling `Registration.MergeUpdate`. Prior to this commit, MergeUpdate only allowed the updated registration object to overwrite the Contact field of the existing registration if the updated registration defined at least one AcmeURL. This prevented clients from outright removing the contact associated with an existing registration.
This commit removes the `len()` check on `input.Contact` in MergeUpdate, allowing the `r.Contact` field to be overwritten by a `[]*core.AcmeURL(nil)` Contact field. Clients can now send an empty contacts list in the update registration POST in order to remove their registration contact (a sketch of the new behavior follows the change list below).
Fixes #1846
* Allow removing registration contact.
* Adds a test for `MergeUpdate` contact removal.
* Change `Registration.Contact` type to `*[]*core.AcmeURL`.
* End validateContacts early for empty contacts
* Test removing reg. contact more thoroughly.
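A simplified sketch of the merge behavior after this change; the types are trimmed down (the real field is `*[]*core.AcmeURL`, per the list above):

```go
package core

// Registration is trimmed to the two fields relevant here.
type Registration struct {
	Contact   *[]string // stand-in for *[]*core.AcmeURL
	Agreement string
}

// MergeUpdate copies updated fields onto r. With the old len() check gone,
// a non-nil pointer to an empty (or nil) slice now clears the contacts.
func (r *Registration) MergeUpdate(input Registration) {
	if input.Contact != nil {
		r.Contact = input.Contact
	}
	if input.Agreement != "" {
		r.Agreement = input.Agreement
	}
}
```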
Several of the `ProblemType`s had convenience functions to instantiate `ProblemDetails`s using their type and a detail message. Where these existed I did a quick scan of the codebase to convert places where callers were explicitly constructing the `ProblemDetails` to use the convenience function.
For the `ProblemType`s that did not have such a function, I created one and then converted callers to use it.
Solves #1837.
* Split CSR testing and name hoisting into own functions, verify CSR in RA & CA
* Move tests around and various other fixes
* 1.5.3 doesn't have the needed stringer
* Move functions to their own lib
* Remove unused imports
* Move MaxCNLength and BadSignatureAlgorithms to csr package
* Always normalizeCSR in VerifyCSR and de-export it
* Update comments
When a CAA request to Unbound times out, fall back to checking CAA via Google Public DNS' HTTPS API, through multiple proxies so as to hit geographically distributed paths. All successful multipath responses must be identical in order to succeed, and at most one can fail.
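A rough sketch of the agreement rule (all successful responses must be identical, and at most one lookup may fail); function and parameter names are illustrative:

```go
package caa

import (
	"bytes"
	"errors"
)

// agree applies the multipath policy: at most one lookup may fail, and every
// successful response must be byte-identical. It returns the agreed response.
func agree(responses [][]byte, errs []error) ([]byte, error) {
	var failures int
	var agreed []byte
	for i, err := range errs {
		if err != nil {
			failures++
			continue
		}
		if agreed == nil {
			agreed = responses[i]
		} else if !bytes.Equal(agreed, responses[i]) {
			return nil, errors.New("multipath CAA responses disagree")
		}
	}
	if failures > 1 || agreed == nil {
		return nil, errors.New("too many multipath CAA lookup failures")
	}
	return agreed, nil
}
```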
Fixes #1618
* Fix all errcheck errors
* Add errcheck to test.sh
* Add a new sa.Rollback method to make handling errors in rollbacks easier.
This also causes a behavior change in the VA. If an HTTP connection is
abruptly closed after serving the headers for a non-200 response, the
reported error will be the read failure instead of the non-200.
- Remove error signatures from log methods. This means fewer places where errcheck will show ignored errors.
- Pull in latest cfssl to be compatible with errorless log messages.
- Reduce the number of message priorities we support to just those we actually use.
- AuditNotice -> AuditInfo
- Remove InfoObject (only one use, switched to Info)
- Remove EmergencyExit and related functions in favor of panic
- Remove SyslogWriter / AuditLogger separate types in favor of a single interface, Logger, that has all the logging methods on it.
- Merge mock log into logger. This allows us to unexport the internals but still override them in the mock.
- Shorten names to be compatible with Go style: New, Set, Get, Logger, NewMock, etc.
- Use a shorter log format for stdout logs.
- Remove "... Starting" log messages. We have better information in the "Versions" message logged at startup.
Motivation: The AuditLogger / SyslogWriter distinction was confusing and exposed internals only necessary for tests. Some components accepted one type and some accepted the other. This made it hard to consistently use mock loggers in tests. Also, the unnecessarily fat interface for AuditLogger made it hard to meaningfully mock out.
The RPC call returns only the updated Challenge object instead of the
whole authz, because the VA shouldn't be allowed to update any of the
other fields of the Authorization object.
In `ra.checkCertificatesPerName`, allow a bypass of the rate limit if a
certificate has previously been issued for the exact same name set. This
should make a few current scenarios people have been running into slightly
less painful.
Adds a new rate limit, certificatesPerFQDNSet, which counts certificates
with the same set of FQDNs using a table that maps a hash of the dNSNames
to a certificate serial. A new method added to the SA in AddCertificate
writes this hash to the fqdnSets table, gated by a config bool.
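A sketch of how the FQDN-set hash might be computed (lowercase, de-duplicate, sort, then hash); the exact canonicalization in the change may differ:

```go
package sa

import (
	"crypto/sha256"
	"sort"
	"strings"
)

// hashNames produces a stable digest for a set of DNS names so that
// certificates covering the exact same name set map to the same row in the
// fqdnSets table. The canonicalization details here are illustrative.
func hashNames(names []string) []byte {
	seen := make(map[string]bool)
	var canon []string
	for _, n := range names {
		n = strings.ToLower(n)
		if !seen[n] {
			seen[n] = true
			canon = append(canon, n)
		}
	}
	sort.Strings(canon)
	h := sha256.Sum256([]byte(strings.Join(canon, ",")))
	return h[:]
}
```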
A server *MAY* return an authority section; in particular, on NXDOMAIN the
server will return an SOA record in the authority section in order to
provide the NXDOMAIN TTL value. Otherwise there is no need for such a
section.
The DNS client should check the AA flag in the response header to determine
whether the response is authoritative, rather than checking for the presence
of an authority section.
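With `github.com/miekg/dns`, the authoritative bit is available directly on the message header, so no inspection of the authority section is needed:

```go
package bdns

import "github.com/miekg/dns"

// isAuthoritative checks the AA bit in the response header rather than
// looking for records in the authority section, as described above.
func isAuthoritative(m *dns.Msg) bool {
	return m.Authoritative
}
```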
B64enc and B64dec can be replaced by base64.RawURLEncoding.
Thumbprint is now implemented in go-jose, and we have the relevant version
imported already, so we can use that.
SyntaxError isn't used anywhere and can be deleted.
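For reference, the standard-library replacement looks like this:

```go
package core

import "encoding/base64"

// Unpadded, URL-safe base64 straight from the standard library replaces the
// hand-rolled B64enc/B64dec helpers.
func b64enc(data []byte) string {
	return base64.RawURLEncoding.EncodeToString(data)
}

func b64dec(s string) ([]byte, error) {
	return base64.RawURLEncoding.DecodeString(s)
}
```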
Adds a dns-01 type validation to test.js and reworks dns-test-srv to allow changing TXT record values.
Also makes some changes to how integration-test.py works in order to reduce
complexity now that the ct-test-srv is working again.
This uses ProblemDetails throughout the wfe. This is the last step in
allowing the backend services to pass ProblemDetails from RPCs through
to the user.
Updates #1153.
Fixes #1161.
Moves the DNS code from core to dns and renames the dns package to bdns
to be clearer.
Fixes #1260 and will be good to have while we add retries and such.
This is a change to ValidationRecord. This case is unlikely to be
triggered by code, but was allowing tests to pass in a branch that deleted
the simpleHttp and dvsni challenge types and is a good check to have in
place.
Updates #894
Generally Dial will be very fast because our resolver is local, so there's no
need to override its default of 2s. However, since our resolver recurses more or
less every time, getting the answer back is very slow. So we want to be able to
set a high ReadTimeout.
Since Boulder always requests DNSSEC records, in practice DNS responses often
exceed the IP MTU.
Boulder installations expect to have a local DNS resolver, and all modern DNS
resolvers support TCP connections. Since miekg/dns does not perform an
"attempt udp, timeout, retry via tcp" approach, it's simpler and more reliable
to always use TCP for internal DNS resolution. This makes failures more
obvious as well.
Also change the integration test DNS server to TCP.
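A minimal sketch of the client settings this implies for `github.com/miekg/dns`; the timeout values here are illustrative:

```go
package bdns

import (
	"time"

	"github.com/miekg/dns"
)

// newTCPClient builds a DNS client that always uses TCP against the local
// resolver, with a short dial timeout and a generous read timeout since the
// resolver recurses on nearly every query.
func newTCPClient() *dns.Client {
	return &dns.Client{
		Net:         "tcp",
		DialTimeout: 2 * time.Second,  // local resolver, dialing is fast
		ReadTimeout: 10 * time.Second, // recursion can be slow
	}
}
```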
This allows us to make Google Safe Browsing calls through the VA.
If the RA config's boolean UseIsSafeDomain is true, the RA will make the RPC
call to the VA during its NewAuthorization.
If the VA config's GoogleSafeBrowsingConfig struct is not nil, the VA
will check the Google Safe Browsing API in
VA.IsSafeDomain. If the GoogleSafeBrowsingConfig struct is nil, it will
always return true.
In order to actually make requests, the VA's GoogleSafeBrowsingConfig will
need a directory on disk where it can store the local GSB hashes it checks
first, and a working Google API key for the GSB API.
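A sketch of the gating described above; type, field, and method names are approximations, not the exact code:

```go
package va

// safeBrowser is the minimal interface we need from a GSB client; the real
// client and its method names may differ.
type safeBrowser interface {
	IsUnsafe(domain string) (bool, error)
}

type validationAuthority struct {
	gsb safeBrowser // nil when GoogleSafeBrowsingConfig is not set
}

// IsSafeDomain reports true unconditionally when GSB is not configured,
// matching the behavior described above; otherwise it consults the client.
func (va *validationAuthority) IsSafeDomain(domain string) (bool, error) {
	if va.gsb == nil {
		return true, nil
	}
	unsafe, err := va.gsb.IsUnsafe(domain)
	if err != nil {
		return false, err
	}
	return !unsafe, nil
}
```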
Fixes #1058
If a ServiceUnavailableError is returned from GenerateOCSP, back off before
retrying the call so as not to overwhelm the CA with calls that may
instantly fail.
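A rough sketch of the retry shape; the error stand-in, backoff duration, and single retry are illustrative:

```go
package updater

import (
	"errors"
	"time"
)

// errServiceUnavailable stands in for the ServiceUnavailableError returned by
// the CA; the real code would check for that concrete error type.
var errServiceUnavailable = errors.New("service unavailable")

// generateOCSPWithBackoff pauses and retries once when the CA signals it is
// unavailable, rather than immediately issuing another call that is likely
// to fail. The single retry and fixed backoff are illustrative choices.
func generateOCSPWithBackoff(gen func() ([]byte, error), backoff time.Duration) ([]byte, error) {
	resp, err := gen()
	if err == errServiceUnavailable {
		time.Sleep(backoff)
		return gen()
	}
	return resp, err
}
```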
* Moves revocation from the CA to the OCSP-Updater: the RA will mark certificates as
revoked, then wait for the OCSP-Updater to create a new (final) revoked response
* Merges the ocspResponses table into the certificateStatus table and uses UPDATEs
to change the OCSP response (rather than INSERT-only, since this happens quite often
and would lead to an extremely large table)
This change lowercases domains before they are stored in the database
and makes policy.WillingToIssue reject any domains with uppercase
letters.
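A sketch of the uppercase check in the policy layer (helper name is hypothetical):

```go
package policy

import (
	"errors"
	"strings"
)

// checkCase rejects domains containing uppercase letters; callers lowercase
// names before storing them in the database. A sketch, not the exact code.
func checkCase(domain string) error {
	if domain != strings.ToLower(domain) {
		return errors.New("domain name contains uppercase letters")
	}
	return nil
}
```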
Fixes #927.
B64enc makes some nasty allocations through its use of strings.Replace
in unpad. This changes that strings.Replace into a simple for-loop.
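Roughly, the for-loop version looks like this (a sketch, not the exact code):

```go
package core

// unpad strips trailing '=' padding without the intermediate allocations
// that strings.Replace incurs.
func unpad(s string) string {
	end := len(s)
	for end > 0 && s[end-1] == '=' {
		end--
	}
	return s[:end]
}
```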
B64enc gets used in many places, including the rpc library on every
request and response. While we should probably not use it in the rpc
library (#909), there are enough other places it's used (now or in the near
future) that make this valuable.
This was a performance problem found during early load testing (#20) of the
CA. More to come.
Currently, the whitelisted registration ID is one that is impossible for the
database to return. Once the partner's registration is in place, we can
deploy a change to it.
Fixes #810
Look up and use the registration's stored key instead of the submitted key. This minimizes the chances of unexpected JWK fields in
the submitted key altering its interpretation without altering the lookup in the
registrations table.
In the process, fix handling of NoSuchRegistration responses.
Fixes https://github.com/letsencrypt/boulder/issues/865.
* Rewrite JSONDuration as ConfigDuration that can handle both JSON and YAML unmarshaling
* Factor out RPC certificate count request struct
* Return 429 to WFE on rate limit exceeded
* Fix wonky RateLimitPolicy comment
To keep the change small, I have not yet completely removed the
GetCertificateByShortSerial method from interfaces and the RPC. I will do that
in a follow-up change.
Adds a new service, Publisher, which exists to submit issued certificates to various Certificate Transparency logs. Once submitted the Publisher will also parse and store the returned SCT (Signed Certificate Timestamp) receipts that are used to prove inclusion in a specific log in the SA database. A SA migration adds the new SCT receipt table.
The Publisher only exposes one method, SubmitToCT, which is called in a goroutine by ca.IssueCertificate so as not to block other issuance operations. This method iterates through all of the configured logs, attempting to submit the certificate, and any required intermediate certificates, to each of them. If a submission to a log fails, it will be retried a pre-configured number of times, using either a back-off set in a Retry-After header or a pre-configured back-off between submission attempts.
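A rough sketch of the per-log retry loop; the client hook, header parsing, and limits are illustrative:

```go
package publisher

import (
	"strconv"
	"time"
)

// submission captures one attempt's outcome; RetryAfter is zero when the log
// did not send a Retry-After header.
type submission struct {
	SCT        []byte
	RetryAfter time.Duration
	Err        error
}

// submitWithRetries tries a single log up to maxRetries+1 times, sleeping for
// the log-provided Retry-After when present, otherwise a fixed backoff.
func submitWithRetries(submit func() submission, maxRetries int, backoff time.Duration) ([]byte, error) {
	var last submission
	for attempt := 0; attempt <= maxRetries; attempt++ {
		last = submit()
		if last.Err == nil {
			return last.SCT, nil
		}
		if attempt == maxRetries {
			break
		}
		if last.RetryAfter > 0 {
			time.Sleep(last.RetryAfter)
		} else {
			time.Sleep(backoff)
		}
	}
	return nil, last.Err
}

// parseRetryAfter converts a Retry-After header given in seconds.
func parseRetryAfter(header string) time.Duration {
	secs, err := strconv.Atoi(header)
	if err != nil || secs < 0 {
		return 0
	}
	return time.Duration(secs) * time.Second
}
```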
This changeset is the first of a number of changes ending with serving SCT receipts in OCSP responses and purposefully leaves out the following pieces for follow-up PRs.
* A fake CT server for integration testing
* An external tool to search the database for certificates lacking a full set of SCT receipts
* A method to construct X.509 v3 extensions containing receipts for the OCSP responder
* Returned SCT signature verification (beyond just checking that the signature is of the correct type so we aren't just serving arbitrary binary blobs to clients)
Resolves #95.
The WFE test relies on a pre-generated cert. Since there are some sanity checks
on the dates in certs, we were getting errors during the test.
One quick fix is to have those sanity checks rely on the RA's clock object, which
can be replaced with a fake for testing. In order to do that, I had to move the
sanity check (MatchesCSR) into the registration authority package, where it
makes more sense anyhow.
I also removed a handful of equality testing functions in objects.go that were
only used by MatchesCSR and whose purpose is better served by reflect.DeepEqual.
This was to avoid having to also move those equality testing functions into the
registration authority.
Challenge URIs should be determined by the WFE at fetch time, rather than stored
alongside the challenge in the DB. This simplifies a lot of the logic, and
allows us to remove a code path in NewAuthorization where we create an
authorization, then immediately save it with modifications to the challenges.
This change also gives challenges their own endpoint, which contains the
challenge id rather than the challenge's offset within its parent authorization.
This is also a first step towards replacing UpdateAuthorization with
UpdateChallenge: https://github.com/letsencrypt/boulder/issues/760.
Add an easy script to build and run the Docker instance.
Update some out-of-date information in the README.
Add goose to the Docker image.
Remove unnecessary go install step from Dockerfile.
Allow dns-test-srv to return a hardcoded address other than localhost. This was
preventing a Dockerized Boulder from answering requests from a letsencrypt
client on the host.
Change allowLoopbackAddresses to allowRestrictedAddresses and make it cover all
the private IPv4 ranges (see the sketch at the end of this description). The host IP in Docker is commonly in the 172.* range.
Fix a couple of references to lets-encrypt-preview.
This was inspired by investigation into https://github.com/letsencrypt/boulder/issues/756.
To try and reproduce, I tried running Boulder inside a container, and found some
broken things.
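A sketch of the address ranges allowRestrictedAddresses is meant to cover (the RFC 1918 private IPv4 ranges); the config wiring is not shown:

```go
package policy

import "net"

// privateNets lists the RFC 1918 IPv4 ranges that allowRestrictedAddresses
// permits; 172.16.0.0/12 covers the 172.* addresses Docker typically assigns.
var privateNets = []*net.IPNet{
	mustCIDR("10.0.0.0/8"),
	mustCIDR("172.16.0.0/12"),
	mustCIDR("192.168.0.0/16"),
}

func mustCIDR(s string) *net.IPNet {
	_, n, err := net.ParseCIDR(s)
	if err != nil {
		panic(err)
	}
	return n
}

// isRestricted reports whether ip falls in a private IPv4 range.
func isRestricted(ip net.IP) bool {
	for _, n := range privateNets {
		if n.Contains(ip) {
			return true
		}
	}
	return false
}
```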
This has required some substantive changes to the tests. Where
previously the foreign key constraints did not exist in the tests, now
that we use the actual production schema, they do. This has mostly led
to having to create real Registrations in the sa, ca, and ra tests. Long
term, it would be nice to fake this out better instead of needing a real
sa in the ca and ra tests.
The "goose" being referred to is <https://bitbucket.org/liamstask/goose>.
Database migrations are stored in a _db directory inside the relevant
owner service (namely, ca/_db, and sa/_db, today).
An example of migrating up with goose:
`goose -path ./sa/_db -env test up`
An example of creating a new migration with goose:
`goose -path ./sa/_db -env test create NameOfNewMigration sql`
Notice the "sql" at the end. It would be easier for us to manage sql
migrations. I would like us to stick to only them. In case we do use Go
migrations in the future, the underscore at the beginning of "_db" will
at least prevent build errors when using "..." with goose-created Go
files. Goose-created Go migrations do not compile with the go tool but
only with goose.
Fixes #111
Unblocks #623
Refactor DNS problem details use
Actually store and log resolved addresses
Less convoluted get-addresses function/usage
Store redirects, reconstruct transport on redirect, add redirect + lookup tests
Add another test
Review fixes
Initial bulk of review fixes (cleanups inc)
Comment cleanup
Add some more tests
Cleanups
Give addrFilter a type and add the config wiring
Expose filters
LookupHost cleanups
Remove Resolved Addresses and Redirect chain from replies to client without breaking RPC layer
Switch address/redirect logging method, add redirect loop checking + test
Review fixes + remove IPv6
Remove AddressFilter remnant + constant-ize the VA timeout
Review fixes pt. 1
Initialize validation record
Don't blank out validation records
Add validation record sanity checking
Switch to shared struct
Check port is in valid range
Review fixes
Fixes #579 (which blocks #132).
This changes the SA to use a unique index on the sha256 of a
Registration's JWK's public key data instead of on the full serialized
JSON of the JWK. This corrects multiple problems:
1. MySQL/MariaDB no longer complain about keys being larger than the
largest allowed key size in an index
2. We no longer have to worry about large keys not being seen as unique
3. We no longer have to worry about the JWK's JSON being serialized with its inner keys in different orders and causing incorrectly empty queries or non-unique writes.
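A sketch of how the indexed digest might be derived from the JWK's public key material (the exact serialization used in the change may differ):

```go
package sa

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/base64"
)

// keyDigest hashes the DER (PKIX) encoding of a public key, giving a short,
// canonical value to index registrations by instead of the JWK's JSON.
func keyDigest(pub interface{}) (string, error) {
	der, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(der)
	return base64.StdEncoding.EncodeToString(sum[:]), nil
}
```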
This change also hides the details of how Registrations are stored in
the database from the other services outside of SA. This will give us
greater flexibility if we need to move them to another database, or
change their schema, etc.
Also adds some tests for NoSuchRegistration in the SA.
Only if the cache returns nothing for the CNAME query do we need to
look up CNAME/DNAME explicitly, in order to check CAAs on the parent
of the CNAME target rather than the parent of the original name.
This is the result of `godep save -r ./...` and
`git rm -r -f Godeps/_workspace/src/github.com/square`
Our fork is currently at what was the head of go-jose when Richard made the local
nonce changes, with the nonce changes added on top. In other words, the newly created
files are exactly equal to the deleted files.
In a separate commit I will bring our own go-jose fork up to the remote head,
then update our deps.
Also note: Square's go-jose repo contains a `cipher` package. Since we don't
make any changes to that package, we leave it imported as-is.
This allows us to log the remote address and registration object along with the
CSR.
Also, restore part of a comment on CertificateRequest that was deleted.