Adds a new service, Publisher, which exists to submit issued certificates to various Certificate Transparency logs. Once submitted, the Publisher will also parse the returned SCT (Signed Certificate Timestamp) receipts, which are used to prove inclusion in a specific log, and store them in the SA database. An SA migration adds the new SCT receipt table.
The Publisher only exposes one method, SubmitToCT, which is called in a goroutine by ca.IssueCertificate so as not to block any other issuance operations. This method iterates through all of the configured logs, attempting to submit the certificate, and any required intermediate certificates, to each of them. If a submission to a log fails, it will be retried the pre-configured number of times, using either a back-off taken from a Retry-After header or a pre-configured back-off between submission attempts.
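Roughly, the per-log submission loop looks like the following sketch. This is illustrative only, assuming a hypothetical logSubmitter type and submit callback rather than the actual Publisher API:

```go
package publisher

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// logSubmitter holds the illustrative retry settings for a single CT log.
type logSubmitter struct {
	retries int           // pre-configured number of retry attempts
	backoff time.Duration // pre-configured default back-off between attempts
}

// submitWithRetry runs one submission (plus retries) against a single log,
// preferring a Retry-After header from the log over the default back-off.
func (l *logSubmitter) submitWithRetry(submit func() (*http.Response, error)) error {
	var err error
	for attempt := 0; attempt <= l.retries; attempt++ {
		resp, submitErr := submit()
		if submitErr == nil && resp.StatusCode == http.StatusOK {
			resp.Body.Close()
			return nil // the SCT in the response body can now be parsed and stored via the SA
		}
		err = submitErr
		wait := l.backoff
		if resp != nil {
			if submitErr == nil {
				err = fmt.Errorf("CT log returned HTTP %d", resp.StatusCode)
			}
			if ra := resp.Header.Get("Retry-After"); ra != "" {
				if secs, parseErr := strconv.Atoi(ra); parseErr == nil {
					wait = time.Duration(secs) * time.Second
				}
			}
			resp.Body.Close()
		}
		time.Sleep(wait)
	}
	return err
}
```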
This changeset is the first of a number of changes ending with serving SCT receipts in OCSP responses, and it purposefully leaves out the following pieces for follow-up PRs:
* A fake CT server for integration testing
* An external tool to search the database for certificates lacking a full set of SCT receipts
* A method to construct X.509 v3 extensions containing receipts for the OCSP responder
* Returned SCT signature verification (beyond just checking that the signature is of the correct type so we aren't just serving arbitrary binary blobs to clients)
Resolves #95.
The WFE test relies on a pre-generated cert. Since there are some sanity checks
on the dates in certs, we were getting errors during the test.
One quick fix is to have those sanity checks rely on RA's clock object, which
can be replaced with a fake for testing. In order to do that, I had to move the
sanity check (MatchesCSR) into the registration authority package, where it
makes more sense anyhow.
I also removed a handful of equality testing functions in objects.go that were
only used by MatchesCSR and whose purpose is better served by reflect.DeepEqual.
This was to avoid having to also move those equality testing functions into the
registration authority.
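For illustration, the clock-aware date check looks roughly like the sketch below; it assumes the github.com/jmhodges/clock package, and the checkCertDates name and limits are simplified stand-ins for what MatchesCSR actually verifies:

```go
package ra

import (
	"crypto/x509"
	"errors"
	"time"

	"github.com/jmhodges/clock"
)

// RegistrationAuthorityImpl is trimmed down to the single field relevant here.
type RegistrationAuthorityImpl struct {
	clk clock.Clock // tests can inject clock.NewFake() in place of the real clock
}

// checkCertDates stands in for the date portion of MatchesCSR: validity is
// judged against the RA's clock rather than time.Now(), so a pre-generated
// test certificate no longer trips the sanity checks.
func (ra *RegistrationAuthorityImpl) checkCertDates(cert *x509.Certificate, maxValidity time.Duration) error {
	now := ra.clk.Now()
	if cert.NotBefore.After(now) {
		return errors.New("certificate is not yet valid")
	}
	if cert.NotAfter.After(now.Add(maxValidity)) {
		return errors.New("certificate is valid for longer than expected")
	}
	return nil
}
```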
Challenge URIs should be determined by the WFE at fetch time, rather than stored
alongside the challenge in the DB. This simplifies a lot of the logic, and
allows us to remove a code path in NewAuthorization where we create an
authorization, then immediately save it with modifications to the challenges.
This change also gives challenges their own endpoint, which contains the
challenge id rather than the challenge's offset within its parent authorization.
This is also a first step towards replacing UpdateAuthorization with
UpdateChallenge: https://github.com/letsencrypt/boulder/issues/760.
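A minimal sketch of the fetch-time URI construction; the challengeBasePath constant and the trimmed-down Challenge and Authorization types are assumptions for illustration, not Boulder's exact definitions:

```go
package wfe

import "fmt"

const challengeBasePath = "/acme/challenge/"

// Challenge and Authorization are stripped to the fields relevant here.
type Challenge struct {
	ID  int64
	URI string // stamped by the WFE on the way out, never stored in the DB
}

type Authorization struct {
	ID         string
	Challenges []Challenge
}

// prepChallengesForDisplay gives each challenge its own endpoint, keyed by the
// challenge's database ID instead of its offset within the parent
// authorization.
func prepChallengesForDisplay(baseURL string, authz *Authorization) {
	for i := range authz.Challenges {
		authz.Challenges[i].URI = fmt.Sprintf("%s%s%d", baseURL, challengeBasePath, authz.Challenges[i].ID)
	}
}
```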
The RA did not have any code to test what happened when a challenge
failed, which let the authz schema change in #705 slip in unnoticed.
This change sets the expires column in authz back to NULLable and fixes
the RA tests (including, using clock.Clocks in the RA).
Fixes #744.
The ca's TestRevoke was failing occasionally.
The test was saying "has the certificate's OCSPLastUpdated been set to a
time within the last second?" as a way to see if the revocation updated
the OCSPLastUpdated. OCSPLastUpdated was not being set on revocation,
but the test still passed most of the time.
It passed most of the time because the creation of the certificate
(which also sets OCSPLastUpdated) had usually happened within the last
second. So, even without revocation, OCSPLastUpdated was set to
something in the last second simply because the test runs fast.
Threading a clock.FakeClock through the CA induced the test to fail
consistently. Debugging and threading a FakeClock through the SA caused
changes in times reported but did not fix the test because the
OCSPLastUpdated was simply not being updated. There were no tests for
the sa.MarkCertificateRevoked API that was being called by
ca.RevokeCertificate.
Now the SA has tests for its MarkCertificateRevoked method. It uses a
fake clock to ensure not just that OCSPLastUpdated is set correctly, but
that RevokedDate is, as well. The test also checks for the
CertificateStatus's status and RevocationCode changes.
The SA and CA now use Clocks throughout instead of time.Now(), allowing
for more reliable and expansive testing in the future.
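The shape of the fake-clock testing is roughly the following sketch, assuming the github.com/jmhodges/clock package; the revoker type is an illustrative stand-in, since the real MarkCertificateRevoked also touches the status and RevocationCode columns:

```go
package sa

import (
	"testing"
	"time"

	"github.com/jmhodges/clock"
)

// revoker stands in for the part of the SA that stamps revocation times.
type revoker struct {
	clk clock.Clock
}

func (r *revoker) revoke() (ocspLastUpdated, revokedDate time.Time) {
	now := r.clk.Now() // never time.Now()
	return now, now
}

func TestRevocationTimesSketch(t *testing.T) {
	fc := clock.NewFake()
	fc.Add(24 * time.Hour) // move the fake clock away from its default value
	r := &revoker{clk: fc}

	updated, revoked := r.revoke()
	// With a fake clock the expected values are exact, so the assertion cannot
	// pass by accident just because the test ran within a second of issuance.
	if !updated.Equal(fc.Now()) || !revoked.Equal(fc.Now()) {
		t.Errorf("revocation timestamps were not taken from the injected clock")
	}
}
```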
The CA had to gain a public Clock field in order for the RA to use the
CertificateAuthorityImpl struct without using its constructor
function. Otherwise, the field would be nil and cause panics in the RA
tests.
The RA tests would similarly panic when the CAImpl attempts to log
something with its private log field (nil in those tests), but we get
"lucky" because the RA tests only cause the CAImpl to log when they are
broken.
There is a TODO there to make the CAImpl's constructor function take
just what it needs to operate instead of taking large config objects and
doing file IO and such. The Clk field should be made private and the log
field filled in for the RA tests.
Fixes #734.
Managing the single row needed in serialNumber is a bit of a hassle in a
world where we delete all of the rows in all tables in our tests. Plus,
if someone does that on their development database, they have to drop
all the way to the start of the migrations and run them again. It's a
bummer.
Instead, use the MySQL id generation design as [described and used by
Flickr](https://code.flickr.net/2010/02/08/ticket-servers-distributed-unique-primary-keys-on-the-cheap/). That design
doesn't need a pre-existing row for its first insert to work correctly.
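A sketch of that approach through database/sql, assuming a hypothetical serialTicket table with an AUTO_INCREMENT id column and a single-character stub column under a unique key:

```go
package sa

import "database/sql"

// nextSerialID grabs the next unique id from the ticket table. REPLACE works
// fine on an empty table, so no seed row is needed after the tests (or a
// developer) wipe the database.
func nextSerialID(db *sql.DB) (int64, error) {
	res, err := db.Exec("REPLACE INTO serialTicket (stub) VALUES ('a')")
	if err != nil {
		return 0, err
	}
	return res.LastInsertId()
}
```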
(That post mentions maybe using `ON DUPLICATE KEY UPDATE`, but that
approach has subtle bugs that even using `LAST_INSERT_ID(id)` doesn't
fix. The `UPDATE` clause doesn't run on the first `INSERT`, yet that
`INSERT` returns 1. On the second `INSERT` attempt, id 1 is returned
again, because `LAST_INSERT_ID(id)` is 0 since no increment was done,
and all subsequent `INSERT` attempts end up off by one.)
Fixes #649.
This has required some substantive changes to the tests. Where
previously the foreign key constraints did not exist in the tests, now
that we use the actual production schema, they do. This has mostly led
to having to create real Registrations in the sa, ca, and ra tests. Long
term, it would be nice to fake this out better instead of needing a real
sa in the ca and ra tests.
The "goose" being referred to is <https://bitbucket.org/liamstask/goose>.
Database migrations are stored in a _db directory inside the relevant
owner service (namely ca/_db and sa/_db, today).
An example of migrating up with goose:
`goose -path ./sa/_db -env test up`
An example of creating a new migration with goose:
`goose -path ./sa/_db -env test create NameOfNewMigration sql`
Notice the "sql" at the end. It would be easier for us to manage sql
migrations. I would like us to stick to only them. In case we do use Go
migrations in the future, the underscore at the beginning of "_db" will
at least prevent build errors when using "..." with goose-created Go
files. Goose-created Go migrations do not compile with the go tool but
only with goose.
Fixes #111
Unblocks #623
Fixes #579 (which blocks #132).
This changes the SA to use a unique index on the sha256 of a
Registration's JWK's public key data instead of on the full serialized
JSON of the JWK. This corrects multiple problems:
1. MySQL/MariaDB no longer complain about keys being larger than the
largest allowed key size in an index
2. We no longer have to worry about large keys not being seen as unique
3. We no longer have to worry about the JWK's JSON being serialized with its inner keys in different orders, causing lookups to come back incorrectly empty or duplicate registrations to be written as though they were unique.
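For illustration, the indexed value can be derived roughly as below; the keyDigest name is an assumption for this sketch, not necessarily Boulder's actual helper:

```go
package core

import (
	"crypto"
	"crypto/sha256"
	"crypto/x509"
	"encoding/base64"
)

// keyDigest hashes the DER encoding of the JWK's underlying public key, giving
// a short, stable value to place a unique index on, independent of how the
// JWK's JSON happens to order its members when serialized.
func keyDigest(pub crypto.PublicKey) (string, error) {
	der, err := x509.MarshalPKIXPublicKey(pub)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(der)
	return base64.StdEncoding.EncodeToString(sum[:]), nil
}
```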
This change also hides the details of how Registrations are stored in
the database from the other services outside of SA. This will give us
greater flexibility if we need to move them to another database, or
change their schema, etc.
Also, adds some tests for NoSuchRegistration in the SA.