* Moves revocation from the CA to the OCSP-Updater: the RA marks certificates as
revoked, then waits for the OCSP-Updater to create a new (final) revoked response
* Merges the ocspResponses table into the certificateStatus table and uses UPDATEs
to refresh the OCSP response (rather than INSERT-only, since responses are regenerated
quite often and an append-only table would grow extremely large; see the sketch below)
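A minimal sketch of the UPDATE-in-place approach, assuming a merged certificateStatus table with ocspResponse and ocspLastUpdated columns; the helper name and exact schema are illustrative, not Boulder's actual code:

```go
package updater

import (
	"database/sql"
	"time"
)

// updateOCSPResponse refreshes the OCSP response on the existing
// certificateStatus row rather than inserting a new row per response,
// keeping the table from growing on every regeneration.
func updateOCSPResponse(db *sql.DB, serial string, response []byte, now time.Time) error {
	_, err := db.Exec(
		`UPDATE certificateStatus
		 SET ocspResponse = ?, ocspLastUpdated = ?
		 WHERE serial = ?`,
		response, now, serial,
	)
	return err
}
```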
We had a staging deploy go bad because of missing error handling when the
"source" key was absent from the JSON config. While we debugged, I wrote
some tests.
Fixes #936.
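A minimal sketch of the kind of validation that was missing: fail fast with a clear error when "source" is absent. The type and function names are illustrative, not the actual config code:

```go
package responder

import (
	"encoding/json"
	"errors"
	"os"
)

type ocspConfig struct {
	Source string `json:"source"`
}

// loadConfig returns an explicit error when the "source" key is missing,
// instead of letting a zero value propagate into a bad deploy.
func loadConfig(path string) (*ocspConfig, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, err
	}
	var c ocspConfig
	if err := json.Unmarshal(data, &c); err != nil {
		return nil, err
	}
	if c.Source == "" {
		return nil, errors.New(`config is missing the "source" key`)
	}
	return &c, nil
}
```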
Currently, the whitelisted registration ID is one that is impossible for the
database to return. Once the partner's registration is in place, we can
deploy a change to it.
Fixes #810
Also moves the first OCSP response generation from the CA to the OCSP updater. This patch lays the
groundwork for moving CT submission and adding CT backfill to the OCSP updater.
Looks up registrations by a hash of the key instead of the submitted key. This
minimizes the chances of unexpected JWK fields in the submitted key altering its
interpretation without altering the lookup in the registrations table.
In the process, fix handling of NoSuchRegistration responses.
Fixes https://github.com/letsencrypt/boulder/issues/865.
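A hedged sketch of the hashed-lookup idea: derive a fixed lookup token from a digest of the key's serialization, so extra JWK fields can't change how the key is interpreted without also changing the lookup. The helper name is illustrative, and a real implementation would hash a canonical encoding of the key:

```go
package sa

import (
	"crypto/sha256"
	"encoding/base64"
	"encoding/json"
)

// hashKey returns a stable, fixed-length token suitable as the unique
// index into the registrations table.
func hashKey(jwk interface{}) (string, error) {
	serialized, err := json.Marshal(jwk)
	if err != nil {
		return "", err
	}
	digest := sha256.Sum256(serialized)
	return base64.StdEncoding.EncodeToString(digest[:]), nil
}
```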
The version number is now logged after parsing the config file, setting up stats,
and dialing the syslogger, but still before trying to initialize the given server.
This means that we are more likely to get version numbers logged for some common
runtime failures.
* Rewrite JSONDuration as ConfigDuration, which can handle both JSON and YAML unmarshaling (see the sketch after this list)
* Factor out RPC certificate count request struct
* Return 429 to WFE on rate limit exceeded
* Fix wonky RateLimitPolicy comment
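A minimal sketch of such a ConfigDuration, assuming the YAML side follows the gopkg.in/yaml.v2 Unmarshaler interface; this is illustrative, not necessarily Boulder's exact definition:

```go
package cmd

import (
	"encoding/json"
	"time"
)

// ConfigDuration wraps time.Duration so config files can say "90s" or "1h".
type ConfigDuration struct {
	time.Duration
}

// UnmarshalJSON parses a JSON string like "90s" into the wrapped Duration.
func (d *ConfigDuration) UnmarshalJSON(b []byte) error {
	var s string
	if err := json.Unmarshal(b, &s); err != nil {
		return err
	}
	dur, err := time.ParseDuration(s)
	if err != nil {
		return err
	}
	d.Duration = dur
	return nil
}

// UnmarshalYAML satisfies yaml.v2's Unmarshaler; unmarshal hands us the
// underlying scalar, which we read as a string.
func (d *ConfigDuration) UnmarshalYAML(unmarshal func(interface{}) error) error {
	var s string
	if err := unmarshal(&s); err != nil {
		return err
	}
	dur, err := time.ParseDuration(s)
	if err != nil {
		return err
	}
	d.Duration = dur
	return nil
}
```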
Adds a new service, Publisher, which exists to submit issued certificates to various Certificate Transparency logs. Once a certificate is submitted, the Publisher will also parse the returned SCT (Signed Certificate Timestamp) receipts, which are used to prove inclusion in a specific log, and store them in the SA database. An SA migration adds the new SCT receipt table.
The Publisher only exposes one method, SubmitToCT, which ca.IssueCertificate calls in a goroutine so as not to block other issuance operations. This method iterates through all of the configured logs, attempting to submit the certificate, and any required intermediate certificates, to each of them. If a submission to a log fails, it is retried a pre-configured number of times, using either a back-off set in a Retry-After header or a pre-configured back-off between submission attempts (a sketch of this retry loop follows below).
This changeset is the first of a number of changes ending with serving SCT receipts in OCSP responses and purposefully leaves out the following pieces for follow-up PRs.
* A fake CT server for integration testing
* An external tool to search the database for certificates lacking a full set of SCT receipts
* A method to construct X.509 v3 extensions containing receipts for the OCSP responder
* Returned SCT signature verification (beyond just checking that the signature is of the correct type so we aren't just serving arbitrary binary blobs to clients)
Resolves #95.
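A self-contained sketch of the per-log retry loop described above; the interface and names are stand-ins, not the Publisher's actual types:

```go
package publisher

import (
	"fmt"
	"time"
)

// logClient is an illustrative stand-in for a CT log client.
type logClient interface {
	// AddChain submits a certificate chain and returns the raw SCT, or an
	// error plus an optional Retry-After delay suggested by the log.
	AddChain(chain [][]byte) (sct []byte, retryAfter time.Duration, err error)
}

// submitWithRetry tries a single log up to maxRetries+1 times, sleeping
// either the log's Retry-After value or the pre-configured back-off
// between attempts.
func submitWithRetry(c logClient, chain [][]byte, maxRetries int, backoff time.Duration) ([]byte, error) {
	var lastErr error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		sct, retryAfter, err := c.AddChain(chain)
		if err == nil {
			return sct, nil
		}
		lastErr = err
		delay := backoff
		if retryAfter > 0 {
			delay = retryAfter
		}
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("all submission attempts failed: %v", lastErr)
}
```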
The WFE test relies on a pre-generated cert. Since there are some sanity checks
on the dates in certs, we were getting errors during the test.
One quick fix is to have those sanity checks rely on RA's clock object, which
can be replaced with a fake for testing. In order to do that, I had to move the
sanity check (MatchesCSR) into the registration authority package, where it
makes more sense anyhow.
I also removed a handful of equality-testing functions in objects.go that were
only used by MatchesCSR and whose purpose is better served by reflect.DeepEqual
(see the sketch below). This avoids having to also move those equality-testing
functions into the registration authority.
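For illustration, reflect.DeepEqual covers what the removed helpers did, shown here with a stand-in type rather than Boulder's actual ones:

```go
package main

import (
	"fmt"
	"reflect"
)

type identifier struct {
	Type  string
	Value string
}

func main() {
	a := []identifier{{"dns", "example.com"}}
	b := []identifier{{"dns", "example.com"}}
	// DeepEqual walks slices and structs element by element, which is
	// exactly what the bespoke equality helpers were hand-rolling.
	fmt.Println(reflect.DeepEqual(a, b)) // true
}
```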
The activity monitor spawns a goroutine only to emit some stats. That
goroutine caused some confusion, and some other PRs around this code got more
complicated trying to manage thread-safety.
But the activity monitor doesn't really need that goroutine to emit its stats. So, we
simplify by removing two letters and a space (the "go " in front of the call).
Challenge URIs should be determined by the WFE at fetch time, rather than stored
alongside the challenge in the DB. This simplifies a lot of the logic, and
allows us to remove a code path in NewAuthorization where we create an
authorization, then immediately save it with modifications to the challenges.
This change also gives challenges their own endpoint, which contains the
challenge ID rather than the challenge's offset within its parent authorization
(a sketch follows below).
This is also a first step towards replacing UpdateAuthorization with
UpdateChallenge: https://github.com/letsencrypt/boulder/issues/760.
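A sketch of deriving the URI at fetch time from the challenge's own ID; the path layout and types are illustrative:

```go
package wfe

import "fmt"

// challenge is a pared-down stand-in for Boulder's challenge type.
type challenge struct {
	ID int64
}

// challengeURI computes the URI when the challenge is served, so nothing
// URI-shaped has to be stored alongside the challenge in the DB.
func challengeURI(baseURL string, chall challenge) string {
	return fmt.Sprintf("%s/acme/challenge/%d", baseURL, chall.ID)
}
```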
The RA did not have any code to test what occurred when a challenge
failed. That gap let a problem slip in with the authz schema change in #705.
This change sets the expires column in authz back to NULLable and fixes
the RA tests (including using clock.Clocks in the RA).
Fixes #744.
The ca's TestRevoke was failing occasionally.
The test was saying "has the certificate's OCSPLastUpdated been set to a
time within the last second?" as a way to see if the revocation updated
the OCSPLastUpdated. OCSPLastUpdated was not being set on revocation,
but the test still passed most of the time.
The test still passed most of the time because the creation of the
certificate (which also sets the OCSPLastUpdated) has usually happened
within the last second. So, even without revocation, the OCSPLastUpdated
was set to something in the last second because the test is fast.
Threading a clock.FakeClock through the CA induced the test to fail
consistently. Debugging and threading a FakeClock through the SA caused
changes in times reported but did not fix the test because the
OCSPLastUpdated was simply not being updated. There were no tests for
the sa.MarkCertificateRevoked API that was being called by
ca.RevokeCertificate.
Now the SA has tests for its MarkCertificateRevoked method. It uses a
fake clock to ensure not just that OCSPLastUpdated is set correctly, but
that RevokedDate is, as well. The test also checks for the
CertificateStatus's status and RevocationCode changes.
The SA and CA now use Clocks throughout instead of time.Now(), allowing
for more reliable and expansive testing in the future.
The CA had to gain a public Clock field in order for the RA to use the
CertificateAuthorityImpl struct without using its constructor
function. Otherwise, the field would be nil and cause panics in the RA
tests.
The RA tests would similarly panic when the CAImpl attempts to log
something with its private log field (nil in those tests), but we're
getting "lucky" because the RA tests only cause the CAImpl to log when
they are broken.
There is a TODO there to make the CAImpl's constructor function take
just what it needs to operate instead of taking large config objects and
doing file IO and such. The Clk field should be made private and the log
field filled in for the RA tests.
Fixes #734.
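A minimal sketch of the fake-clock pattern described above, using the clock package's fake; the commented calls are illustrative of how the SA would consume it:

```go
package sa_test

import (
	"testing"
	"time"

	"github.com/jmhodges/clock"
)

func TestRevocationTimestamps(t *testing.T) {
	fc := clock.NewFake()
	fc.Set(time.Date(2015, 6, 1, 0, 0, 0, 0, time.UTC))

	// Hand the fake clock to the SA so every internal "now" is
	// deterministic, e.g.:
	//   sa := NewSQLStorageAuthority(dbMap, fc)
	//   sa.MarkCertificateRevoked(serial, reasonCode)

	// Because a fake clock never advances on its own, tests can assert
	// exact equality instead of "within the last second".
	want := time.Date(2015, 6, 1, 0, 0, 0, 0, time.UTC)
	if !fc.Now().Equal(want) {
		t.Fatalf("fake clock moved: got %s, want %s", fc.Now(), want)
	}
}
```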
The expiration mailer doesn't send email when the expiration is exactly
as far away as one of the "nag" times.
Adds a test for the bounds-checking behavior.
Previously the expiration times were right on the cusp of being included or not
included in the query. Adjusted the times to be solidly in the right range.
In a future PR, we should refactor the code to generate absolute expiration
times and have findExpiringCertificates take a time param, so the test isn't
dependent on time.Now(). A sketch of that refactor follows.
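A sketch of the suggested refactor, assuming the nag offsets are relative durations; the function name is illustrative:

```go
package mailer

import "time"

// absoluteNagCutoffs turns relative nag offsets (e.g. 24h, 72h) into
// absolute cutoff times from an explicitly injected "now", so callers
// like findExpiringCertificates never need to consult time.Now().
func absoluteNagCutoffs(now time.Time, nags []time.Duration) []time.Time {
	cutoffs := make([]time.Time, 0, len(nags))
	for _, nag := range nags {
		cutoffs = append(cutoffs, now.Add(nag))
	}
	return cutoffs
}
```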
This removes TestMode from the boulder-va command and from ca.Config
(it was only used in the VA), and gets the integration config to specify
the ports it should use explicitly.
(It also removes a DBDriver field from ca.Config that was left over from
letsencrypt/boulder#624.)
Fixes #627.
This has required some substantive changes to the tests. Where
previously the foreign key constraints did not exist in the tests, now
that we use the actual production schema, they do. This has mostly led
to having to create real Registrations in the sa, ca, and ra tests. Long
term, it would be nice to fake this out better instead of needing a real
sa in the ca and ra tests.
The "goose" being referred to is <https://bitbucket.org/liamstask/goose>.
Database migrations are stored in a _db directory inside the relevant
owner service (namely, ca/_db, and sa/_db, today).
An example of migrating up with goose:
goose -path ./sa/_db -env test up
An example of creating a new migration with goose:
goose -path ./sa/_db -env test create NameOfNewMigration sql
Notice the "sql" at the end. It would be easier for us to manage sql
migrations. I would like us to stick to only them. In case we do use Go
migrations in the future, the underscore at the beginning of "_db" will
at least prevent build errors when using "..." with goose-created Go
files. Goose-created Go migrations do not compile with the go tool but
only with goose.
Fixes #111
Unblocks #623
If two OCSP responses were generated in the same second, the earlier one would
sometimes take priority, leading to a "good" response for revoked
certificates and making the OCSP integration test flaky.
This change moves the integration tests and test/boulder-config.json
from SQLite to MariaDB.
It does not port the unit tests over, unfortunately. That's a much more
invasive change.
This also updates the Dockerfile to include the MariaDB and RabbitMQ
requirements of start.py, as well as adjusting the CMD to expose the
boulder server to the host machine. The Dockerfile also needed to have
its Go version bumped, and test.sh had to grow some explicit
"function"s.
Updates #132
We move the admin-revoker and expiration-mailer to using the
SA.GetRegistration RPC method instead of digging into the database
itself.
This allows hiding the registration model layer inside the SA, so
we can do fancy things with sha256 for the unique index inside
it. That will happen in a later commit; see #579.
By exposing fewer details about how Registration is stored, we gain more
flexibility to fix up how it's stored.
In the expiration-mailer, the performance hit from filtering on
mailto early is likely negligible, and possibly even a benefit given the
memory cost of joins in MySQL.
If need be, we can build a bulk RPC layer for the SA that provides the data
we need in findExpiringCertificates. It'll be easier than trying to
scale and change the storage layer underneath for each consumer.
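A hedged sketch of the consumer-side pattern: tools depend on a narrow SA interface rather than the registrations table. The types shown are trimmed stand-ins, not Boulder's actual definitions:

```go
package tools

import "fmt"

// registration is a pared-down stand-in for Boulder's Registration.
type registration struct {
	ID      int64
	Contact []string
}

// storageGetter is the narrow view a tool like admin-revoker or the
// expiration-mailer needs; how Registration is stored stays hidden
// behind it.
type storageGetter interface {
	GetRegistration(id int64) (registration, error)
}

func contactsForRegistration(sa storageGetter, regID int64) ([]string, error) {
	reg, err := sa.GetRegistration(regID)
	if err != nil {
		return nil, fmt.Errorf("fetching registration %d: %v", regID, err)
	}
	return reg.Contact, nil
}
```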