This allows repeated runs using the same hierarchy, and avoids spurious
errors from ocsp-updater saying "This CA doesn't have an issuer cert
with ID XXX"
Fixes #5721
Previously, `startservers.start()` would implicitly build the binaries.
This change splits `startservers.install()` out as a separate step that
must happen first. This is useful because it allows us to ensure the
`ceremony` tool has been built before we run `setupHierarchy`.
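A minimal sketch of the intended ordering (the module and function names come from the description above; the argument lists, helper body, and error handling are assumptions, not the real start.py code):

```python
import sys
import startservers  # boulder's test helper module


def setup_hierarchy():
    """Placeholder for the setupHierarchy step, which runs the `ceremony` tool."""


def launch():
    # Build all binaries first, including the `ceremony` tool...
    if not startservers.install(race_detection=False):
        sys.exit("failed to build/install binaries")
    # ...so the hierarchy setup can run `ceremony`...
    setup_hierarchy()
    # ...and then start the already-built services.
    if not startservers.start(race_detection=False):
        sys.exit("failed to start servers")
```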
Also, add a `-s` flag to `curl` when checking whether start.py resulted
in a successful startup. This reduces the amount of log spam when
startup fails.
Delete the expired-authz-purger2 binary, as well as the
various config files, tests, and test helpers that exist to
support it.
This utility is no longer necessary, as it has not been
running for quite some time, and we have developed
alternative means of keeping the growth of the authz
table under control.
Fixes #5568
Delete the boulder-janitor binary, and the various configs
and tests which exist to support it.
This tool has not been actively running in quite some time.
The tables which it covers are either supported by our
more recent partitioning methods, or are rate-limit tables
that we hope to move out of mysql entirely. The cost of
maintaining the janitor is not offset by the benefits it brings
us (or the lack thereof).
Fixes #5569
- Edit integration test to start expired-authz-purger2 with config/
config-next
- Move config from `cmd/expired-authz-purger2/config.json` to
`test/config/expired-authz-purger2.json`
- Add a copy of `test/config/expired-authz-purger2.json` to
`test/config-next/`
Fixes #5351
Modifies test.sh to provide a menu of long and short option flags
with helpful documentation. Improves UX by printing the settings
used before each run. Improves visual parsability by printing a
bold blue section heading with a title for each test. Using the -eu set
flags and an EXIT trap, we now exit on any failure or undefined variable
and print FAILURE in red on the last line. If the script does not exit
prematurely, then SUCCESS is printed in green on the last line.
Adds the following options:
- Parse and print a full list of available integration tests.
- Enable the -race flag for unit tests.
- Enable next config.
- Add integration test and unit test filter arguments.
Fixes the -race flag not being enabled by the previous tests. The script
comments seem to indicate that this is desirable, but either through
error or intention TRAVIS was never being set to true; I confirmed
this behavior by running CI tests and observing which options were
set before making this change. If this is not desirable, we can make
a change in .travis.yml to omit the -e flag on unit tests and unit tests
with next-config.
Fixes the dir var not being set. Since the previous script did not use the
set -u flag, undefined vars would not cause an exit. Once this flag was
set it was apparent that the dir var was never being defined. Instead, the
script was just using .coverprofile for the coverage test option.
Removes FAILURE and SUCCESS on exit from the Python wrapper
in test/integration-test.py.
Modifies .travis.yml to use the new short option flags instead of
environment variables. Included comments here to describe all of
the available flags.
Modifies README.md to add examples of the new script in use as
well as examples of running tests without the new script. This should
give folks a good idea of what the script is wrapping.
Part of #5139
Add passthrough for certain environment variables to
docker-compose.yml, making it easier to set them:
RUN=unit docker-compose run --use-aliases boulder ./test.sh
Use 4001 instead of 4443 to monitor boulder-wfe2's health. This avoids
a spurious error log about a failed TLS handshake.
Remove unused code around running Certbot in integration tests.
Once we de-dupe startservers.install calls we may want to move
setupHierarchy there, but for now we need to call it from
integration-test.py and start.py.
Fixes #4842
This ended up taking a lot more work than I expected. In order to make the implementation more robust, a bunch of stuff we previously relied on has been ripped out to reduce unnecessary complexity (I think I insisted on a bunch of this in the first place, so I'm glad I can kill it now).
In particular this change:
* Removes bhsm and pkcs11-proxy: softhsm and pkcs11-proxy don't play well together, and any softhsm manipulation would need to happen on bhsm, then require a restart of pkcs11-proxy to pull in the on-disk changes. This makes manipulating softhsm from the boulder container extremely difficult, and because of the need to initialize new slots on each run (described below) we need direct access to the softhsm2 tools, since pkcs11-tool cannot do slot initialization operations over the wire. I originally argued for bhsm as a way to mimic a network-attached HSM, mainly so that we could do network-level fault testing. In reality we've never actually done this, and the extra complexity is not really realistic for a handful of reasons. It seems better to just rip it out and operate directly on a local softhsm instance (the other option would be to use pkcs11-proxy locally, but this still would require manually restarting the proxy whenever softhsm2-util was used, and wouldn't really offer any realistic benefit).
* Initializes the softhsm slots on each integration test run, rather than when creating the docker image (this is necessary to prevent churn in test/cert-ceremonies/generate.go, which would need to be updated to reflect the new slot IDs each time a new boulder-tools image was created since slot IDs are randomly generated)
* Installs softhsm from source so that we can use a more up-to-date version (2.5.0 vs. 2.2.0, which is in the Debian repo)
* Generates the root and intermediate private keys in softhsm and writes out the root and intermediate public keys to /tmp for use in integration tests (the existing test-{ca,root} certs are kept in test/ because they are used in a whole bunch of unit tests. At some point these should probably be renamed/moved to be more representative of what they are used for, but that is left for a follow-up in order to keep the churn in this PR as related to the ceremony work as possible)
Another follow-up item here is that we should really be zeroing out the database at the start of each integration test run, since certain things like certificates and ocsp responses will be signed by a key/issuer that is no longer in use/doesn't match the current key/issuer.
Fixes #4832.
Right now we show output like:
```
Traceback (most recent call last):
  File "test/integration-test.py", line 60, in run_go_tests
    subprocess.check_call(cmdLine, shell=False, stderr=subprocess.STDOUT)
  File "/usr/lib/python3.5/subprocess.py", line 271, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['go', 'test', '-tags', 'integration', '-count=1', '-race', './test/integration']' returned non-zero exit status 1

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "test/integration-test.py", line 414, in <module>
    main()
  File "test/integration-test.py", line 293, in main
    run_go_tests(args.test_case_filter)
  File "test/integration-test.py", line 62, in run_go_tests
    raise(Exception("%s. Output:\n%s" % (e, e.output)))
Exception: Command '['go', 'test', '-tags', 'integration', '-count=1', '-race', './test/integration']' returned non-zero exit status 1. Output:
None
```
This change removes the try / raise clauses that were causing this
double exception logging. The original purpose of these clauses was to
make sure we logged output on failure. To continue to fulfill that
purpose, I switched the run function to use check_call instead of
check_output. check_output captures the stdout; check_call emits it to
the caller's stdout as normal, so we still see the output.
I also changed the two cases that actually wanted to process output so
they use check_output directly.
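To illustrate the distinction, a sketch assuming a hypothetical wrapper named `run`; the real code in helpers.py may differ:

```python
import subprocess


def run(cmd, **kwargs):
    # check_call does not capture stdout: the child writes directly to our
    # stdout, so the output is visible even when the command fails, and there
    # is no need to re-raise with e.output (which was None under check_call).
    subprocess.check_call(cmd, shell=False, stderr=subprocess.STDOUT, **kwargs)


def query_output(cmd):
    # The callers that actually want to inspect the output keep using
    # check_output, which captures and returns the child's stdout.
    return subprocess.check_output(cmd, stderr=subprocess.STDOUT)
```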
In Python3, the output of subprocess.check_output is of type bytes.
That means calling print() on the output will print \n instead of an
actual newline. This PR adds decoding to the output of mysql in the slow
query test, bringing it into line with other check_output calls.
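For example, a hedged sketch of the pattern (the actual mysql query and host used by the slow query test are simplified here; only the decode step is the point):

```python
import subprocess

# Under Python 3, check_output returns bytes; decode to str before printing so
# newlines render as line breaks instead of literal "\n" escape sequences.
raw = subprocess.check_output(
    ["mysql", "-h", "boulder-mysql", "-e", "SHOW VARIABLES LIKE 'slow_query_log'"])
print(raw.decode())
```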
This also removes a redundant "def run" that is shadowed by the
definition in helpers.py (and was also missing a decode() call).
Python 2 is over in 1 month 4 days: https://pythonclock.org/
This rolls forward most of the changes in #4313.
The original change was rolled back in #4323 because it
broke `docker-compose up`. This change fixes those original issues by
(a) making sure `requests` is installed and (b) sourcing a virtualenv
containing the `requests` module before running start.py.
Other notable changes in this:
- Certbot has changed the developer instructions to install specific packages
rather than rely on `letsencrypt-auto --os-packages-only`, so we follow suit.
- Python3 now has a `bytes` type that is used in some places that used to
provide `str`, and all `str` are now Unicode. That means going from `bytes` to
`str` and back requires explicit `.decode()` and `.encode()`.
- Moved from urllib2 to requests in many places.
We intentionally use a SLEEP in a SQL query to trigger timeout behavior.
This caused integration test failures locally (where unit tests are run
in the same session as integration tests).
This updates the `github.com/eggsampler/acme` dependency used in our Go-based
integration tests to v3. Notably this fixes a data race we encountered in CI.
With the data race fixed this branch can also revert
54a798b7f6 and resolve
https://github.com/letsencrypt/boulder/issues/4542
I ran a `go mod tidy` to clean up the old `v2` copy of the dep and it also
removed a few stale cfssl/mysql items from the `go.mod`.
Upstream library's tests are confirmed to pass:
```
~/go/src/github.com/eggsampler/acme$ git log --pretty=format:'%h' -n 1
b581dc6
~/go/src/github.com/eggsampler/acme$ make pebble
mkdir -p /home/daniel/go/src/github.com/letsencrypt/pebble
git clone --depth 1 https://github.com/letsencrypt/pebble.git /home/daniel/go/src/github.com/letsencrypt/pebble \
|| (cd /home/daniel/go/src/github.com/letsencrypt/pebble; git checkout -f master && git reset --hard HEAD && git pull -q)
fatal: destination path '/home/daniel/go/src/github.com/letsencrypt/pebble' already exists and is not an empty directory.
Already on 'master'
Your branch is up-to-date with 'le/master'.
HEAD is now at 6c2d514 wfe: compare Identifier.Type with acme.IndentifierIP (#287)
docker-compose -f /home/daniel/go/src/github.com/letsencrypt/pebble/docker-compose.yml up -d
Creating network "pebble_acmenet" with driver "bridge"
Creating pebble_challtestsrv_1 ... done
Creating pebble_pebble_1 ... done
while ! wget --delete-after -q --no-check-certificate "https://localhost:14000/dir" ; do sleep 1 ; done
go clean -testcache
go test -race -coverprofile=coverage_18.txt -covermode=atomic github.com/eggsampler/acme/v3
ok github.com/eggsampler/acme/v3 24.292s coverage: 83.0% of statements
docker-compose -f /home/daniel/go/src/github.com/letsencrypt/pebble/docker-compose.yml down
Stopping pebble_pebble_1 ... done
Stopping pebble_challtestsrv_1 ... done
Removing pebble_pebble_1 ... done
Removing pebble_challtestsrv_1 ... done
Removing network pebble_acmenet
```
I couldn't get this to work cleanly with
`--log-queries-not-using-indexes` because a couple of queries show up
during integration test runs, seemingly because the tables involved are
small enough that the optimizer finds it faster to skip the index.
Some possible followups:
- Allowlist those queries, or
- Preload the DB with a certain number of certificates before the start
of testing.
The `boulder-janitor` is extended to cleanup rows from the `orders` table that
have expired beyond the configured grace period, and the associated referencing
rows in `requestedNames`, `orderFqdnSets`, and `orderToAuthz2`.
To make implementing the transaction work for the deletions easier/consistent
I lifted the SA's `WithTransaction` code and assoc. functions to a new shared
`db` package. This also let me drop the one-off `janitorDb` interface from the
existing code.
There is an associated change to the `GRANT` statements for the `janitor` DB
user to allow it to find/delete the rows related to orders.
Resolves https://github.com/letsencrypt/boulder/issues/4527
We now expect that the config dir is always set, so we make that
explicit in the integration test and error if that's not true.
This change also renames the variable to just "config_dir", and removes
the parameter to startservers.start, which is currently never set to
anything other than its default value.
This also explicitly sets the environment variable in .travis.yml.
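A hedged sketch of that check; the environment variable name and accepted values are assumptions based on the description, not necessarily the real code:

```python
import os

# Assumption: the config dir arrives via an environment variable named
# BOULDER_CONFIG_DIR and must point at one of the checked-in test configs.
config_dir = os.environ.get("BOULDER_CONFIG_DIR", "")
if config_dir not in ("test/config", "test/config-next"):
    raise Exception("BOULDER_CONFIG_DIR must be test/config or test/config-next")
```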
A gauge wasn't the appropriate stat type choice for this usage.
Switching the stat to be a counter instead of a gauge means we can't
detect when the janitor has finished its work in the integration test by
watching for this stat to drop to zero for all the table labels we're
concerned with. Instead the test is updated to watch for the counter
value to stabilize for a period longer than the workbatch sleep.
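Roughly, the stabilization check works like the sketch below; `read_counter` and the timings are hypothetical stand-ins rather than the real test code:

```python
import time


def wait_for_stable_counter(read_counter, settle_seconds, poll_seconds=1):
    # Treat the janitor as finished once its deletion counter has not changed
    # for longer than the work-batch sleep (settle_seconds).
    last = read_counter()
    unchanged_since = time.time()
    while time.time() - unchanged_since < settle_seconds:
        time.sleep(poll_seconds)
        current = read_counter()
        if current != last:
            last, unchanged_since = current, time.time()
    return last
```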
* Use `check_call` instead of `check_output`; we don't care about
capturing the output and instead want it to go to stdout so test
failures can be debugged.
* Don't use `shell=True`; it isn't needed here.
* Pipe through the test case filter so that it can be used with
`--test.run` to limit the Go integration tests run, as sketched below.
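A minimal sketch of that filter passthrough; the command line mirrors the one in the traceback earlier, while the function name and plumbing are illustrative:

```python
import subprocess


def run_go_tests(test_case_filter=None):
    cmd = ["go", "test", "-tags", "integration", "-count=1", "-race",
           "./test/integration"]
    if test_case_filter:
        # Forward the filter so only matching Go integration tests run.
        cmd.append("--test.run=" + test_case_filter)
    # check_call, no shell=True: output streams straight to stdout, which
    # makes failures easy to debug.
    subprocess.check_call(cmd, shell=False, stderr=subprocess.STDOUT)
```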
This test adds support in ct-test-srv for rejecting precertificates by
hostname, in order to artificially generate a condition where a
precertificate is issued but no final certificate can be issued. Right
now the final check in the test is temporarily disabled until the
feature is fixed.
Also, as our first Go-based integration test, this pulls in the
eggsampler/acme Go client, and adds some support in integration-test.py.
This also refactors ct-test-srv slightly to use a ServeMux, and fixes
a couple of cases of not returning immediately on error.
This also removes some awkward dancing we did in integration-test.py to
run setup_twenty_days_ago under the opposite config of whatever we were
about to run tests under.
Reverts most of #4288 and #4290.
To make this work, I changed the twenty_days_ago setup to use
`config-next` when the main test phase is running `config`. That, in
turn, made the recheck_caa test fail, so I added a tweak to that.
I also moved the authzv2 migrations into `db`. Without that change,
the integration test would fail during the twenty_days_ago setup because
Boulder would attempt to create authzv2 objects but the table wouldn't
exist yet.
The ocsp-updater ocspStaleMaxAge config var has to be bumped up to ~7 months so that when it is run after the six-months-ago run it will actually update the ocsp responses generated during that period and mark the certificate status row as expired.
Fixes #4338.
This reverts commit 796a7aa2f4.
People's tests have been breaking on `docker-compose up` with the following output:
```
ImportError: No module named requests
```
Fixes #4322
* integration: move to Python3
- Add parentheses to all print and raise calls.
- Python3 distinguishes bytes from strings. Add encode() and
decode() calls as needed to provide the correct type.
- Use the requests library consistently (urllib2 is not available in Python3).
- Remove shebang from Python files without a main, and update
shebang for integration-test.py.
The three new cases separately test:
- Rechecking CAA during authz reuse.
- Successful issuance for a positive CAA record
- Rejected issuance for a negative CAA record
- The various CAA extensions from https://tools.ietf.org/html/draft-ietf-acme-caa-06
Importantly, this also switches `recheck.good-caa-reserved.com` to use a
dynamically generated random name. This should fix the problem where
running integration tests locally several times resulted in hitting an
exact match rate limit error, requiring a clear of the fqdnSets table.
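For illustration, one way to build such a dynamic name (the label format and helper are assumptions, not the actual test code):

```python
import random


def random_recheck_domain():
    # A fresh random label per run keeps repeated local test runs from hitting
    # the exact-match rate limit backed by the fqdnSets table.
    return "rand-%08x.recheck.good-caa-reserved.com" % random.randrange(2 ** 32)
```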
This also moves the creation of the client for test_recheck_caa into its
own early-setup function, so there is less test-case-specific setup in
integration-test.py.
When NewAuthorizationSchema is enabled, we still want v1 authzs to be reusable in
new orders. This tests that the reuse code path is implemented correctly.
Updates #4241
These two setup phases were only used by `test_expired_authz_404`,
which is adequately covered by unittests. Since each setup and teardown
is rather time consuming, this speeds up and simplifies integration
tests.
Before: 5m10
After: 4m46
Move from using `requests` to `urllib2` in `helpers.py`. Verified
this works with `docker-compose up`. In the future we really should
be installing our own python dependencies in the boulder-tools image
rather than relying on getting them by using the certbot virtualenv.
* Switch to instant OCSP verification in integration tests
* Move waitport to helpers and use it to determine if ocsp-responder is
alive in test_single_ocsp (sketched below)
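A rough sketch of what a waitport-style helper can look like (the real helper in helpers.py may differ in signature, timeout, and error handling):

```python
import socket
import time


def waitport(port, prog, timeout=30):
    # Poll until something is listening on localhost:port, so tests like
    # test_single_ocsp only query ocsp-responder once it is actually up.
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            with socket.create_connection(("localhost", port), timeout=1):
                return True
        except OSError:
            time.sleep(0.25)
    raise Exception("timed out waiting for %s to listen on port %d" % (prog, port))
```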
As part of #4241, I need to introduce some twenty-days-ago setup. So I refactored the
only current instance (test_caa) to use a style where setup functions can be registered right
next to the test cases they affect. The @register_twenty_days_ago is Python for
"call register_twenty_days_ago with the thing on the next line as an argument."
I also cleaned up a bunch of related stuff:
* Removed the ACCOUNT_URI environment variable and associated function params.
This was introduced in #3736 to pass a URI to challtestsrv before we refactored for
more dynamic updates. It's not used any more.
* Removed a try / except from startChallSrv that needlessly hid errors.
* Move setting of DNS fixtures for caa_test into the test case itself.
Enables integration tests for authz2 and fixes a few bugs that were flagged up during the process. Disables expired-authorization-purger integration tests if config-next is being used, as expired-authz-purger expects to purge some stuff but doesn't know about authz2 authorizations; a new test will be added with #4188.
Fixes #4079.
Without this change running a single integration tests with
`INT_SKIP_SETUP` like so:
```
docker-compose run --use-aliases -e INT_FILTER="test_http_multiva_threshold_pass" -e INT_SKIP_SETUP=true -e RUN="integration" boulder ./test.sh;
```
Produces an error like:
```
+ python2 test/integration-test.py --chisel --load --filter test_http_multiva_threshold_pass --skip-setup
Traceback (most recent call last):
File "test/integration-test.py", line 309, in <module>
main()
File "test/integration-test.py", line 217, in main
caa_account_uri = caa_client.account.uri if caa_client is not None else None
UnboundLocalError: local variable 'caa_client' referenced before assignment
```
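One way to avoid that UnboundLocalError is to define `caa_client` before the conditional setup; a hedged sketch (the helper `setup_caa_client` is hypothetical), not necessarily the exact fix that landed:

```python
def setup_caa_client():
    """Hypothetical helper that registers an ACME client for the CAA tests."""


def main(skip_setup=False):
    caa_client = None  # defined up front so the later reference is never unbound
    if not skip_setup:
        caa_client = setup_caa_client()
    # Safe even when setup is skipped: falls back to None instead of raising
    # UnboundLocalError.
    caa_account_uri = caa_client.account.uri if caa_client is not None else None
    return caa_account_uri
```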
This makes it a little clearer which bits are test setup helpers, and which
bits are actual test cases. It may also make it a little easier to see which cases
from the v1 tests also need a v2 test case.
Fixes #4126