Now that Pebble has a `pebble-challtestsrv` we can remove the `challtestsrv`
package and its associated command from Boulder. I switched CI to use
`pebble-challtestsrv`. Notably this means that we have to add our expected mock
data using the HTTP management interface. The Boulder-tools images are
regenerated to include the `pebble-challtestsrv` command.
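For illustration, here is a minimal Go sketch of seeding mock DNS data over that HTTP management interface; the `:8055` port and `/set-txt` endpoint are the `pebble-challtestsrv` defaults as I recall them, so treat them as assumptions:
```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
)

// setTXT seeds a mock DNS-01 TXT record via the challenge test server's
// HTTP management interface. Port and endpoint are assumed defaults.
func setTXT(host, value string) error {
	body, err := json.Marshal(map[string]string{"host": host, "value": value})
	if err != nil {
		return err
	}
	resp, err := http.Post("http://localhost:8055/set-txt", "application/json", bytes.NewReader(body))
	if err != nil {
		return err
	}
	return resp.Body.Close()
}

func main() {
	if err := setTXT("_acme-challenge.example.com.", "f00f"); err != nil {
		log.Fatal(err)
	}
}
```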
Using this approach also allows separating the TLS-ALPN-01 and HTTPS HTTP-01
challenges by binding each challenge type in the `pebble-challtestsrv` to
a different interface, with both interfaces using the same VA HTTPS port. Mock
DNS directs the VA to the correct interface.
The load-generator command that previously used the `challtestsrv` package
from Boulder is updated to use a vendored copy of the new
`github.com/letsencrypt/challtestsrv` package.
Vendored dependencies change in two ways:
1) gomock is updated to the latest release (matching what the Boulder-tools
image provides)
2) A couple of new subpackages in `golang.org/x/net/` are added by way of
transitive dependency through the challtestsrv package.
Unit tests are confirmed to pass for `gomock`:
```
~/go/src/github.com/golang/mock/gomock$ git log --pretty=format:'%h' -n 1
51421b9
~/go/src/github.com/golang/mock/gomock$ go test ./...
ok github.com/golang/mock/gomock 0.002s
? github.com/golang/mock/gomock/internal/mock_matcher [no test files]
```
For `/x/net`, all tests pass except two `/x/net/icmp` `TestDiag.go` test cases
that we have agreed are OK to ignore.
Resolves https://github.com/letsencrypt/boulder/issues/3962 and
https://github.com/letsencrypt/boulder/issues/3951
Resolves https://github.com/letsencrypt/boulder/issues/3872
**Note to reviewers**: There's an outstanding bug that I've tracked down to the `--load` stage of the integration tests, which causes one of the remote VA instances in the `test/config-next` configuration under Go 1.11.1 to fail to shut down cleanly. I'm working on finding the root cause, but in the meantime I've disabled `--load` during CI so we can unblock moving forward with getting Go 1.11.1 into dev/CI. Tracking this in https://github.com/letsencrypt/boulder/issues/3889
Since these two containers were using the same entrypoint.sh, they were
competing to run migrations and bind ports when run with `docker-compose
up`. Since we don't need the netaccess container when doing
`docker-compose up`, give it a separate entrypoint that exits
immediately by default, but does the normal migrations when run with
`docker-compose run`.
We'd like to start using the DNS load balancer in the latest version of gRPC. That means putting all IPs for a service under a single hostname (or using a SRV record, but we're not taking that path). This change adds an sd-test-srv to act as our service discovery DNS service. It returns both Boulder IP addresses for any A lookup ending in ".boulder". This change also sets up the Docker DNS for our boulder container to defer to sd-test-srv when it doesn't know an answer.
sd-test-srv doesn't know how to resolve public Internet names like `github.com`. Resolving public names is required for the `godep-restore` test phase, so this change breaks out a copy of the boulder container that is used only for `godep-restore`.
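As a sketch of what sd-test-srv does (not its actual code), a minimal responder using the miekg/dns library might look like the following; the two answer IPs are illustrative stand-ins for the Boulder container's addresses:
```go
package main

import (
	"log"
	"net"
	"strings"

	"github.com/miekg/dns"
)

func main() {
	// Answer every A query for *.boulder with both Boulder IPs, so the
	// DNS load balancer sees two backends under one hostname.
	dns.HandleFunc(".", func(w dns.ResponseWriter, r *dns.Msg) {
		m := new(dns.Msg)
		m.SetReply(r)
		q := r.Question[0]
		if q.Qtype == dns.TypeA && strings.HasSuffix(q.Name, ".boulder.") {
			for _, ip := range []string{"10.77.77.77", "10.88.88.88"} {
				m.Answer = append(m.Answer, &dns.A{
					Hdr: dns.RR_Header{Name: q.Name, Rrtype: dns.TypeA, Class: dns.ClassINET, Ttl: 0},
					A:   net.ParseIP(ip),
				})
			}
		}
		_ = w.WriteMsg(m)
	})
	log.Fatal(dns.ListenAndServe(":8053", "udp", nil))
}
```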
This change implements a shim of a DNS resolver for gRPC, so that we can switch to DNS-based load balancing with the currently vendored gRPC, then when we upgrade to the latest gRPC we won't need a simultaneous config update.
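A minimal sketch of such a shim against the deprecated `google.golang.org/grpc/naming` API (assuming that is the interface the vendored gRPC exposes; Boulder's actual shim differs in detail):
```go
package dnsresolver

import (
	"net"

	"google.golang.org/grpc/naming"
)

// resolver maps one service hostname to all of its A records, so the
// gRPC balancer sees every backend listening on the same port.
type resolver struct{ port string }

func New(port string) naming.Resolver { return &resolver{port: port} }

func (r *resolver) Resolve(target string) (naming.Watcher, error) {
	ips, err := net.LookupHost(target)
	if err != nil {
		return nil, err
	}
	updates := make([]*naming.Update, 0, len(ips))
	for _, ip := range ips {
		updates = append(updates, &naming.Update{Op: naming.Add, Addr: net.JoinHostPort(ip, r.port)})
	}
	ch := make(chan []*naming.Update, 1)
	ch <- updates
	return &watcher{updates: ch}, nil
}

// watcher delivers the initial address set once, then blocks; a real
// implementation would re-resolve and emit Add/Delete diffs.
type watcher struct{ updates chan []*naming.Update }

func (w *watcher) Next() ([]*naming.Update, error) { return <-w.updates, nil }
func (w *watcher) Close()                          {}
```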
Also, this change introduces a check at the end of the integration test that each backend received at least one RPC, ensuring that we are not sending all load to a single backend.
Production/staging have been updated to a release built with Go 1.10.2.
This allows us to remove the Go 1.10.1 builds from the travis matrix and
default to Go 1.10.2.
Merging #3693 broke Certbot's integration tests. I'm working with them on a PR that will make the tests work against both a published and unpublished Boulder container. But the experience made me realize a lot of clients are running Boulder for integration tests, and we shouldn't break them without warning. This republishes the necessary ports, while leaving the unnecessary (debugging) ports unpublished.
Originally we published these ports mainly to make it easier to access Boulder
from the host when running things in a container locally. However, with the
recent changes to the Docker network, Boulder is now at a fixed, predictable IP
address on the Docker network that can be reached from the host, so we no longer
need these published ports.
This is a nice change because people running a Boulder test environment may not
want to open up ports on their public interfaces.
We're currently stuck on gRPC v1.1 because of a breaking change to certificate validation in gRPC 1.8. Our gRPC balancer uses a static list of multiple hostnames, and expects to validate against those hostnames. However gRPC expects that a service is one hostname, with multiple IP addresses, and validates all those IP addresses against the same hostname. See grpc/grpc-go#2012.
If we follow gRPC's assumptions, we can rip out our custom Balancer and custom TransportCredentials, and will probably have a lower-friction time in general.
This PR is the first step in doing so. In order to satisfy the "multiple IPs, one port" property of gRPC backends in our Docker container infrastructure, we switch to Docker's user-defined networking. This allows us to give the Boulder container multiple IP addresses on different local networks, and gives it different DNS aliases in each network.
In startservers.py, each shard of a service listens on a different DNS alias for that service, and therefore a different IP address. The listening port for each shard of a service is now identical.
This change also updates the gRPC service certificates. Now, each certificate that is used in a gRPC service (as opposed to something that is "only" a client) has three names. For instance, sa1.boulder, sa2.boulder, and sa.boulder (the generic service name). For now, we are validating against the specific hostnames. When we update our gRPC dependency, we will begin validating against the generic service name.
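To make the eventual switch concrete, a client validating against the generic service name could look roughly like this; the CA path and port are illustrative, not Boulder's actual values:
```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

func main() {
	// Trust the CA that issued the gRPC service certificates.
	caPEM, err := ioutil.ReadFile("test/grpc-creds/ca.pem") // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	// Every sa1.boulder/sa2.boulder backend presents a certificate that
	// also contains the generic name, so one ServerName matches them all.
	creds := credentials.NewTLS(&tls.Config{
		ServerName: "sa.boulder",
		RootCAs:    pool,
	})
	conn, err := grpc.Dial("sa.boulder:9095", grpc.WithTransportCredentials(creds))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
}
```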
Incidentally, the DNS aliases feature of Docker allows us to get rid of some hackery in entrypoint.sh that inserted entries into /etc/hosts.
Note: Boulder now has a dependency on the DNS aliases feature in Docker. By default, docker-compose run creates a temporary container and doesn't assign any aliases to it. We now need to specify docker-compose run --use-aliases to get the correct behavior. Without --use-aliases, Boulder won't be able to resolve the hostnames it wants to bind to.
This PR adds Go 1.10.2 to the build matrix along with Go 1.10.1.
After staging/prod have been updated to Go 1.10.2 we can remove Go 1.10.1.
Resolves #3680
We have updated staging/prod Boulder builds to use Go 1.10.1. This means
we no longer need to support Go 1.10.0 in dev docker images, CI, and our
image building tools.
This PR updates the `test/boulder-tools/tag_and_upload.sh` script to template a `Dockerfile` for building multiple copies of `boulder-tools`: one per supported Go version. Unfortunately this is required because only Docker 17+ supports an env var in a Dockerfile `FROM`. It's best if we can stay on package-manager-installed versions of Docker, which precludes 17+ 😞.
The `docker-compose.yml` is updated to version "3" to allow specifying a `GO_VERSION` env var in the respective Boulder `image` directives. This requires `docker-compose` version 1.10.0+, which in turn requires Docker engine version 1.13.0+. The README is updated to reflect these new requirements. This Docker engine version is commonly available in package managers (e.g. Ubuntu 16.04). A sufficient `docker-compose` version is not, but it is a simple one-binary Go application that is easy to update outside of package managers.
The `.travis.yml` config file is updated to set the `GO_VERSION` in the build matrix, allowing build tasks for different Go versions. Since the `docker-compose.yml` now requires `docker-compose` 1.10.0+ the
`.travis.yml` also gains a new `before_install` for setting up a modern `docker-compose` version.
Lastly, tools and images are updated to support both Go 1.10 (our current Go version) and Go 1.10.1 (the new point release). By default Go 1.10 is used; we can switch once staging/prod are updated.
_*TODO*: One thing I haven't implemented yet is a `sed` expression in `tag_and_upload.sh` that updates both `image` lines in `docker-compose.yml` with an up-to-date tag. Putting this up for review while I work on that last creature comfort._
Resolves https://github.com/letsencrypt/boulder/issues/3551
Replaces https://github.com/letsencrypt/boulder/pull/3620 (GH got stuck from a yaml error)
In #3584 we mounted the package path in an attempt to speed things up,
but that wasn't actually effective. The latest Go has a $GOCACHE which
defaults to $HOME/.cache/go-build, and stores cached build artifacts.
This change mounts that path to a hidden directory on the host. It also
removes the volume mounts for /tmp/ and $GOPATH/pkg, which weren't doing
anything.
I've verified this speeds up the startup for RUN=integration on my
machine from several seconds to roughly zero seconds on repeated runs.
This pulls in an updated Certbot repo with a fix to content-type for the
revoke method.
Also, this adds a convenience replacement in tag_and_release.sh to update
docker-compose.yml.
This change updates boulder-tools to use Go 1.10, and references a
newly-pushed image built using that new config.
Since boulder-tools pulls in the latest Certbot master at the time of
build, this also pulls in the latest changes to Certbot's acme module,
which now supports ACME v2. This means we no longer have to check out
the special acme-v2-integration branch in our integration tests.
This also updates chisel2.py to reflect some of the API changes that
landed in the acme module as it was merged to master.
Since we don't need additional checkouts to get the ACMEv2-compatible
version of the acme module, we can include it in the default RUN set for
local tests.
We have a Dockerfile that builds off of boulder-tools, but does
basically nothing. Removing it saves a build step and avoids some
hard-to-diagnose edge cases.
This change triggered an issue with the "prom" container and
"docker-compose up", saying that prom can't be linked to a not-running
container. Since we rarely use prom in dev/test, I'm removing it for
now. The nice, futuristic way to get containers to talk to each other is
with docker networking, so we can bring it back with a switch to docker
networking in the future if we want.
Pin the version of Prometheus to `1.8.2`, because the Prometheus folks have just released v2 and switched the `:latest` tag to it.
Using Prometheus v2 produces an error:
```
prometheus: error: unknown short flag '-c'
```
This PR is the initial duplication of the WFE to create a WFE2
package. The rationale is briefly explained in `wfe2/README.md`.
Per #2822 this PR only lays the groundwork for further customization
and deduplication. Presently both the WFE and WFE2 are identical except
for the following configuration differences:
* The WFE offers HTTP and HTTPS on 4000 and 4430 respectively; the WFE2
offers HTTP and HTTPS on 4001 and 4431.
* The WFE has a debug port on 8000; the WFE2 uses the next free "8000-range"
port and puts its debug service on 8013.
Resolves https://github.com/letsencrypt/boulder/issues/2822
After talking to @jsha, this updates Boulder's docker-compose.yml to version 2. I'm currently working on moving some Certbot tests from EC2 to Docker and this allows me to take advantage of networking features like embedded DNS which is used by default in newer versions of Docker Compose.
This shouldn't change any behavior of the file. One notable thing is that I had to add `network_mode: bridge` to the bhsm service. I don't believe this is a change in behavior, though, since bhsm was included in the links section for boulder.
Deletes github.com/streadway/amqp and the various RabbitMQ setup tools etc. Changes how listenbuddy is used to proxy all of the gRPC client -> server connections so we test reconnection logic.
+49, -8,221 😁 Fixes #2640 and #2562.
This was accidentally changed in
https://github.com/letsencrypt/boulder/pull/2634/, and broke Certbot's tests.
This also includes an update to chisel that fetches the certificate chain, which
would have caught this error.
Updates the various gRPC/protobuf libs (google.golang.org/grpc/... and github.com/golang/protobuf/proto) and the boulder-tools image so that we can update to the newest github.com/grpc-ecosystem/go-grpc-prometheus. Also regenerates all of the protobuf definition files.
Tests run on updated packages all pass.
Unblocks #2633, fixes #2636.
This allows us to iterate more easily against the current acme module.
Also, remove nodejs from boulder-tools, clean up a few packages that weren't
previously cleaned up, and install a specific version of protoc-gen-go to match
our vendored grpc.
In order to provide the correct issuer certificate for older certificates after an issuer certificate rollover, or when using multiple issuer certificates (e.g. RSA and ECDSA), use the AIA CA Issuers URL embedded in the certificate for the rel="up" link served by the WFE. This behaviour is gated behind the UseAIAIssuerURL feature, which defaults to false.
To prevent MitM vulnerabilities in cases where the AIA URL is HTTP-only, it is upgraded to HTTPS.
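A minimal sketch of that selection-and-upgrade logic (not the WFE's actual code):
```go
package wfe

import (
	"crypto/x509"
	"errors"
	"strings"
)

// issuerURL returns the certificate's first AIA CA Issuers URL, upgrading
// plain HTTP to HTTPS so the rel="up" link can't be trivially MitM'd.
func issuerURL(cert *x509.Certificate) (string, error) {
	if len(cert.IssuingCertificateURL) == 0 {
		return "", errors.New("certificate has no AIA CA Issuers URL")
	}
	url := cert.IssuingCertificateURL[0]
	if strings.HasPrefix(url, "http://") {
		url = "https://" + strings.TrimPrefix(url, "http://")
	}
	return url, nil
}
```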
This also adds a test for the issuer URL returned by the /acme/cert endpoint. wfe/test/178.{crt,key} were regenerated to add the AIA extension required to pass the test.
/acme/cert was changed to return an absolute URL to the issuer endpoint (making it consistent with /acme/new-cert).
Fixes #1663
Based on #1780
This PR has three primary contributions:
1. The existing code for using the V4 safe browsing API introduced in #2446 had some bugs that are fixed in this PR.
2. A gsb-test-srv is added to provide a mock Google Safebrowsing V4 server for integration testing purposes.
3. A short integration test is added to test end-to-end GSB lookup for an "unsafe" domain.
For 1), most notably Boulder was assuming the new V4 library accepted a directory for its database persistence, when it instead expects an existing file to be provided. Additionally, the VA wasn't properly instantiating feature flags, preventing the V4 API from being used by the VA.
For 2), the test server is designed to have a fixed set of "bad" domains (currently just honest.achmeds.discount.hosting.com). When asked for a database update by a client, it packages up the list of bad domains and sends them to the client. When the client is asked to do a URL lookup, it checks the local database for a matching prefix and, if one is found, performs a lookup against the test server. The test server processes the lookup and increments a count of how many times the bad domain was asked about.
For 3), the Boulder startservers.py is updated to start the gsb-test-srv, and the VA is configured to talk to it using the V4 API. The integration test consists of attempting issuance for a domain pre-configured in the gsb-test-srv as a bad domain. If the issuance succeeds, we know the GSB lookup code is faulty. If the issuance fails, we check that the gsb-test-srv received the correct number of lookups for the "bad" domain and fail if the actual count doesn't match the expected one.
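For a concrete sense of the client-side flow described in 2), here is a hedged sketch of the prefix check (URL canonicalization elided; this is not the real library's code):
```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// needsLookup reports whether a URL's 4-byte SHA-256 prefix matches the
// locally stored database, i.e. whether a full server lookup is required.
func needsLookup(prefixes map[[4]byte]bool, url string) bool {
	sum := sha256.Sum256([]byte(url))
	var prefix [4]byte
	copy(prefix[:], sum[:4])
	return prefixes[prefix]
}

func main() {
	// Simulate a database update that delivered one bad-domain prefix.
	bad := sha256.Sum256([]byte("honest.achmeds.discount.hosting.com/"))
	var p [4]byte
	copy(p[:], bad[:4])
	db := map[[4]byte]bool{p: true}

	fmt.Println(needsLookup(db, "honest.achmeds.discount.hosting.com/")) // true: do a lookup
	fmt.Println(needsLookup(db, "example.com/"))                         // false: lookup skipped
}
```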
Notes for reviewers:
* The gsb-test-srv has to be started before anything will use it. Right now the v4 library handles database update request failures poorly and will not retry for 30min. See google/safebrowsing#44 for more information.
* There's not an easy way to test for "good" domain lookups, only hits against the list. The design of the V4 API is such that a list of prefixes is delivered to the client in the db update phase, and if the domain in question matches no prefixes then the lookup is deemed unnecessary and not performed. I experimented with sending 256 one-byte prefixes to try to trick the client into always doing a lookup, but the minimum prefix size is 4 bytes and enumerating all possible prefixes seemed gross.
* The test server has a /add endpoint that could be used by integration tests to add new domains to the block list, but it isn't being used presently. The trouble is that the client only updates its database every 30 minutes at present, so adding a new domain only takes effect after the client updates its database.
Resolves #2448
Protobuf files need to be regenerated because (I think) Golang 1.7.3 uses a somewhat different method of ordering fields in a struct when marshaling to bytes.
* Switch back to go 1.5 in Travis.
* Add back GO15VENDOREXPERIMENT.
* Add GO15VENDOREXPERIMENT to Dockerfile
* Revert FAKE_DNS change.
* Revert "Properly close test servers (#2110)"
* Revert "Close VA HTTP test servers (#2111)"
* Change Godep version to 1.5.
* Standardize on issue number
Updates #1699.
Adds a new package, `features`, which exposes methods to set and check if various internal features are enabled. The implementation uses global state to store the features so that services embedded in another service do not each require their own features map in order to check if something is enabled.
Requires a `boulder-tools` image update to include `golang.org/x/tools/cmd/stringer`.
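A minimal sketch of the shape of such a package, assuming string-keyed flags (the real package uses typed flags with a generated stringer, hence the requirement above):
```go
// Package features is a sketch of a process-wide feature-flag store.
package features

import "sync"

var (
	mu      sync.RWMutex
	enabled = map[string]bool{}
)

// Set merges the given flags into the global feature map, so embedded
// services share one source of truth.
func Set(flags map[string]bool) {
	mu.Lock()
	defer mu.Unlock()
	for name, v := range flags {
		enabled[name] = v
	}
}

// Enabled reports whether the named feature is turned on.
func Enabled(name string) bool {
	mu.RLock()
	defer mu.RUnlock()
	return enabled[name]
}
```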
Output base64-encoded DER, as expected by ocsp-responder.
Use flags instead of template for Status, ThisUpdate, NextUpdate.
Provide better help.
Remove old test (wasn't run automatically).
Add it to integration test, and use its output for integration test of issuer ocsp-responder.
Add another slot to boulder-tools HSM image, to store root key.
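As an aside, producing a response in that format with golang.org/x/crypto/ocsp looks roughly like this (names illustrative, key/cert loading elided):
```go
package ocsputil

import (
	"crypto"
	"crypto/x509"
	"encoding/base64"
	"math/big"
	"time"

	"golang.org/x/crypto/ocsp"
)

// encodeOCSP signs a "good" OCSP response and returns base64-encoded DER,
// the format the ocsp-responder expects to load.
func encodeOCSP(issuer, responder *x509.Certificate, serial *big.Int, key crypto.Signer) (string, error) {
	der, err := ocsp.CreateResponse(issuer, responder, ocsp.Response{
		Status:       ocsp.Good,
		SerialNumber: serial,
		ThisUpdate:   time.Now(),
		NextUpdate:   time.Now().Add(24 * time.Hour),
	}, key)
	if err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(der), nil
}
```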
Instead of reading the CA key from a file on disk into memory and using that for signing in `boulder-ca` this patch adds a new Docker container that runs SoftHSM and pkcs11-proxy in order to hold the key and perform signing operations. The pkcs11-proxy module is used by `boulder-ca` to talk to the SoftHSM container.
This exercises (almost) the full pkcs11 path through boulder and will allow testing various HSM related failures in the future as well as simplifying tuning signing performance for benchmarking.
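A hedged sketch of talking to that setup through the proxy module with github.com/miekg/pkcs11; the module path and proxy socket address are assumptions, not necessarily what the test config uses:
```go
package main

import (
	"log"
	"os"

	"github.com/miekg/pkcs11"
)

func main() {
	// pkcs11-proxy finds the SoftHSM container via this env var; the
	// address here is an assumption.
	os.Setenv("PKCS11_PROXY_SOCKET", "tcp://boulder-hsm:5657")

	p := pkcs11.New("/usr/local/lib/libpkcs11-proxy.so") // assumed module path
	if err := p.Initialize(); err != nil {
		log.Fatal(err)
	}
	defer p.Destroy()
	defer p.Finalize()

	slots, err := p.GetSlotList(true)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("proxy exposes %d slot(s)", len(slots))
}
```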
Fixes #703.
Commit test/boulder-tools (forgot to include them in #1838).
Mount $GOPATH as a volume in the container so that source code changes take
effect in the container without a rebuild, and build cache can make repeated
runs faster. Install rabbitmq-setup outside of the mounted-over path so it still
exists.
Remove unnecessary entries from Dockerfile's PATH and GOBIN.
https://github.com/letsencrypt/boulder/pull/1842
That change broke the certbot tests because it switched to a MariaDB
10.1-specific syntax. certbot/certbot#3058 changes the certbot tests to use
Boulder's docker-compose.yml, so they will get MariaDB 10.1 automatically.
* MariaDB 10.1
* MariaDB 10.1 in Docker
* Run docker stuff.
* Improve test.js error.
* Lower log level
* Revert dockerfile to master
* Export debug ports, set FAKE_DNS, and remove container_name.
* Remove typo.
* Make integration-test.py wait for debug ports.
* Use 10.1 and export more Boulder ports.
* Test updates for Docker
Listen on 0.0.0.0 for utility servers.
Make integration-test.py just wait for ports rather than calling startservers.
Run docker-compose in test.sh.
Remove bypass when database exists.
Separate mailer test into its own function in integration test.
Print better errors in test.js.
* Always bring up mysql container.
* Wait for MySQL to come up.
* Put it in travis-before-install.
* Use 127
* Remove manual docker-up.
* Add ifconfig
* Switch to docker-compose run
* It works!
* Remove some spurious env vars.
* Add bash
* try running it
* Add all deps.
* Pass through env.
* Install everything in the Dockerfile.
* Fix install of ruby
* More improvements
* Revert integration test to run directly
Also remove .git from dockerignore and add some packages.
* Revert integration-test.py to master.
* Stop ignoring test/js
* Start from boulder-tools.
* Add boulder-tools.
* Tweak travis.yml
* Separate out docker-compose pull as install.
* Build in install phase; don't bother with go install in Dockerfile
* Add virtualenv
* Actually build rabbitmq-setup
* Remove FAKE_DNS
* Trivial change
* Pull boulder-tools as a separate step so it gets its own timing info.
* Install certbot and protobuf from repos.
* Use certbot from Debian backports.
* Fix clone
* Remove CERTBOT_PATH
* Updates
* Go back to letsencrypt for build.sh
* Remove certbot volume.
* go back to preinstalled letsencrypt
* Restore ENV
* Remove BASH_ENV
* Adapt reloader test so it passes when run as root.
* Fixups for review.
* Revert test.js
* Revert startservers.py
* Revert Makefile.
The Docker Compose reference does not mention a key for `name`, and I believe the [correct one is `container_name`](https://docs.docker.com/compose/compose-file/#container-name), which is what this small patch changes.
```
± docker-compose build
ERROR: Validation failed in file './docker-compose.yml', reason(s):
Unsupported config option for bmysql: 'name'
Unsupported config option for brabbitmq: 'name'
± docker-compose -version
docker-compose version 1.6.2, build unknown
```
Make COPY and compilation the last commands in the Dockerfile so that, in the common case, Docker will cache the results of the EXPOSE, WORKDIR and ENV commands. The CMD is eliminated, as entrypoint.sh now defaults to start.py if no arguments are given. The patch eliminates setting MYSQL_CONTAINER in run-docker.sh and docker-compose.yaml, as entrypoint.sh sets the variable on its own when calling create_db.sh.
In addition, the patch passes arguments given to run-docker.sh through to entrypoint.sh in the container. This way, running `./run-docker.sh ./test.sh ...` makes it possible to run tests locally.
Use bridged networking.
Add some files to .dockerignore to shrink the build state sent to Docker
daemon.
Use specific hostnames to contact services, rather than localhost.
Add instructions for adding those hostnames to /etc/hosts in non-Docker config.
Use DSN-style connect strings for DBs.
Remove localhost / 127.0.0.1 rewrite hack from create_db.sh.
Add hosts section with new hostnames.
Remove bin from .dockerignore.
SQL grants go to `%`.
Short-circuit DB creation if the database already exists.
Make `go install` a part of Docker image build so that Docker run is much
faster.
Bind to 0.0.0.0 for OCSP responders so they can be reached from host, and
publish / expose their ports.
Remove ToSServerThread and test.js' fetch of ToS.
Increase the registrationsPerIP rate limit threshold. When issuing from a Docker
host, the 127.0.0.1 override doesn't apply, so the limit is quickly hit.
Update docker-compose for bridged networking. Note: docker-compose doesn't currently work, but should be close.
https://github.com/letsencrypt/boulder/pull/1639
This ensures that services in mariadb and rabbitmq containers bind only to 127.0.0.1, not all interfaces. With host networking that would expose the test services outside the host.
Fixes#1594
- Separated RabbitMQ into its own container
- cleaned up various Dockerfile-isms
- updated routes to the linked containers
- removed nodejs; I have not been able to figure out why it was being installed
(so it could be something that is actually needed)
To setup a dev environment:
You now need `docker-compose`, but running the setup with all the
configurations is as simple as:
```
$ docker-compose build
$ docker-compose up
```
Then you can even run the `test.sh` in the container with:
```
$ docker exec -it boulder_boulder_1 bash
root@container $ ./test.sh
```
This is just an _initial_ pass at refactoring a bunch of this. There is
a bunch more I want to change and make better.
Also, with regard to the database migration taking a while: I want to try moving
the goose stuff over to the mariadb container. There are just some less savory
things I don't like about starting the db in the background and then running the
migration script :/; I like to attach to the process on container start. I do
have some thoughts on a `docker exec` command in the mariadb container which
migrates the db... but I'm trying to think of something better.
Signed-off-by: Jessica Frazelle <acidburn@docker.com>