Commit Graph

7 Commits

Samantha 576b6777b5
grpc: Implement a static multiple IP address gRPC resolver (#6270)
- Implement a static resolver for the gRPC dialer under the scheme `static:///`
  which allows the dialer to resolve a backend from a static list of IPv4/IPv6
  addresses passed via the existing JSON config (see the sketch below).
- Add config key `serverAddresses` to the `GRPCClientConfig` which, when
  populated, enables static IP resolution of gRPC server backends.
- Set `config-next` to use static gRPC backend resolution for all SA clients.
- Generate a new SA certificate which adds `10.77.77.77` and `10.88.88.88` to
  the SANs.
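
Roughly, such a resolver plugs into grpc-go's `resolver.Builder` interface and hands the dialer a fixed address list rather than querying DNS. A minimal sketch (the package layout, names, and example port are illustrative, not Boulder's actual code):

```go
package example

import "google.golang.org/grpc/resolver"

// staticBuilder builds resolvers that hand gRPC a fixed list of backend
// addresses instead of querying DNS.
type staticBuilder struct {
	addrs []string // e.g. {"10.77.77.77:9095", "10.88.88.88:9095"}
}

// Scheme makes dial targets like "static:///sa" route to this builder.
func (b *staticBuilder) Scheme() string { return "static" }

// Build pushes the fixed address list to the ClientConn once; the list
// never changes, so the returned Resolver is a no-op.
func (b *staticBuilder) Build(_ resolver.Target, cc resolver.ClientConn, _ resolver.BuildOptions) (resolver.Resolver, error) {
	var addrs []resolver.Address
	for _, a := range b.addrs {
		addrs = append(addrs, resolver.Address{Addr: a})
	}
	if err := cc.UpdateState(resolver.State{Addresses: addrs}); err != nil {
		return nil, err
	}
	return nopResolver{}, nil
}

type nopResolver struct{}

func (nopResolver) ResolveNow(resolver.ResolveNowOptions) {}
func (nopResolver) Close()                                {}

// RegisterStatic would be called with the addresses from the JSON
// config's serverAddresses key before dialing "static:///..." targets.
func RegisterStatic(serverAddresses []string) {
	resolver.Register(&staticBuilder{addrs: serverAddresses})
}
```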

Resolves #6255
2022-08-05 10:20:57 -07:00
Jacob Hoffman-Andrews c1d221abe6
Add Redis to Boulder's docker-compose (#5747)
This gets us ready to add writing to Redis from ocsp-updater. The Go
redis client requires different configuration for cluster operation
than non-cluster, so we need to simulate a cluster in our integration
environment. Cluster operation requires a manual initialization step,
which you can do like so:

```
docker-compose up -d bredis
docker-compose exec bredis bash
/test/redis-create.sh
```
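
For context, the cluster-versus-standalone configuration difference mentioned above looks roughly like this with the go-redis client (a sketch; the addresses, user, and password are placeholders, not values from redis.config):

```go
package example

import "github.com/go-redis/redis/v8"

// newStandalone and newCluster show the two configuration shapes: a
// single-node client takes one address, while the cluster client takes
// a seed list and discovers the rest of the topology itself.
func newStandalone() *redis.Client {
	return redis.NewClient(&redis.Options{
		Addr:     "127.0.0.1:6379",       // placeholder address
		Username: "redis-user",           // placeholder ACL user
		Password: "placeholder-password", // placeholder password
	})
}

func newCluster() *redis.ClusterClient {
	return redis.NewClusterClient(&redis.ClusterOptions{
		Addrs:    []string{"127.0.0.1:6379", "127.0.0.1:6380"},
		Username: "redis-user",
		Password: "placeholder-password",
	})
}
```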

I still need to figure out how to make that happen automatically during
integration tests and when you run docker-compose up.

The hex values in redis.config are randomly generated passwords for the
different users.

Fixes #5723
2021-10-28 10:36:11 -07:00
Roland Bracewell Shoemaker 7673f02803
Use cmd/ceremony in integration tests (#4832)
This ended up taking a lot more work than I expected. To make the implementation more robust, a bunch of stuff we previously relied on has been ripped out to reduce unnecessary complexity (I think I insisted on a bunch of this in the first place, so I'm glad I can kill it now).

In particular this change:

* Removes bhsm and pkcs11-proxy: softhsm and pkcs11-proxy don't play well together, and any softhsm manipulation would need to happen on bhsm, then require a restart of pkcs11-proxy to pull in the on-disk changes. This makes manipulating softhsm from the boulder container extremely difficult, and because of the need to initialize new slots on each run (described below) we need direct access to the softhsm2 tools, since pkcs11-tool cannot do slot initialization operations over the wire. I originally argued for bhsm as a way to mimic a network-attached HSM, mainly so that we could do network level fault testing. In reality we've never actually done this, and the extra complexity is not really realistic for a handful of reasons. It seems better to just rip it out and operate directly on a local softhsm instance (the other option would be to use pkcs11-proxy locally, but this still would require manually restarting the proxy whenever softhsm2-util was used, and wouldn't really offer any realistic benefit).
* Initializes the softhsm slots on each integration test run, rather than when creating the docker image (this is necessary to prevent churn in test/cert-ceremonies/generate.go, which would need to be updated to reflect the new slot IDs each time a new boulder-tools image was created, since slot IDs are randomly generated; a sketch of this step follows the list)
* Installs softhsm from source so that we can use a more up to date version (2.5.0 vs. 2.2.0 which is in the debian repo)
* Generates the root and intermediate private keys in softhsm and writes out the root and intermediate public keys to /tmp for use in integration tests (the existing test-{ca,root} certs are kept in test/ because they are used in a whole bunch of unit tests. At some point these should probably be renamed/moved to be more representative of what they are used for, but that is left for a follow-up in order to keep the churn in this PR as related to the ceremony work as possible)
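
A sketch of what that per-run slot initialization can look like, assuming softhsm2-util's `--init-token` interface and its typical output format (illustrative only, not necessarily what test/cert-ceremonies/generate.go does):

```go
package example

import (
	"fmt"
	"os/exec"
	"regexp"
)

// initToken creates a fresh SoftHSM token in the first free slot and
// returns the randomly assigned slot ID reported by softhsm2-util.
func initToken(label, pin, soPin string) (string, error) {
	out, err := exec.Command("softhsm2-util",
		"--init-token", "--free",
		"--label", label,
		"--pin", pin,
		"--so-pin", soPin,
	).CombinedOutput()
	if err != nil {
		return "", fmt.Errorf("initializing token %q: %v: %s", label, err, out)
	}
	// softhsm2-util prints the slot it picked, e.g. "... is initialized
	// and is reassigned to slot 123456789"; pull that number out so the
	// ceremony configs can be templated with it.
	m := regexp.MustCompile(`to slot (\d+)`).FindSubmatch(out)
	if m == nil {
		return "", fmt.Errorf("no slot ID found in softhsm2-util output: %s", out)
	}
	return string(m[1]), nil
}
```
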
Another follow-up item here is that we should really be zeroing out the database at the start of each integration test run, since certain things like certificates and OCSP responses will be signed by a key/issuer that is no longer in use/doesn't match the current key/issuer.

Fixes #4832.
2020-06-03 15:20:23 -07:00
alexzorin 93cb918ce4
wfe: implement alternate certificate chains (#4714)
Closes #4567.

Enabled in `config-next`.

This PR cross-signs the existing issuers (`test-ca-cross.pem`, `test-ca2-cross.pem`) with a new root (`test-root2.key`, `test-root2.pem` = *c2ckling cryptogr2pher f2ke ROOT*).

The cross-signed issuers are referenced in wfe2's configuration, beside the existing `certificateChains` key:

```json
    "certificateChains": {
      "http://boulder:4430/acme/issuer-cert": [ "test/test-ca2.pem" ],
      "http://127.0.0.1:4000/acme/issuer-cert": [ "test/test-ca2.pem" ]
    },
    "alternateCertificateChains": {
      "http://boulder:4430/acme/issuer-cert": [ "test/test-ca2-cross.pem" ],
      "http://127.0.0.1:4000/acme/issuer-cert": [ "test/test-ca2-cross.pem" ]
    },
```

When this key is populated, the WFE will send links for all alternate certificate chains available for the current end-entity certificate (except for the chain sent in the current response):

    Link: <http://localhost:4001/acme/cert/ff5d3d84e777fc91ae3afb7cbc1d2c7735e0/1>;rel="alternate"

For backwards-compatibility, not specifying a chain is the same as specifying `0`: `/acme/cert/{serial} == /acme/cert/{serial}/0` and `0` always refers to the default certificate chain for that issuer (i.e. the value of `certificateChains[aiaIssuerURL]`).
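
Behaviorally, that amounts to emitting one extra Link header per available chain index, skipping the index being served. A hedged sketch of that behavior (not the WFE's actual code):

```go
package example

import (
	"fmt"
	"net/http"
)

// addAlternateLinks adds one rel="alternate" Link header per available
// chain index, skipping the index used for the current response. The
// URL layout follows the /acme/cert/{serial}/{chain} scheme above.
func addAlternateLinks(w http.ResponseWriter, baseURL, serial string, numChains, serving int) {
	for i := 0; i < numChains; i++ {
		if i == serving {
			continue
		}
		w.Header().Add("Link",
			fmt.Sprintf(`<%s/acme/cert/%s/%d>;rel="alternate"`, baseURL, serial, i))
	}
}
```
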
2020-03-24 12:43:26 -07:00
Maciej Dębski bb9ddb124e Implement TLS-ALPN-01 and integration test for it (#3654)
This implements the newly proposed TLS-ALPN-01 validation method, as described in
https://tools.ietf.org/html/draft-ietf-acme-tls-alpn-01. This challenge type is
disabled except in the config-next tree.
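
For reference, the client side of the validation handshake is a TLS connection offering the "acme-tls/1" ALPN protocol with SNI set to the identifier under validation; a minimal sketch (illustrative only, not Boulder's VA code):

```go
package example

import (
	"crypto/tls"
	"net"
)

// probeALPN connects to host:443 offering the acme-tls/1 ALPN protocol,
// with SNI set to the domain under validation. The returned state holds
// the self-signed validation certificate, which the validator would then
// check for the expected acmeIdentifier extension (not shown here).
func probeALPN(host, domain string) (*tls.ConnectionState, error) {
	conn, err := tls.Dial("tcp", net.JoinHostPort(host, "443"), &tls.Config{
		NextProtos: []string{"acme-tls/1"},
		ServerName: domain,
		// The validation certificate is self-signed; it is verified by
		// inspecting its contents rather than by chain building.
		InsecureSkipVerify: true,
	})
	if err != nil {
		return nil, err
	}
	defer conn.Close()
	state := conn.ConnectionState()
	return &state, nil
}
```
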
2018-06-06 13:04:09 -04:00
Jacob Hoffman-Andrews a4421ae75b Run gRPC backends on multiple IPs instead of multiple ports (#3679)
We're currently stuck on gRPC v1.1 because of a breaking change to certificate validation in gRPC 1.8. Our gRPC balancer uses a static list of multiple hostnames, and expects to validate against those hostnames. However gRPC expects that a service is one hostname, with multiple IP addresses, and validates all those IP addresses against the same hostname. See grpc/grpc-go#2012.

If we follow gRPC's assumptions, we can rip out our custom Balancer and custom TransportCredentials, and will probably have a lower-friction time in general.

This PR is the first step in doing so. In order to satisfy the "multiple IPs, one port" property of gRPC backends in our Docker container infrastructure, we switch to Docker's user-defined networking. This allows us to give the Boulder container multiple IP addresses on different local networks, and gives it different DNS aliases in each network.

In startservers.py, each shard of a service listens on a different DNS alias for that service, and therefore a different IP address. The listening port for each shard of a service is now identical.

This change also updates the gRPC service certificates. Now, each certificate that is used in a gRPC service (as opposed to something that is "only" a client) has three names. For instance, sa1.boulder, sa2.boulder, and sa.boulder (the generic service name). For now, we are validating against the specific hostnames. When we update our gRPC dependency, we will begin validating against the generic service name.
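
Once that dependency update lands, client-side validation against the generic name would look roughly like this (a sketch; the port and variable names are assumptions, not Boulder's actual gRPC setup):

```go
package example

import (
	"crypto/tls"
	"crypto/x509"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials"
)

// dialSA connects to the SA service and validates every backend IP
// against the single generic service name rather than per-shard names.
func dialSA(roots *x509.CertPool, clientCert tls.Certificate) (*grpc.ClientConn, error) {
	creds := credentials.NewTLS(&tls.Config{
		RootCAs:      roots,
		Certificates: []tls.Certificate{clientCert},
		// The generic name: every shard's certificate must include it.
		ServerName: "sa.boulder",
	})
	// "sa.boulder" resolves to multiple IPs, all on the same port.
	return grpc.Dial("sa.boulder:9095", grpc.WithTransportCredentials(creds))
}
```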

Incidentally, the DNS aliases feature of Docker allows us to get rid of some hackery in entrypoint.sh that inserted entries into /etc/hosts.

Note: Boulder now has a dependency on the DNS aliases feature in Docker. By default, docker-compose run creates a temporary container and doesn't assign any aliases to it. We now need to specify docker-compose run --use-aliases to get the correct behavior. Without --use-aliases, Boulder won't be able to resolve the hostnames it wants to bind to.
2018-05-07 10:38:31 -07:00
Jacob Hoffman-Andrews 1dfffa3fa4
Expand PKI.md to include override instructions (#3321)
2018-02-01 17:15:22 -08:00