Make a series of small changes to our test database schema, both to make
it simpler to reason about and to bring it into closer alignment with our
production database schema:
- Incorporate the IssuedNamesDropIndex, Incidents, SimplePartitioning,
and NotUnique migrations into the CombinedSchema, as they have been
fully applied in prod;
- Use CHARSET=utf8mb4 everywhere, instead of just utf8;
- Use UNSIGNED for auto-increment ID columns in the tables where prod
does; and
- Re-sort the tables in CombinedSchema which no longer have foreign key
constraints.
Part of https://github.com/letsencrypt/boulder/issues/6820
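For illustration, the charset and unsigned-ID items above correspond to table
definitions along these lines (a sketch only; the actual tables and columns in
CombinedSchema may differ):
```
# Sketch only: the utf8mb4 + UNSIGNED auto-increment pattern applied to an
# example table; names are illustrative, not copied from CombinedSchema.
mysql -u root boulder_sa_test <<'SQL'
CREATE TABLE exampleTable (
  id BIGINT(20) UNSIGNED NOT NULL AUTO_INCREMENT,
  value VARCHAR(255) NOT NULL,
  PRIMARY KEY (id)
) ENGINE=InnoDB CHARSET=utf8mb4;
SQL
```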
Delete the ocsp-updater service, and the //ocsp/updater library that
supports it. Remove test configs for the service, and remove references
to the service from other test files.
This service has been fully shut down for an extended period now, and is
safe to remove.
Fixes #6499
In dev docker we've always used a single schema (`boulder_sa`) with two
environments (`test` and `integration`), making for a combined total of two
databases sharing the same users and schema (e.g. `boulder_sa_test` and
`boulder_sa_integration`). There are also two versions of this schema: `db` and
`db-next`. The former is the schema as it should exist in production, and the
latter is everything from `db` plus some not-yet-deployed schema changes. This change
adds support for additional schemas with the same aforementioned environments
and versions.
- Add support for additional schemas in `test/create_db.sh` and `sa/migrations.sh`
- Add new schema `incidents_sa` with its own users
- Replace `bitbucket.org/liamstask/goose/` with `github.com/rubenv/sql-migrate`
Part of #6328
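For reference, the upstream `sql-migrate` CLI is driven by a small YAML config
that maps environment names to DSNs and migration directories; a typical
invocation looks roughly like this (a sketch of the upstream tool only; the
config path and environment name are examples, and `sa/migrations.sh` may wire
things up differently):
```
# Sketch: apply and inspect pending migrations for one environment.
sql-migrate up -config=test/dbconfig.yml -env=test
sql-migrate status -config=test/dbconfig.yml -env=test
```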
Docker container should load the appropriate schema (`sa/_db` or
`sa/_db-next`) for the given configuration.
- Add `docker-compose.next.yml` docker-compose overrides
- Detect when to apply `sa/_db-next/migrations`
- Detect mismatch between `goose dbversion` and the latest migration
- Symlink `promoted` schema back to `sa/_db-next/migrations`
- Add tooling to consistently promote/demote schema migrations
Fixes #5300
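As an illustration, docker-compose applies override files by stacking `-f`
flags, so opting into the `db-next` schema looks something like this (a sketch;
the exact service name and flags used by our tooling may differ):
```
# Sketch: stack the override on top of the base compose file so the container
# loads sa/_db-next instead of sa/_db. The service name is an example.
docker-compose -f docker-compose.yml -f docker-compose.next.yml up boulder
```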
By default the MariaDB/MySQL container starts with a global max
connections limit of 100. The SA is configured in `test/config` and
`test/config-next` to use 100 connections. This leaves no headroom for
`ocsp-updater` connections, and the limit can be reached pretty easily
under load.
This commit adjusts the global max connections setting from
`test/create_db.sh`, setting it to a more generous `500` instead of the
default `100`.
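The adjustment itself is a one-line statement against the running server,
something along these lines (a sketch, assuming the root account and the dev
container's hostname):
```
# Sketch: raise the global connection limit on the dev MariaDB container.
# Takes effect immediately but does not persist across server restarts.
mysql -u root -h boulder-mysql -e "SET GLOBAL max_connections = 500;"
```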
We started running our DB migrations in the background to speed up CI. However,
the semantics of subprocesses and `wait` mean that if a migration fails, the
overall `create_db.sh` doesn't fail. That means, for instance, tests continue to
run, and it's hard to find the resulting error.
This change runs the migrations in serial again so that we can catch such errors
more easily.
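The underlying issue is plain shell semantics: a bare `wait` returns 0 even if
a background job failed, so `set -e` never triggers. A minimal sketch of the
two patterns, with a hypothetical `migrate_db` command standing in for the real
migration step:
```
set -e

# Backgrounded: a failure in either job is silently swallowed, because a bare
# `wait` exits 0 regardless of the children's exit codes.
migrate_db boulder_sa_test &
migrate_db boulder_sa_integration &
wait

# Serial: set -e stops the script at the first failing migration.
migrate_db boulder_sa_test
migrate_db boulder_sa_integration
```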
Switch certificates and certificateStatus to use autoincrement primary keys to avoid performance problems with clustered indexes (fixes #2754).
Remove empty externalCerts and identifierData tables (fixes #2881).
Make progress towards deleting unnecessary LockCol and subscriberApproved fields (#856, #873) by making them NULLable and not including them in INSERTs and UPDATEs.
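The rough shape of these changes, sketched as hand-run statements (the real
migrations live under sa/_db, and the exact tables and columns may differ):
```
# Sketch only: illustrative statements, not the actual migration files.
mysql -u root boulder_sa_test <<'SQL'
-- Replace the string primary key with an auto-increment integer id.
ALTER TABLE certificates
  DROP PRIMARY KEY,
  ADD COLUMN id BIGINT(20) NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
-- Make the soon-to-be-deleted bookkeeping column NULLable so INSERTs and
-- UPDATEs can omit it.
ALTER TABLE certificates MODIFY LockCol BIGINT(20) DEFAULT NULL;
-- Drop the empty, unused tables.
DROP TABLE externalCerts;
DROP TABLE identifierData;
SQL
```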
This PR makes two improvements to how we handle migrations locally:
1) Prior to this PR an optimization was present in `test/create_db.sh` that would `exit 0` if the `boulder_sa_integration` database existed. This early exit meant that after the first invocation of `create_db.sh` no further `goose` migrations would be applied unless the operator dropped their databases or edited the script.
This PR reworks the existing DB optimization so that it only skips the `CREATE DATABASE` statements and allows `goose` to try to apply migrations. This doesn't result in significantly longer start-up times because Goose is smart enough to know when no migrations are required and outputs something similar to:
`goose: no migrations to run. current version: 20160602142227`
This should address #2174.
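In shell terms, the rework amounts to something like this inside
`test/create_db.sh` (a simplified sketch; the real script also handles users
and multiple databases, and the goose environment name here is an example):
```
# Only skip the CREATE DATABASE step when the DB already exists; always give
# goose a chance to run, since it no-ops when there's nothing to apply.
if ! mysql -u root -e 'USE boulder_sa_integration;' 2>/dev/null; then
  mysql -u root -e 'CREATE DATABASE boulder_sa_integration;'
fi
goose -path ./sa/_db -env integration up
```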
2) This PR also implements a separate `sa/_db-next/` directory for "pending" migrations. This is meant to follow the "test/config" vs "test/config-next" approach to managing changes that are developed but not yet activated in production.
Migrations that are to-be-performed by Ops should be created in the `sa/_db-next` directory first. Once they have been performed by ops in staging/prod and the config flag gate for the migration (see CONTRIBUTING.md) has been set to true, the migration can be moved from `_db-next` to `_db`.
By default all pending migrations from the `-next` directory are applied in the local dev env. If you **do not** wish these migrations to be applied then set the `APPLY_NEXT_MIGRATIONS` env var to false. E.g.:
`docker-compose run -eAPPLY_NEXT_MIGRATIONS=false boulder`
This should address #2195
That change broke the certbot tests because it switched to a MariaDB
10.1-specific syntax. certbot/certbot#3058 changes the certbot tests to use
Boulder's docker-compose.yml, so they will get MariaDB 10.1 automatically.
Certbot invokes the `test/create_db.sh` script during its integration testing. The Boulder MariaDB instance is moving to 10.1, and the `sa_db_users.sql` SQL fragment for creating users has been changed to use a 10.1+ syntax feature that creates users only if they don't exist. Since Certbot and Travis remain on 10.0, this presently breaks their build.
This pull request changes `create_db.sh` to detect whether the MariaDB instance is 10.0 and, if so, use a `mariadb100_users.sql` SQL fragment that maintains the 10.0-compatible way of creating users. When Certbot and Travis can support MariaDB 10.1 we can kill the `mariadb100_users.sql` file and the corresponding logic.
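The detection itself can be a simple version-string check in `create_db.sh`,
roughly like this (a sketch; the hostname and fragment paths are illustrative):
```
# Pick the user-creation SQL fragment based on the MariaDB server version.
MYSQL_VERSION=$(mysql -ss -u root -h boulder-mysql -e 'SELECT VERSION();')
case "$MYSQL_VERSION" in
  10.0.*)
    USERS_SQL=test/mariadb100_users.sql ;;  # 10.0: no CREATE USER IF NOT EXISTS
  *)
    USERS_SQL=test/sa_db_users.sql ;;       # 10.1+: uses the IF NOT EXISTS syntax
esac
mysql -u root -h boulder-mysql < "$USERS_SQL"
```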
* MariaDB 10.1
* MariaDB 10.1 in Docker
* Run docker stuff.
* Improve test.js error.
* Lower log level
* Revert dockerfile to master
* Export debug ports, set FAKE_DNS, and remove container_name.
* Remove typo.
* Make integration-test.py wait for debug ports.
* Use 10.1 and export more Boulder ports.
* Test updates for Docker
Listen on 0.0.0.0 for utility servers.
Make integration-test.py just wait for ports rather than calling startservers.
Run docker-compose in test.sh.
Remove bypass when database exists.
Separate mailer test into its own function in integration test.
Print better errors in test.js.
* Always bring up mysql container.
* Wait for MySQL to come up.
* Put it in travis-before-install.
* Use 127
* Remove manual docker-up.
* Add ifconfig
* Switch to docker-compose run
* It works!
* Remove some spurious env vars.
* Add bash
* try running it
* Add all deps.
* Pass through env.
* Install everything in the Dockerfile.
* Fix install of ruby
* More improvements
* Revert integration test to run directly
Also remove .git from dockerignore and add some packages.
* Revert integration-test.py to master.
* Stop ignoring test/js
* Start from boulder-tools.
* Add boulder-tools.
* Tweak travis.yml
* Separate out docker-compose pull as install.
* Build in install phase; don't bother with go install in Dockerfile
* Add virtualenv
* Actually build rabbitmq-setup
* Remove FAKE_DNS
* Trivial change
* Pull boulder-tools as a separate step so it gets its own timing info.
* Install certbot and protobuf from repos.
* Use certbot from debian backports.
* Fix clone
* Remove CERTBOT_PATH
* Updates
* Go back to letsencrypt for build.sh
* Remove certbot volume.
* go back to preinstalled letsencrypt
* Restore ENV
* Remove BASH_ENV
* Adapt reloader test so it passes when run as root.
* Fixups for review.
* Revert test.js
* Revert startservers.py
* Revert Makefile.
* Delete Policy DB.
This is no longer needed now that we have a JSON policy file.
* Fix tests.
* Revert Dockerfile.
* Fix create_db
* Simplify user addition.
* Fix tests.
* Fix tests
* Review fixes.
https://github.com/letsencrypt/boulder/pull/1773
Use bridged networking.
Add some files to .dockerignore to shrink the build context sent to the Docker
daemon.
Use specific hostnames to contact services, rather than localhost.
Add instructions for adding those hostnames to /etc/hosts in non-Docker config.
Use DSN-style connect strings for DBs.
Remove localhost / 127.0.0.1 rewrite hack from create_db.sh.
Add hosts section with new hostnames.
Remove bin from .dockerignore.
SQL grants go to %
Short-circuit DB creation if already existing.
Make `go install` a part of Docker image build so that Docker run is much
faster.
Bind to 0.0.0.0 for OCSP responders so they can be reached from host, and
publish / expose their ports.
Remove ToSServerThread and test.js' fetch of ToS.
Increase the registrationsPerIP rate limit threshold. When issuing from a Docker
host, the 127.0.0.1 override doesn't apply, so the limit is quickly hit.
Update docker-compose for bridged networking. Note: docker-compose doesn't currently work, but should be close.
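For example, a DSN-style connect string in go-sql-driver/mysql's
`user@tcp(host:port)/dbname` form names the MariaDB container by hostname
rather than relying on localhost (user, host, and database names here are
illustrative, and the config key that holds the string may differ):
```
# Sketch: a DSN pointing at the MariaDB container by its Docker hostname.
DB_CONNECT="sa@tcp(boulder-mysql:3306)/boulder_sa_integration"
```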
https://github.com/letsencrypt/boulder/pull/1639
This ensures that services in mariadb and rabbitmq containers bind only to 127.0.0.1, not all interfaces. With host networking that would expose the test services outside the host.
Fixes #1594
- Separated RabbitMQ into its own container
- Cleaned up various Dockerfile-isms
- Updated routes to linked containers
- Removed nodejs; I have not been able to figure out why it was being installed
(so this could be something that is actually needed)
To setup a dev environment:
You now need `docker-compose`, but running the setup with all the
configurations is as simple as:
```
$ docker-compose build
$ docker-compose up
```
Then you can even run the `test.sh` in the container with:
```
$ docker exec -it boulder_boulder_1 bash
root@container $ ./test.sh
```
This is just an _initial_ pass at refactoring a bunch of this. There is
a bunch more I want to change and make better.
Also, with regard to database migration taking a while, I want to try to move
the goose stuff over to the mariadb container; there are just some less savory
things I don't like about starting the db in the background and then running
the migration script :/ (I like to attach to the process on container start).
I do have some thoughts on a `docker exec` command in the mariadb container
which migrates the db... but I'm still trying to think of something better.
Signed-off-by: Jessica Frazelle <acidburn@docker.com>
Fixes https://github.com/letsencrypt/boulder/issues/898
Also removes the currently-unused 'development' DB, and runs the initial migrations
in parallel, which shortens create_db.sh from 20 seconds to 10 seconds.
Changes ResetTestDatabase into two functions, one each for SA and Policy DBs,
which take care of setting up the DB connection using a special higher-privileged
user called test_setup.
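For context, `test_setup` is only used by the test helpers to drop and
recreate data; its setup looks roughly like this (a sketch; the host, grant
scope, and database names are illustrative, and the real statements live with
the other user-creation SQL):
```
# Sketch: create the higher-privileged user the test reset helpers connect as.
mysql -u root <<'SQL'
CREATE USER 'test_setup'@'localhost';
GRANT ALL PRIVILEGES ON boulder_sa_test.* TO 'test_setup'@'localhost';
GRANT ALL PRIVILEGES ON boulder_policy_test.* TO 'test_setup'@'localhost';
SQL
```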
Adds a new service, Publisher, which exists to submit issued certificates to various Certificate Transparency logs. Once submitted the Publisher will also parse and store the returned SCT (Signed Certificate Timestamp) receipts that are used to prove inclusion in a specific log in the SA database. A SA migration adds the new SCT receipt table.
The Publisher only exposes one method, SubmitToCT, which is called in a goroutine by ca.IssueCertificate so as not to block any other issuance operations. This method will iterate through all of the configured logs, attempting to submit the certificate, and any required intermediate certificates, to them. If a submission to a log fails, it will be retried the pre-configured number of times, using either a back-off set in a Retry-After header or a pre-configured back-off between submission attempts.
This changeset is the first of a number of changes ending with serving SCT receipts in OCSP responses and purposefully leaves out the following pieces for follow-up PRs.
* A fake CT server for integration testing
* An external tool to search the database for certificates lacking a full set of SCT receipts
* A method to construct X.509 v3 extensions containing receipts for the OCSP responder
* Returned SCT signature verification (beyond just checking that the signature is of the correct type so we aren't just serving arbitrary binary blobs to clients)
Resolves #95.
This has required some substantive changes to the tests. Where
previously the foreign key constraints did not exist in the tests, now
that we use the actual production schema, they do. This has mostly led
to having to create real Registrations in the sa, ca, and ra tests. Long
term, it would be nice to fake this out better instead of needing a real
sa in the ca and ra tests.
The "goose" being referred to is <https://bitbucket.org/liamstask/goose>.
Database migrations are stored in a _db directory inside the relevant
owner service (namely, ca/_db, and sa/_db, today).
An example of migrating up with goose:
goose -path ./sa/_db -env test up
An example of creating a new migration with goose:
goose -path ./sa/_db -env test create NameOfNewMigration sql
Notice the "sql" at the end. It would be easier for us to manage sql
migrations. I would like us to stick to only them. In case we do use Go
migrations in the future, the underscore at the beginning of "_db" will
at least prevent build errors when using "..." with goose-created Go
files. Goose-created Go migrations do not compile with the go tool but
only with goose.
Fixes #111
Unblocks #623
This change moves the integration tests and test/boulder-config.json from
SQLite to MariaDB.
It does not port the unit tests over, unfortunately. That's a much more
invasive change.
This also updates the Dockerfile to include the MariaDB and RabbitMQ
requirements of start.py as well as adjusts the CMD to expose the
boulder server to the host machine. The Dockerfile also needed to have
its Go version bumped and the test.sh had to grow some explicit
"function"s.
Updates #132