Change log
==========

1.8.0 (2016-06-14)
------------------

**Breaking Changes**

- As announced in 1.7.0, `docker-compose rm` now removes containers
  created by `docker-compose run` by default.

- Setting `entrypoint` on a service now empties out any default
  command that was set on the image (i.e. any `CMD` instruction in the
  Dockerfile used to build it). This makes it consistent with
  the `--entrypoint` flag to `docker run`.
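
  For illustration (service and image names here are made up), with this
  configuration the image's `CMD` is now discarded rather than passed as
  arguments to the entrypoint, just as with `docker run --entrypoint`:

      version: "2"
      services:
        app:
          image: example/app        # suppose its Dockerfile sets CMD ["serve"]
          entrypoint: /app/start.sh # the image's CMD is now ignored

  To keep the image's default command, repeat it explicitly under `command:`.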

New Features

- Added `docker-compose bundle`, a command that builds a bundle file
  to be consumed by the new *Docker Stack* commands in Docker 1.12.

- Added `docker-compose push`, a command that pushes service images
  to a registry.

- Compose now supports specifying a custom TLS version for
  interaction with the Docker Engine using the `COMPOSE_TLS_VERSION`
  environment variable.

Bug Fixes

- Fixed a bug where Compose would erroneously try to read `.env`
  at the project's root when it is a directory.

- `docker-compose run -e VAR` now passes `VAR` through from the shell
  to the container, as with `docker run -e VAR`.

- Improved config merging when multiple compose files are involved
  for several service sub-keys.

- Fixed a bug where volume mappings containing Windows drives would
  sometimes be parsed incorrectly.

- Fixed a bug in Windows environment where volume mappings of the
  host's root directory would be parsed incorrectly.

- Fixed a bug where `docker-compose config` would output an invalid
  Compose file if external networks were specified.

- Fixed an issue where unset buildargs would be assigned a string
  containing `'None'` instead of the expected empty value.

- Fixed a bug where yes/no prompts on Windows would not show before
  receiving input.

- Fixed a bug where trying to `docker-compose exec` on Windows
  without the `-d` option would exit with a stacktrace. This will
  still fail for the time being, but should do so gracefully.

- Fixed a bug where errors during `docker-compose up` would show
  an unrelated stacktrace at the end of the process.

- `docker-compose create` and `docker-compose start` show more
  descriptive error messages when something goes wrong.


1.7.1 (2016-05-04)
------------------

Bug Fixes

- Fixed a bug where the output of `docker-compose config` for v1 files
  would be an invalid configuration file.

- Fixed a bug where `docker-compose config` would not check the validity
  of links.

- Fixed an issue where `docker-compose help` would not output a list of
  available commands and generic options as expected.

- Fixed an issue where filtering by service when using `docker-compose logs`
  would not apply for newly created services.

- Fixed a bug where unchanged services would sometimes be recreated in
  the up phase when using Compose with Python 3.

- Fixed an issue where API errors encountered during the up phase would
  not be recognized as a failure state by Compose.

- Fixed a bug where Compose would raise a NameError because of an undefined
  exception name on non-Windows platforms.

- Fixed a bug where the wrong version of `docker-py` would sometimes be
  installed alongside Compose.

- Fixed a bug where the host value output by `docker-machine config default`
  would not be recognized as valid options by the `docker-compose`
  command line.

- Fixed an issue where Compose would sometimes exit unexpectedly while
  reading events broadcast by a Swarm cluster.

- Corrected a statement in the docs about the location of the `.env` file,
  which is indeed read from the current directory, instead of in the same
  location as the Compose file.


1.7.0 (2016-04-13)
------------------

**Breaking Changes**

- `docker-compose logs` no longer follows log output by default. It now
  matches the behaviour of `docker logs` and exits after the current logs
  are printed. Use `-f` to get the old default behaviour.

- Booleans are no longer allowed as values for mappings in the Compose file
  (for keys `environment`, `labels` and `extra_hosts`). Previously this
  was a warning. Boolean values should be quoted so they become string values.

New Features

- Compose now looks for a `.env` file in the directory where it's run and
  reads any environment variables defined inside, if they're not already
  set in the shell environment. This lets you easily set defaults for
  variables used in the Compose file, or for any of the `COMPOSE_*` or
  `DOCKER_*` variables.

- Added a `--remove-orphans` flag to both `docker-compose up` and
  `docker-compose down` to remove containers for services that were removed
  from the Compose file.

- Added a `--all` flag to `docker-compose rm` to include containers created
  by `docker-compose run`. This will become the default behavior in the next
  version of Compose.

- Added support for all the same TLS configuration flags used by the `docker`
  client: `--tls`, `--tlscert`, `--tlskey`, etc.

- Compose files now support the `tmpfs` and `shm_size` options.
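
  A minimal sketch of the two new options, assuming the version 2 file
  format (the mount point and size are illustrative):

      version: "2"
      services:
        app:
          image: busybox
          tmpfs: /run       # mount a tmpfs at /run inside the container
          shm_size: 64M     # size of /dev/shm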

- Added the `--workdir` flag to `docker-compose run`.

- `docker-compose logs` now shows logs for new containers that are created
  after it starts.

- The `COMPOSE_FILE` environment variable can now contain multiple files,
  separated by the host system's standard path separator (`:` on Mac/Linux,
  `;` on Windows).

- You can now specify a static IP address when connecting a service to a
  network with the `ipv4_address` and `ipv6_address` options.
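
  For example (the network name and addresses are illustrative), a service
  can be given a fixed address on a user-defined network:

      version: "2"
      services:
        app:
          image: busybox
          networks:
            front:
              ipv4_address: 172.16.238.10
      networks:
        front:
          driver: bridge
          ipam:
            config:
              - subnet: 172.16.238.0/24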

- Added `--follow`, `--timestamp`, and `--tail` flags to the
  `docker-compose logs` command.

- `docker-compose up` and `docker-compose start` will now start containers
  in parallel where possible.

- `docker-compose stop` now stops containers in reverse dependency order
  instead of all at once.

- Added the `--build` flag to `docker-compose up` to force it to build a new
  image. It now shows a warning if an image is automatically built when the
  flag is not used.

- Added the `docker-compose exec` command for executing a process in a running
  container.


Bug Fixes

- `docker-compose down` now removes containers created by
  `docker-compose run`.

- A more appropriate error is shown when a timeout is hit during `up` when
  using a tty.

- Fixed a bug in `docker-compose down` where it would abort if some resources
  had already been removed.

- Fixed a bug where changes to network aliases would not trigger a service
  to be recreated.

- Fixed a bug where a log message was printed about creating a new volume
  when it already existed.

- Fixed a bug where interrupting `up` would not always shut down containers.

- Fixed a bug where `log_opt` and `log_driver` were not properly carried over
  when extending services in the v1 Compose file format.

- Fixed a bug where empty values for build args would cause file validation
  to fail.


1.6.2 (2016-02-23)
------------------

- Fixed a bug where connecting to a TLS-enabled Docker Engine would fail with
  a certificate verification error.


1.6.1 (2016-02-23)
------------------

Bug Fixes

- Fixed a bug where recreating a container multiple times would cause the
  new container to be started without the previous volumes.

- Fixed a bug where Compose would set the value of unset environment variables
  to an empty string, instead of a key without a value.

- Provide a better error message when Compose requires a more recent version
  of the Docker API.

- Add a missing config field `network.aliases` which allows setting a
  network-scoped alias for a service.

- Fixed a bug where `run` would not start services listed in `depends_on`.

- Fixed a bug where `networks` and `network_mode` were not merged when using
  extends or multiple Compose files.

- Fixed a bug with service aliases where the short container id alias
  only contained 10 characters, instead of the 12 characters used in previous
  versions.

- Added a missing log message when creating a new named volume.

- Fixed a bug where `build.args` was not merged when using `extends` or
  multiple Compose files.

- Fixed some bugs with config validation when null values or incorrect types
  were used instead of a mapping.

- Fixed a bug where a `build` section without a `context` would show a stack
  trace instead of a helpful validation message.

- Improved compatibility with swarm by only setting a container affinity to
  the previous instance of a service's container when the service uses an
  anonymous container volume. Previously the affinity was always set on all
  containers.

- Fixed a bug where the validation of some `driver_opts` would cause an error
  if a number was used instead of a string.

- Made some improvements to the `run.sh` script used by the Compose container
  install option.

- Fixed a bug with `up --abort-on-container-exit` where Compose would exit,
  but would not stop other containers.

- Corrected the warning message that is printed when a boolean value is used
  as a value in a mapping.


1.6.0 (2016-01-15)
------------------

Major Features:

- Compose 1.6 introduces a new format for `docker-compose.yml` which lets
  you define networks and volumes in the Compose file as well as services. It
  also makes a few changes to the structure of some configuration options.

  You don't have to use it - your existing Compose files will run on Compose
  1.6 exactly as they do today.

  Check the upgrade guide for full details:
  https://docs.docker.com/compose/compose-file#upgrading

- Support for networking has exited experimental status and is the recommended
  way to enable communication between containers.

  If you use the new file format, your app will use networking. If you aren't
  ready yet, just leave your Compose file as it is and it'll continue to work
  just the same.

  By default, you don't have to configure any networks. In fact, using
  networking with Compose involves even less configuration than using links.
  Consult the networking guide for how to use it:
  https://docs.docker.com/compose/networking

  The experimental flags `--x-networking` and `--x-network-driver`, introduced
  in Compose 1.5, have been removed.

- You can now pass arguments to a build if you're using the new file format:

      build:
        context: .
        args:
          buildno: 1

- You can now specify both a `build` and an `image` key if you're using the
  new file format. `docker-compose build` will build the image and tag it with
  the name you've specified, while `docker-compose pull` will attempt to pull
  it.
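
  A sketch of combining the two keys (the paths and image name are
  illustrative):

      version: "2"
      services:
        web:
          build: ./web
          image: myorg/web:latest  # `build` tags the result with this name;
                                   # `pull` fetches it from a registry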

- There's a new `events` command for monitoring container events from
  the application, much like `docker events`. This is a good primitive for
  building tools on top of Compose for performing actions when particular
  things happen, such as containers starting and stopping.

- There's a new `depends_on` option for specifying dependencies between
  services. This enforces the order of startup, and ensures that when you run
  `docker-compose up SERVICE` on a service with dependencies, those are started
  as well.
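
  For example (service names are illustrative), running `docker-compose up web`
  with this file also starts `db`, and `db` is started first:

      version: "2"
      services:
        web:
          build: .
          depends_on:
            - db
        db:
          image: postgres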

New Features:

- Added a new command `config` which validates and prints the Compose
  configuration after interpolating variables, resolving relative paths, and
  merging multiple files and `extends`.

- Added a new command `create` for creating containers without starting them.

- Added a new command `down` to stop and remove all the resources created by
  `up` in a single command.

- Added support for the `cpu_quota` configuration option.

- Added support for the `stop_signal` configuration option.

- Commands `start`, `restart`, `pause`, and `unpause` now exit with an
  error status code if no containers were modified.

- Added a new `--abort-on-container-exit` flag to `up` which causes `up` to
  stop all containers and exit once the first container exits.

- Removed support for `FIG_FILE` and `FIG_PROJECT_NAME`, and Compose no
  longer reads `fig.yml` as a default Compose file location.

- Removed the `migrate-to-labels` command.

- Removed the `--allow-insecure-ssl` flag.


Bug Fixes:

- Fixed a validation bug that prevented the use of a range of ports in
  the `expose` field.

- Fixed a validation bug that prevented the use of arrays in the `entrypoint`
  field if they contained duplicate entries.

- Fixed a bug that caused `ulimits` to be ignored when used with `extends`.

- Fixed a bug that prevented ipv6 addresses in `extra_hosts`.

- Fixed a bug that caused `extends` to be ignored when included from
  multiple Compose files.

- Fixed an incorrect warning when a container volume was defined in
  the Compose file.

- Fixed a bug that prevented the force shutdown behaviour of `up` and
  `logs`.

- Fixed a bug that caused `None` to be printed as the network driver name
  when the default network driver was used.

- Fixed a bug where using the string form of `dns` or `dns_search` would
  cause an error.

- Fixed a bug where a container would be reported as "Up" when it was
  in the restarting state.

- Fixed a confusing error message when `DOCKER_CERT_PATH` was not set properly.

- Fixed a bug where attaching to a container would fail if it was using a
  non-standard logging driver (or none at all).


1.5.2 (2015-12-03)
------------------

- Fixed a bug which broke the use of `environment` and `env_file` with
  `extends`, and caused environment keys without values to have a `None`
  value, instead of a value from the host environment.

- Fixed a regression in 1.5.1 that caused a warning about volumes to be
  raised incorrectly when containers were recreated.

- Fixed a bug which prevented building a `Dockerfile` that used `ADD <url>`.

- Fixed a bug with `docker-compose restart` which prevented it from
  starting stopped containers.

- Fixed handling of SIGTERM and SIGINT to properly stop containers.

- Add support for using a URL as the value of `build`.

- Improved the validation of the `expose` option.


1.5.1 (2015-11-12)
------------------

- Add the `--force-rm` option to `build`.

- Add the `ulimit` option for services in the Compose file.

- Fixed a bug where `up` would error with "service needs to be built" if
  a service changed from using `image` to using `build`.

- Fixed a bug that would cause incorrect output of parallel operations
  on some terminals.

- Fixed a bug that prevented a container from being recreated when the
  mode of a `volumes_from` was changed.

- Fixed a regression in 1.5.0 where non-utf-8 unicode characters would cause
  `up` or `logs` to crash.

- Fixed a regression in 1.5.0 where Compose would use a success exit status
  code when a command fails due to an HTTP timeout communicating with the
  docker daemon.

- Fixed a regression in 1.5.0 where `name` was being accepted as a valid
  service option which would override the actual name of the service.

- When using `--x-networking`, Compose no longer sets the hostname to the
  container name.

- When using `--x-networking`, Compose will only create the default network
  if at least one container is using the network.

- When printing logs during `up` or `logs`, flush the output buffer after
  each line to prevent buffering issues from hiding logs.

- Recreate a container if one of its dependencies is being created.
  Previously a container was only recreated if its dependencies already
  existed, but were being recreated as well.

- Add a warning when a `volume` in the Compose file is being ignored
  and masked by a container volume from a previous container.

- Improve the output of `pull` when run without a tty.

- When using multiple Compose files, validate each before attempting to merge
  them together. Previously invalid files would result in unhelpful errors.

- Allow dashes in keys in the `environment` service option.

- Improve validation error messages by including the filename as part of the
  error message.


1.5.0 (2015-11-03)
------------------

**Breaking changes:**

With the introduction of variable substitution support in the Compose file, any
Compose file that uses an environment variable (`$VAR` or `${VAR}`) in the `command:`
or `entrypoint:` field will break.

Previously these values were interpolated inside the container, with a value
from the container environment. In Compose 1.5.0, the values will be
interpolated on the host, with a value from the host environment.

To migrate a Compose file to 1.5.0, escape the variables with an extra `$`
(ex: `$$VAR` or `$${VAR}`). See
https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution

Major features:

- Compose is now available for Windows.

- Environment variables can be used in the Compose file. See
  https://github.com/docker/compose/blob/8cc8e61/docs/compose-file.md#variable-substitution

- Multiple Compose files can be specified, allowing you to override
  settings in the default Compose file. See
  https://github.com/docker/compose/blob/8cc8e61/docs/reference/docker-compose.md
  for more details.

- Compose now produces better error messages when a file contains
  invalid configuration.

- `up` now waits for all services to exit before shutting down,
  rather than shutting down as soon as one container exits.

- Experimental support for the new docker networking system can be
  enabled with the `--x-networking` flag. Read more here:
  https://github.com/docker/docker/blob/8fee1c20/docs/userguide/dockernetworks.md

New features:

- You can now optionally pass a mode to `volumes_from`, e.g.
  `volumes_from: ["servicename:ro"]`.
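
  In the version 1 file format of the time, this looks like the following
  (service names and the volume path are illustrative):

      web:
        image: busybox
        volumes_from:
          - data:ro        # mount data's volumes read-only
      data:
        image: busybox
        volumes:
          - /var/lib/data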

- Since Docker now lets you create volumes with names, you can refer to those
  volumes by name in `docker-compose.yml`. For example,
  `volumes: ["mydatavolume:/data"]` will mount the volume named
  `mydatavolume` at the path `/data` inside the container.

  If the first component of an entry in `volumes` starts with a `.`, `/` or
  `~`, it is treated as a path and expansion of relative paths is performed as
  necessary. Otherwise, it is treated as a volume name and passed straight
  through to Docker.

  Read more on named volumes and volume drivers here:
  https://github.com/docker/docker/blob/244d9c33/docs/userguide/dockervolumes.md

- `docker-compose build --pull` instructs Compose to pull the base image for
  each Dockerfile before building.

- `docker-compose pull --ignore-pull-failures` instructs Compose to continue
  if it fails to pull a single service's image, rather than aborting.

- You can now specify an IPC namespace in `docker-compose.yml` with the `ipc`
  option.

- Containers created by `docker-compose run` can now be named with the
  `--name` flag.

- If you install Compose with pip or use it as a library, it now works with
  Python 3.

- `image` now supports image digests (in addition to ids and tags), e.g.
  `image: "busybox@sha256:38a203e1986cf79639cfb9b2e1d6e773de84002feea2d4eb006b52004ee8502d"`

- `ports` now supports ranges of ports, e.g.

      ports:
        - "3000-3005"
        - "9000-9001:8000-8001"

- `docker-compose run` now supports a `-p|--publish` parameter, much like
  `docker run -p`, for publishing specific ports to the host.

- `docker-compose pause` and `docker-compose unpause` have been implemented,
  analogous to `docker pause` and `docker unpause`.

- When using `extends` to copy configuration from another service in the same
  Compose file, you can omit the `file` option.

- Compose can be installed and run as a Docker image. This is an experimental
  feature.

Bug fixes:

- All values for the `log_driver` option which are supported by the Docker
  daemon are now supported by Compose.

- `docker-compose build` can now be run successfully against a Swarm cluster.


1.4.2 (2015-09-22)
------------------

- Fixed a regression in the 1.4.1 release that would cause `docker-compose up`
  without the `-d` option to exit immediately.


1.4.1 (2015-09-10)
------------------

The following bugs have been fixed:

- Some configuration changes (notably changes to `links`, `volumes_from`, and
  `net`) were not properly triggering a container recreate as part of
  `docker-compose up`.
- `docker-compose up <service>` was showing logs for all services instead of
  just the specified services.
- Containers with custom container names were showing up in logs as
  `service_number` instead of their custom container name.
- When scaling a service, sometimes containers would be recreated even when
  the configuration had not changed.


1.4.0 (2015-08-04)
------------------

- By default, `docker-compose up` now only recreates containers for services whose configuration has changed since they were created. This should result in a dramatic speed-up for many applications.

  The experimental `--x-smart-recreate` flag which introduced this feature in Compose 1.3.0 has been removed, and a `--force-recreate` flag has been added for when you want to recreate everything.

- Several of Compose's commands - `scale`, `stop`, `kill` and `rm` - now perform actions on multiple containers in parallel, rather than in sequence, which will run much faster on larger applications.

- You can now specify a custom name for a service's container with `container_name`. Because Docker container names must be unique, this means you can't scale the service beyond one container.

- You no longer have to specify a `file` option when using `extends` - it will default to the current file.

- Service names can now contain dots, dashes and underscores.

- Compose can now read YAML configuration from standard input, rather than from a file, by specifying `-` as the filename. This makes it easier to generate configuration dynamically:

      $ echo 'redis: {"image": "redis"}' | docker-compose --file - up

- There's a new `docker-compose version` command which prints extended information about Compose's bundled dependencies.

- `docker-compose.yml` now supports `log_opt` as well as `log_driver`, allowing you to pass extra configuration to a service's logging driver.
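
  A sketch using the syslog driver in the version 1 format (the address is
  illustrative):

      app:
        image: busybox
        log_driver: syslog
        log_opt:
          syslog-address: "tcp://192.168.0.42:514"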
|
|
||||||
|
|
||||||
- `docker-compose.yml` now supports `memswap_limit`, similar to `docker run --memory-swap`.
|
|
||||||
|
|
||||||
- When mounting volumes with the `volumes` option, you can now pass in any mode supported by the daemon, not just `:ro` or `:rw`. For example, SELinux users can pass `:z` or `:Z`.
|
|
||||||
|
|
||||||
- You can now specify a custom volume driver with the `volume_driver` option in `docker-compose.yml`, much like `docker run --volume-driver`.
|
|
||||||
|
|
||||||
- A bug has been fixed where Compose would fail to pull images from private registries serving plain (unsecured) HTTP. The `--allow-insecure-ssl` flag, which was previously used to work around this issue, has been deprecated and now has no effect.
|
|
||||||
|
|
||||||
- A bug has been fixed where `docker-compose build` would fail if the build depended on a private Hub image or an image from a private registry.
|
|
||||||
|
|
||||||
- A bug has been fixed where Compose would crash if there were containers which the Docker daemon had not finished removing.
|
|
||||||
|
|
||||||
- Two bugs have been fixed where Compose would sometimes fail with a "Duplicate bind mount" error, or fail to attach volumes to a container, if there was a volume path specified in `docker-compose.yml` with a trailing slash.
|
|
||||||
|
|
||||||
Thanks @mnowster, @dnephin, @ekristen, @funkyfuture, @jeffk and @lukemarsden!
|
|
||||||
|
|
||||||
1.3.3 (2015-07-15)
|
|
||||||
------------------
|
|
||||||
|
|
||||||
Two regressions have been fixed:
|
|
||||||
|
|
||||||
- When stopping containers gracefully, Compose was setting the timeout to 0, effectively forcing a SIGKILL every time.
|
|
||||||
- Compose would sometimes crash depending on the formatting of container data returned from the Docker API.
|
|
||||||
|
|
||||||
1.3.2 (2015-07-14)
|
|
||||||
------------------
|
|
||||||
|
|
||||||
The following bugs have been fixed:
|
|
||||||
|
|
||||||
- When there were one-off containers created by running `docker-compose run` on an older version of Compose, `docker-compose run` would fail with a name collision. Compose now shows an error if you have leftover containers of this type lying around, and tells you how to remove them.
|
|
||||||
- Compose was not reading Docker authentication config files created in the new location, `~/docker/config.json`, and authentication against private registries would therefore fail.
|
|
||||||
- When a container had a pseudo-TTY attached, its output in `docker-compose up` would be truncated.
|
|
||||||
- `docker-compose up --x-smart-recreate` would sometimes fail when an image tag was updated.
|
|
||||||
- `docker-compose up` would sometimes create two containers with the same numeric suffix.
|
|
||||||
- `docker-compose rm` and `docker-compose ps` would sometimes list services that aren't part of the current project (though no containers were erroneously removed).
|
|
||||||
- Some `docker-compose` commands would not show an error if invalid service names were passed in.
|
|
||||||
|
|
||||||
Thanks @dano, @josephpage, @kevinsimper, @lieryan, @phemmer, @soulrebel and @sschepens!
|
|
||||||
|
|
||||||
1.3.1 (2015-06-21)
|
|
||||||
------------------
|
|
||||||
|
|
||||||
The following bugs have been fixed:
|
|
||||||
|
|
||||||
- `docker-compose build` would always attempt to pull the base image before building.
|
|
||||||
- `docker-compose help migrate-to-labels` failed with an error.
|
|
||||||
- If no network mode was specified, Compose would set it to "bridge", rather than allowing the Docker daemon to use its configured default network mode.
|
|
||||||
|
|
||||||
1.3.0 (2015-06-18)
|
|
||||||
------------------
|
|
||||||
|
|
||||||
Firstly, two important notes:

- **This release contains breaking changes, and you will need to either remove or migrate your existing containers before running your app** - see the [upgrading section of the install docs](https://github.com/docker/compose/blob/1.3.0rc1/docs/install.md#upgrading) for details.

- Compose now requires Docker 1.6.0 or later.

We've done a lot of work in this release to remove hacks and make Compose more stable:

- Compose now uses container labels, rather than names, to keep track of containers. This makes Compose both faster and easier to integrate with your own tools.

- Compose no longer uses "intermediate containers" when recreating containers for a service. This makes `docker-compose up` less complex and more resilient to failure.
There are some new features:

- `docker-compose up` has an **experimental** new behaviour: it will only recreate containers for services whose configuration has changed in `docker-compose.yml`. This will eventually become the default, but for now you can take it for a spin:

        $ docker-compose up --x-smart-recreate

- When invoked in a subdirectory of a project, `docker-compose` will now climb up through parent directories until it finds a `docker-compose.yml`.
Several new configuration keys have been added to `docker-compose.yml`:

- `dockerfile`, like `docker build --file`, lets you specify an alternate Dockerfile to use with `build`.
- `labels`, like `docker run --label`, lets you add custom metadata to containers.
- `extra_hosts`, like `docker run --add-host`, lets you add entries to a container's `/etc/hosts` file.
- `pid: host`, like `docker run --pid=host`, lets you reuse the same PID namespace as the host machine.
- `cpuset`, like `docker run --cpuset-cpus`, lets you specify which CPUs to allow execution in.
- `read_only`, like `docker run --read-only`, lets you mount a container's filesystem as read-only.
- `security_opt`, like `docker run --security-opt`, lets you specify [security options](https://docs.docker.com/engine/reference/run/#security-configuration).
- `log_driver`, like `docker run --log-driver`, lets you specify a [log driver](https://docs.docker.com/engine/reference/run/#logging-drivers-log-driver).
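As an illustrative sketch only (the service name and values here are invented, not taken from this release's docs), several of these keys might be combined in a `docker-compose.yml` like so:

```
web:
  build: .
  dockerfile: Dockerfile-alternate
  labels:
    com.example.description: "Example web service"
  extra_hosts:
    - "dockerhost:192.168.0.1"
  cpuset: "0,1"
  read_only: true
  log_driver: "json-file"
```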
Many bugs have been fixed, including the following:

- The output of `docker-compose run` was sometimes truncated, especially when running under Jenkins.
- A service's volumes would sometimes not update after volume configuration was changed in `docker-compose.yml`.
- Authenticating against third-party registries would sometimes fail.
- `docker-compose run --rm` would fail to remove the container if the service had a `restart` policy in place.
- `docker-compose scale` would refuse to scale a service beyond 1 container if it exposed a specific port number on the host.
- Compose would refuse to create multiple volume entries with the same host path.

Thanks @ahromis, @albers, @aleksandr-vin, @antoineco, @ccverak, @chernjie, @dnephin, @edmorley, @fordhurley, @josephpage, @KyleJamesWalker, @lsowen, @mchasal, @noironetworks, @sdake, @sdurrheimer, @sherter, @stephenlawrence, @thaJeztah, @thieman, @turtlemonvh, @twhiteman, @vdemeester, @xuxinkun and @zwily!
1.2.0 (2015-04-16)
------------------
- `docker-compose.yml` now supports an `extends` option, which enables a service to inherit configuration from a service in another configuration file. This is really good for sharing common configuration between apps, or for configuring the same app for different environments. Here's the [documentation](https://github.com/docker/compose/blob/master/docs/yml.md#extends).

- When using Compose with a Swarm cluster, containers that depend on one another will be co-scheduled on the same node. This means that most Compose apps will now work out of the box, as long as they don't use `build`.

- Repeated invocations of `docker-compose up` when using Compose with a Swarm cluster now work reliably.

- Directories passed to `build`, filenames passed to `env_file` and volume host paths passed to `volumes` are now treated as relative to the *directory of the configuration file*, not the directory in which `docker-compose` is being run. In the majority of cases these are the same, but if you use the `-f|--file` argument to specify a configuration file in another directory, **this is a breaking change**.

- A service can now share another service's network namespace with `net: container:<service>`.

- `volumes_from` and `net: container:<service>` entries are taken into account when resolving dependencies, so `docker-compose up <service>` will correctly start all dependencies of `<service>`.

- `docker-compose run` now accepts a `--user` argument to specify a user to run the command as, just like `docker run`.

- The `up`, `stop` and `restart` commands now accept a `--timeout` (or `-t`) argument to specify how long to wait when attempting to gracefully stop containers, just like `docker stop`.

- `docker-compose rm` now accepts `-f` as a shorthand for `--force`, just like `docker rm`.

Thanks, @abesto, @albers, @alunduil, @dnephin, @funkyfuture, @gilclark, @IanVS, @KingsleyKelly, @knutwalker, @thaJeztah and @vmalloc!
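A sketch of the `extends` option described above (the file and service names here are invented for illustration, not taken from the docs):

```
# common.yml
webapp:
  image: example/webapp
  environment:
    - DEBUG=false

# docker-compose.yml
web:
  extends:
    file: common.yml
    service: webapp
  environment:
    - DEBUG=true
```

The `web` service inherits `webapp`'s configuration from `common.yml` and overrides the `DEBUG` variable locally.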
1.1.0 (2015-02-25)
------------------
Fig has been renamed to Docker Compose, or just Compose for short. This has several implications for you:

- The command you type is now `docker-compose`, not `fig`.
- You should rename your `fig.yml` to `docker-compose.yml`.
- If you're installing via PyPI, the package is now `docker-compose`, so install it with `pip install docker-compose`.

Besides that, there's a lot of new stuff in this release:
- We've made a few small changes to ensure that Compose will work with Swarm, Docker's new clustering tool (https://github.com/docker/swarm). Eventually you'll be able to point Compose at a Swarm cluster instead of a standalone Docker host and it'll run your containers on the cluster with no extra work from you. As Swarm is still developing, integration is rough and lots of Compose features don't work yet.

- `docker-compose run` now has a `--service-ports` flag for exposing ports on the given service. This is useful for e.g. running your webapp with an interactive debugger.

- You can now link to containers outside your app with the `external_links` option in `docker-compose.yml`.

- You can now prevent `docker-compose up` from automatically building images with the `--no-build` option. This will make fewer API calls and run faster.

- If you don't specify a tag when using the `image` key, Compose will default to the `latest` tag, rather than pulling all tags.

- `docker-compose kill` now supports the `-s` flag, allowing you to specify the exact signal you want to send to a service's containers.

- `docker-compose.yml` now has an `env_file` key, analogous to `docker run --env-file`, letting you specify multiple environment variables in a separate file. This is great if you have a lot of them, or if you want to keep sensitive information out of version control.

- `docker-compose.yml` now supports the `dns_search`, `cap_add`, `cap_drop`, `cpu_shares` and `restart` options, analogous to `docker run`'s `--dns-search`, `--cap-add`, `--cap-drop`, `--cpu-shares` and `--restart` options.

- Compose now ships with Bash tab completion - see the installation and usage docs at https://github.com/docker/compose/blob/1.1.0/docs/completion.md

- A number of bugs have been fixed - see the milestone for details: https://github.com/docker/compose/issues?q=milestone%3A1.1.0+

Thanks @dnephin, @squebe, @jbalonso, @raulcd, @benlangfield, @albers, @ggtools, @bersace, @dtenenba, @petercv, @drewkett, @TFenby, @paulRbr, @Aigeruth and @salehe!
1.0.1 (2014-11-04)
------------------
- Added an `--allow-insecure-ssl` option to allow `fig up`, `fig run` and `fig pull` to pull from insecure registries.
- Fixed `fig run` not showing output in Jenkins.
- Fixed a bug where Fig couldn't build Dockerfiles with `ADD` statements pointing at URLs.
1.0.0 (2014-10-16)
------------------
The highlights:

- [Fig has joined Docker.](https://www.orchardup.com/blog/orchard-is-joining-docker) Fig will continue to be maintained, but we'll also be incorporating the best bits of Fig into Docker itself.

  This means the GitHub repository has moved to [https://github.com/docker/fig](https://github.com/docker/fig) and our IRC channel is now #docker-fig on Freenode.

- Fig can be used with the [official Docker OS X installer](https://docs.docker.com/installation/mac/). Boot2Docker will mount the home directory from your host machine so volumes work as expected.

- Fig supports Docker 1.3.

- It is now possible to connect to the Docker daemon using TLS by using the `DOCKER_CERT_PATH` and `DOCKER_TLS_VERIFY` environment variables.

- There is a new `fig port` command which outputs the host port binding of a service, in a similar way to `docker port`.

- There is a new `fig pull` command which pulls the latest images for a service.

- There is a new `fig restart` command which restarts a service's containers.

- Fig creates multiple containers in a service by appending a number to the service name (e.g. `db_1`, `db_2`, etc). As a convenience, Fig will now give the first container an alias of the service name (e.g. `db`).

  This link alias is also a valid hostname and is added to `/etc/hosts`, so you can connect to linked services using their hostname. For example, instead of resolving the environment variables `DB_PORT_5432_TCP_ADDR` and `DB_PORT_5432_TCP_PORT`, you could just use the hostname `db` and port `5432` directly.

- Volume definitions now support `ro` mode, expanding `~` and expanding environment variables.

- `.dockerignore` is supported when building.

- The project name can be set with the `FIG_PROJECT_NAME` environment variable.

- The `--env` and `--entrypoint` options have been added to `fig run`.

- The Fig binary for Linux is now linked against an older version of glibc so it works on CentOS 6 and Debian Wheezy.
Other things:

- `fig ps` now works on Jenkins and makes fewer API calls to the Docker daemon.
- `--verbose` displays more useful debugging output.
- When starting a service where `volumes_from` points to a service without any containers running, that service will now be started.
- Lots of docs improvements. Notably, environment variables are documented and official repositories are used throughout.

Thanks @dnephin, @d11wtq, @marksteve, @rubbish, @jbalonso, @timfreund, @alunduil, @mieciu, @shuron, @moss, @suzaku and @chmouel! Whew.
0.5.2 (2014-07-28)
------------------
- Added a `--no-cache` option to `fig build`, which bypasses the cache just like `docker build --no-cache`.
- Fixed the `dns:` fig.yml option, which was causing fig to error out.
- Fixed a bug where fig couldn't start under Python 2.6.
- Fixed a log-streaming bug that occasionally caused fig to exit.

Thanks @dnephin and @marksteve!
0.5.1 (2014-07-11)
------------------
- If a service has a command defined, `fig run [service]` with no further arguments will run it.
- The project name now defaults to the directory containing `fig.yml`, not the current working directory (if they're different).
- `volumes_from` now works properly with containers as well as services.
- Fixed a race condition when recreating containers in `fig up`.

Thanks @ryanbrainard and @d11wtq!
0.5.0 (2014-07-11)
------------------
- Fig now starts links when you run `fig run` or `fig up`.

  For example, if you have a `web` service which depends on a `db` service, `fig run web ...` will start the `db` service.
- Environment variables can now be resolved from the environment that Fig is running in. Just specify it as a blank variable in your `fig.yml` and, if set, it'll be resolved:

  ```
  environment:
    RACK_ENV: development
    SESSION_SECRET:
  ```
- `volumes_from` is now supported in `fig.yml`. All of the volumes from the specified services and containers will be mounted:

  ```
  volumes_from:
    - service_name
    - container_name
  ```
- A host address can now be specified in `ports`:

  ```
  ports:
    - "0.0.0.0:8000:8000"
    - "127.0.0.1:8001:8001"
  ```
- The `net` and `workdir` options are now supported in `fig.yml`.
- The `hostname` option now works in the same way as the Docker CLI, splitting out into a `domainname` option.
- TTY behaviour is far more robust, and resizes are supported correctly.
- YAML files are now loaded safely.

Thanks to @d11wtq, @ryanbrainard, @rail44, @j0hnsmith, @binarin, @Elemecca, @mozz100 and @marksteve for their help with this release!
0.4.2 (2014-06-18)
------------------
- Fix various encoding errors when using `fig run`, `fig up` and `fig build`.
0.4.1 (2014-05-08)
------------------
- Add support for Docker 0.11.0. (Thanks @marksteve!)
- Make project name configurable. (Thanks @jefmathiot!)
- Return correct exit code from `fig run`.
0.4.0 (2014-04-29)
------------------
- Support Docker 0.9 and 0.10
- Display progress bars correctly when pulling images (no more ski slopes)
- `fig up` now stops all services when any container exits
- Added support for the `privileged` config option in fig.yml (thanks @kvz!)
- Shortened and aligned log prefixes in `fig up` output
- Only containers started with `fig run` link back to their own service
- Handle UTF-8 correctly when streaming `fig build/run/up` output (thanks @mauvm and @shanejonas!)
- Error message improvements
0.3.2 (2014-03-05)
------------------
- Added an `--rm` option to `fig run`. (Thanks @marksteve!)
- Added an `expose` option to `fig.yml`.
0.3.1 (2014-03-04)
------------------
- Added contribution instructions. (Thanks @kvz!)
- Fixed `fig rm` throwing an error.
- Fixed a bug in `fig ps` on Docker 0.8.1 when there is a container with no command.
0.3.0 (2014-03-03)
------------------
- We now ship binaries for OS X and Linux. No more having to install with Pip!
- Add `-f` flag to specify alternate `fig.yml` files
- Add support for custom link names
- Fix a bug where recreating would sometimes hang
- Update docker-py to support Docker 0.8.0.
- Various documentation improvements
- Various error message improvements

Thanks @marksteve, @Gazler and @teozkr!
0.2.2 (2014-02-17)
------------------
- Resolve dependencies using Cormen/Tarjan topological sort
- Fix `fig up` not printing log output
- Stop containers in the reverse of the order they were started
- Fix scale command not binding ports

Thanks to @barnybug and @dustinlacewell for their work on this release.
0.2.1 (2014-02-04)
------------------
- General improvements to error reporting (#77, #79)
0.2.0 (2014-01-31)
------------------
- Link services to themselves so run commands can access the running service. (#67)
- Much better documentation.
- Make service dependency resolution more reliable. (#48)
- Load Fig configurations with a `.yaml` extension. (#58)

Big thanks to @cameronmaske, @mrchrisadams and @damianmoore for their help with this release.
0.1.4 (2014-01-27)
------------------
- Add a link alias without the project name. This makes the environment variables a little shorter: `REDIS_1_PORT_6379_TCP_ADDR`. (#54)
0.1.3 (2014-01-23)
------------------
- Fix ports sometimes being configured incorrectly. (#46)
- Fix log output sometimes not displaying. (#47)
0.1.2 (2014-01-22)
------------------
- Add `-T` option to `fig run` to disable pseudo-TTY. (#34)
- Fix `fig up` requiring the ubuntu image to be pulled to recreate containers. (#33) Thanks @cameronmaske!
- Improve reliability, fix arrow keys and fix a race condition in `fig run`. (#34, #39, #40)
0.1.1 (2014-01-17)
------------------
- Fix a bug where ports were not exposed correctly (#29). Thanks @dustinlacewell!
0.1.0 (2014-01-16)
------------------
- Containers are recreated on each `fig up`, ensuring config is up-to-date with `fig.yml` (#2)
- Add `fig scale` command (#9)
- Use the `DOCKER_HOST` environment variable to find the Docker daemon, for consistency with the official Docker client (was previously `DOCKER_URL`) (#19)
- Truncate long commands in `fig ps` (#18)
- Fill out CLI help banners for commands (#15, #16)
- Show a friendlier error when `fig.yml` is missing (#4)
- Fix bug with `fig build` logging (#3)
- Fix bug where builds would time out if a step took a long time without generating output (#6)
- Fix bug where streaming container output over the Unix socket raised an error (#7)

Big thanks to @tomstuart, @EnTeQuAk, @schickling, @aronasorman and @GeoffreyPlitt.
0.0.2 (2014-01-02)
------------------
- Improve documentation
- Try to connect to Docker on `tcp://localdocker:4243` and a UNIX socket in addition to `localhost`.
- Improve `fig up` behaviour
- Add confirmation prompt to `fig rm`
- Add `fig build` command
0.0.1 (2013-12-20)
------------------

Initial release.
# Contributing to Compose

Compose is a part of the Docker project, and follows the same rules and
principles. Take a read of [Docker's contributing guidelines](https://github.com/docker/docker/blob/master/CONTRIBUTING.md)
to get an overview.
## TL;DR

Pull requests will need:

- Tests
- Documentation
- [To be signed off](https://github.com/docker/docker/blob/master/CONTRIBUTING.md#sign-your-work)
- A logical series of [well written commits](https://github.com/alphagov/styleguides/blob/master/git.md)
## Development environment

If you're looking to contribute to Compose but you're new to the project or
maybe even to Python, here are the steps that should get you started.

1. Fork [https://github.com/docker/compose](https://github.com/docker/compose)
   to your own account.
2. Clone your forked repository locally: `git clone git@github.com:yourusername/compose.git`.
3. You must [configure a remote](https://help.github.com/articles/configuring-a-remote-for-a-fork/) for your fork so that you can [sync changes you make](https://help.github.com/articles/syncing-a-fork/) with the original repository.
4. Enter the local directory: `cd compose`.
5. Set up a development environment by running `python setup.py develop`. This
   will install the dependencies and set up a symlink from your `docker-compose`
   executable to the checkout of the repository. When you now run
   `docker-compose` from anywhere on your machine, it will run your development
   version of Compose.
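Taken together, the steps above look something like the following session (replace `yourusername` with your GitHub username; the `upstream` remote name is just a common convention, not something the docs mandate):

    $ git clone git@github.com:yourusername/compose.git
    $ cd compose
    $ git remote add upstream https://github.com/docker/compose.git
    $ python setup.py develop
    $ docker-compose --version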
## Install pre-commit hooks

This step is optional, but recommended. Pre-commit hooks will run style checks
and, in some cases, fix style issues for you when you commit code.

Install the git pre-commit hooks using [tox](https://tox.readthedocs.org) by
running `tox -e pre-commit` or by following the
[pre-commit install guide](http://pre-commit.com/#install).

To run the style checks at any time, run `tox -e pre-commit`.
## Submitting a pull request

See Docker's [basic contribution workflow](https://docs.docker.com/opensource/workflow/make-a-contribution/#the-basic-contribution-workflow) for a guide on how to submit a pull request for code or documentation.
## Running the test suite

Use the test script to run linting checks and then the full test suite against
different Python interpreters:

    $ script/test/default

Tests are run against a Docker daemon inside a container, so that we can test
against multiple Docker versions. By default they'll run against only the latest
Docker version - set the `DOCKER_VERSIONS` environment variable to "all" to run
against all supported versions:

    $ DOCKER_VERSIONS=all script/test/default

Arguments to `script/test/default` are passed through to the `tox` executable, so
you can specify a test directory, file, module, class or method:

    $ script/test/default tests/unit
    $ script/test/default tests/unit/cli_test.py
    $ script/test/default tests/unit/config_test.py::ConfigTest
    $ script/test/default tests/unit/config_test.py::ConfigTest::test_load
## Finding things to work on

We use a [ZenHub board](https://www.zenhub.io/) to keep track of specific things we are working on and planning to work on. If you're looking for things to work on, stuff in the backlog is a great place to start.

For more information about our project planning, take a look at our [GitHub wiki](https://github.com/docker/compose/wiki).
Dockerfile
FROM debian:wheezy

RUN set -ex; \
    apt-get update -qq; \
    apt-get install -y \
        locales \
        gcc \
        make \
        zlib1g \
        zlib1g-dev \
        libssl-dev \
        git \
        ca-certificates \
        curl \
        libsqlite3-dev \
    ; \
    rm -rf /var/lib/apt/lists/*
|
|
||||||
RUN curl https://get.docker.com/builds/Linux/x86_64/docker-1.8.3 \
|
|
||||||
-o /usr/local/bin/docker && \
|
|
||||||
chmod +x /usr/local/bin/docker
|
|
||||||
|
|
||||||
# Build Python 2.7.9 from source
RUN set -ex; \
    curl -L https://www.python.org/ftp/python/2.7.9/Python-2.7.9.tgz | tar -xz; \
    cd Python-2.7.9; \
    ./configure --enable-shared; \
    make; \
    make install; \
    cd ..; \
    rm -rf /Python-2.7.9
# Build Python 3.4 from source
RUN set -ex; \
    curl -L https://www.python.org/ftp/python/3.4.3/Python-3.4.3.tgz | tar -xz; \
    cd Python-3.4.3; \
    ./configure --enable-shared; \
    make; \
    make install; \
    cd ..; \
    rm -rf /Python-3.4.3
# Make libpython findable
ENV LD_LIBRARY_PATH /usr/local/lib

# Install setuptools
RUN set -ex; \
    curl -L https://bootstrap.pypa.io/ez_setup.py | python
# Install pip
RUN set -ex; \
    curl -L https://pypi.python.org/packages/source/p/pip/pip-8.1.1.tar.gz | tar -xz; \
    cd pip-8.1.1; \
    python setup.py install; \
    cd ..; \
    rm -rf pip-8.1.1
# Python 3 requires a valid locale
RUN echo "en_US.UTF-8 UTF-8" > /etc/locale.gen && locale-gen
ENV LANG en_US.UTF-8

RUN useradd -d /home/user -m -s /bin/bash user
WORKDIR /code/

RUN pip install tox==2.1.1

ADD requirements.txt /code/
ADD requirements-dev.txt /code/
ADD .pre-commit-config.yaml /code/
ADD setup.py /code/
ADD tox.ini /code/
ADD compose /code/compose/
RUN tox --notest

ADD . /code/
RUN chown -R user /code/

ENTRYPOINT ["/code/.tox/py27/bin/docker-compose"]
FROM alpine:3.4

RUN apk -U add \
    python \
    py-pip

COPY requirements.txt /code/requirements.txt
RUN pip install -r /code/requirements.txt

ADD dist/docker-compose-release.tar.gz /code/docker-compose
RUN pip install --no-deps /code/docker-compose/docker-compose-*

ENTRYPOINT ["/usr/bin/docker-compose"]
LICENSE
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
|
|
||||||
do not modify the License. You may add Your own attribution
|
|
||||||
notices within Derivative Works that You distribute, alongside
|
|
||||||
or as an addendum to the NOTICE text from the Work, provided
|
|
||||||
that such additional attribution notices cannot be construed
|
|
||||||
as modifying the License.
|
|
||||||
|
|
||||||
You may add Your own copyright statement to Your modifications and
|
|
||||||
may provide additional or different license terms and conditions
|
|
||||||
for use, reproduction, or distribution of Your modifications, or
|
|
||||||
for any such Derivative Works as a whole, provided Your use,
|
|
||||||
reproduction, and distribution of the Work otherwise complies with
|
|
||||||
the conditions stated in this License.
|
|
||||||
|
|
||||||
5. Submission of Contributions. Unless You explicitly state otherwise,
|
|
||||||
any Contribution intentionally submitted for inclusion in the Work
|
|
||||||
by You to the Licensor shall be under the terms and conditions of
|
|
||||||
this License, without any additional terms or conditions.
|
|
||||||
Notwithstanding the above, nothing herein shall supersede or modify
|
|
||||||
the terms of any separate license agreement you may have executed
|
|
||||||
with Licensor regarding such Contributions.
|
|
||||||
|
|
||||||
6. Trademarks. This License does not grant permission to use the trade
|
|
||||||
names, trademarks, service marks, or product names of the Licensor,
|
|
||||||
except as required for reasonable and customary use in describing the
|
|
||||||
origin of the Work and reproducing the content of the NOTICE file.
|
|
||||||
|
|
||||||
7. Disclaimer of Warranty. Unless required by applicable law or
|
|
||||||
agreed to in writing, Licensor provides the Work (and each
|
|
||||||
Contributor provides its Contributions) on an "AS IS" BASIS,
|
|
||||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
|
|
||||||
implied, including, without limitation, any warranties or conditions
|
|
||||||
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
|
|
||||||
PARTICULAR PURPOSE. You are solely responsible for determining the
|
|
||||||
appropriateness of using or redistributing the Work and assume any
|
|
||||||
risks associated with Your exercise of permissions under this License.
|
|
||||||
|
|
||||||
8. Limitation of Liability. In no event and under no legal theory,
|
|
||||||
whether in tort (including negligence), contract, or otherwise,
|
|
||||||
unless required by applicable law (such as deliberate and grossly
|
|
||||||
negligent acts) or agreed to in writing, shall any Contributor be
|
|
||||||
liable to You for damages, including any direct, indirect, special,
|
|
||||||
incidental, or consequential damages of any character arising as a
|
|
||||||
result of this License or out of the use or inability to use the
|
|
||||||
Work (including but not limited to damages for loss of goodwill,
|
|
||||||
work stoppage, computer failure or malfunction, or any and all
|
|
||||||
other commercial damages or losses), even if such Contributor
|
|
||||||
has been advised of the possibility of such damages.
|
|
||||||
|
|
||||||
9. Accepting Warranty or Additional Liability. While redistributing
|
|
||||||
the Work or Derivative Works thereof, You may choose to offer,
|
|
||||||
and charge a fee for, acceptance of support, warranty, indemnity,
|
|
||||||
or other liability obligations and/or rights consistent with this
|
|
||||||
License. However, in accepting such obligations, You may act only
|
|
||||||
on Your own behalf and on Your sole responsibility, not on behalf
|
|
||||||
of any other Contributor, and only if You agree to indemnify,
|
|
||||||
defend, and hold each Contributor harmless for any liability
|
|
||||||
incurred by, or claims asserted against, such Contributor by reason
|
|
||||||
of your accepting any such warranty or additional liability.
|
|
||||||
|
|
||||||
END OF TERMS AND CONDITIONS
|
|
||||||
|
|
||||||
Copyright 2014 Docker, Inc.
|
|
||||||
|
|
||||||
Licensed under the Apache License, Version 2.0 (the "License");
|
|
||||||
you may not use this file except in compliance with the License.
|
|
||||||
You may obtain a copy of the License at
|
|
||||||
|
|
||||||
http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
|
|
||||||
Unless required by applicable law or agreed to in writing, software
|
|
||||||
distributed under the License is distributed on an "AS IS" BASIS,
|
|
||||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
||||||
See the License for the specific language governing permissions and
|
|
||||||
limitations under the License.
MAINTAINERS
@@ -1,46 +0,0 @@
# Compose maintainers file
#
# This file describes who runs the docker/compose project and how.
# This is a living document - if you see something out of date or missing, speak up!
#
# It is structured to be consumable by both humans and programs.
# To extract its contents programmatically, use any TOML-compliant parser.
#
# This file is compiled into the MAINTAINERS file in docker/opensource.
#
[Org]
    [Org."Core maintainers"]
    people = [
        "aanand",
        "bfirsh",
        "dnephin",
        "mnowster",
    ]

[people]

# A reference list of all people associated with the project.
# All other sections should refer to people by their canonical key
# in the people section.

# ADD YOURSELF HERE IN ALPHABETICAL ORDER

    [people.aanand]
    Name = "Aanand Prasad"
    Email = "aanand.prasad@gmail.com"
    GitHub = "aanand"

    [people.bfirsh]
    Name = "Ben Firshman"
    Email = "ben@firshman.co.uk"
    GitHub = "bfirsh"

    [people.dnephin]
    Name = "Daniel Nephin"
    Email = "dnephin@gmail.com"
    GitHub = "dnephin"

    [people.mnowster]
    Name = "Mazz Mosley"
    Email = "mazz@houseofmnowster.com"
    GitHub = "mnowster"
MANIFEST.in
@@ -1,15 +0,0 @@
include Dockerfile
include LICENSE
include requirements.txt
include requirements-dev.txt
include tox.ini
include *.md
exclude README.md
include README.rst
include compose/config/*.json
include compose/GITSHA
recursive-include contrib/completion *
recursive-include tests *
global-exclude *.pyc
global-exclude *.pyo
global-exclude *.un~
README.md
@@ -1,65 +0,0 @@
Docker Compose
==============
![Docker Compose](logo.png?raw=true "Docker Compose Logo")

Compose is a tool for defining and running multi-container Docker applications.
With Compose, you use a Compose file to configure your application's services.
Then, using a single command, you create and start all the services
from your configuration. To learn more about all the features of Compose,
see [the list of features](https://github.com/docker/compose/blob/release/docs/overview.md#features).

Compose is great for development, testing, and staging environments, as well as
CI workflows. You can learn more about each case in
[Common Use Cases](https://github.com/docker/compose/blob/release/docs/overview.md#common-use-cases).

Using Compose is basically a three-step process.

1. Define your app's environment with a `Dockerfile` so it can be
reproduced anywhere.
2. Define the services that make up your app in `docker-compose.yml` so
they can be run together in an isolated environment.
3. Lastly, run `docker-compose up` and Compose will start and run your entire app.

A `docker-compose.yml` looks like this:

    version: '2'

    services:
      web:
        build: .
        ports:
         - "5000:5000"
        volumes:
         - .:/code
      redis:
        image: redis

For more information about the Compose file, see the
[Compose file reference](https://github.com/docker/compose/blob/release/docs/compose-file.md).

Compose has commands for managing the whole lifecycle of your application:

* Start, stop and rebuild services
* View the status of running services
* Stream the log output of running services
* Run a one-off command on a service

Installation and documentation
------------------------------

- Full documentation is available on [Docker's website](https://docs.docker.com/compose/).
- If you have any questions, you can talk in real-time with other developers in the #docker-compose IRC channel on Freenode. [Click here to join using IRCCloud.](https://www.irccloud.com/invite?hostname=irc.freenode.net&channel=%23docker-compose)
- The code repository for Compose is on [GitHub](https://github.com/docker/compose).
- If you find any problems, please fill out an [issue](https://github.com/docker/compose/issues/new).

Contributing
------------

[Build Status](http://jenkins.dockerproject.org/job/Compose%20Master/)

Want to help build Compose? Check out our [contributing documentation](https://github.com/docker/compose/blob/master/CONTRIBUTING.md).

Releasing
---------

Releases are built by maintainers, following an outline of the [release process](https://github.com/docker/compose/blob/master/project/RELEASE-PROCESS.md).
ROADMAP.md
@@ -1,32 +0,0 @@
# Roadmap

## An even better tool for development environments

Compose is a great tool for development environments, but it could be even better. For example:

- It should be possible to define hostnames for containers which work from the host machine, e.g. "mywebcontainer.local". This is needed by apps comprising multiple web services which generate links to one another (e.g. a frontend website and a separate admin webapp)

## More than just development environments

Compose currently works really well in development, but we want to make the Compose file format better for test, staging, and production environments. To support these use cases, there will need to be improvements to the file format, improvements to the command-line tool, integrations with other tools, and perhaps new tools altogether.

Some specific things we are considering:

- Compose currently will attempt to get your application into the correct state when running `up`, but it has a number of shortcomings:
  - It should roll back to a known good state if it fails.
  - It should allow a user to check the actions it is about to perform before running them.
- It should be possible to partially modify the config file for different environments (dev/test/staging/prod), passing in e.g. custom ports, volume mount paths, or volume drivers. ([#1377](https://github.com/docker/compose/issues/1377))
- Compose should recommend a technique for zero-downtime deploys.
- It should be possible to continuously attempt to keep an application in the correct state, instead of just performing `up` a single time.

## Integration with Swarm

Compose should integrate really well with Swarm so you can take an application you've developed on your laptop and run it on a Swarm cluster.

The current state of integration is documented in [SWARM.md](SWARM.md).

## Applications spanning multiple teams

Compose works well for applications that are in a single repository and depend on services that are hosted on Docker Hub. If your application depends on another application within your organisation, Compose doesn't work as well.

There are several ideas about how this could work, such as [including external files](https://github.com/docker/fig/issues/318).
appveyor.yml
@@ -1,24 +0,0 @@

version: '{branch}-{build}'

install:
  - "SET PATH=C:\\Python27-x64;C:\\Python27-x64\\Scripts;%PATH%"
  - "python --version"
  - "pip install tox==2.1.1 virtualenv==13.1.2"

# Build the binary after tests
build: false

test_script:
  - "tox -e py27,py34 -- tests/unit"
  - ps: ".\\script\\build\\windows.ps1"

artifacts:
  - path: .\dist\docker-compose-Windows-x86_64.exe
    name: "Compose Windows binary"

deploy:
  - provider: Environment
    name: master-builds
    on:
      branch: master
@@ -1,3 +0,0 @@
#!/usr/bin/env python
from compose.cli.main import main
main()
@@ -1,4 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

__version__ = '1.8.0'
@@ -1,6 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

from compose.cli.main import main

main()
@@ -1,257 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import json
import logging

import six
from docker.utils import split_command
from docker.utils.ports import split_port

from .cli.errors import UserError
from .config.serialize import denormalize_config
from .network import get_network_defs_for_service
from .service import format_environment
from .service import NoSuchImageError
from .service import parse_repository_tag


log = logging.getLogger(__name__)


SERVICE_KEYS = {
    'working_dir': 'WorkingDir',
    'user': 'User',
    'labels': 'Labels',
}

IGNORED_KEYS = {'build'}

SUPPORTED_KEYS = {
    'image',
    'ports',
    'expose',
    'networks',
    'command',
    'environment',
    'entrypoint',
} | set(SERVICE_KEYS)

VERSION = '0.1'


class NeedsPush(Exception):
    def __init__(self, image_name):
        self.image_name = image_name


class NeedsPull(Exception):
    def __init__(self, image_name):
        self.image_name = image_name


class MissingDigests(Exception):
    def __init__(self, needs_push, needs_pull):
        self.needs_push = needs_push
        self.needs_pull = needs_pull


def serialize_bundle(config, image_digests):
    return json.dumps(to_bundle(config, image_digests), indent=2, sort_keys=True)


def get_image_digests(project, allow_push=False):
    digests = {}
    needs_push = set()
    needs_pull = set()

    for service in project.services:
        try:
            digests[service.name] = get_image_digest(
                service,
                allow_push=allow_push,
            )
        except NeedsPush as e:
            needs_push.add(e.image_name)
        except NeedsPull as e:
            needs_pull.add(e.image_name)

    if needs_push or needs_pull:
        raise MissingDigests(needs_push, needs_pull)

    return digests


def get_image_digest(service, allow_push=False):
    if 'image' not in service.options:
        raise UserError(
            "Service '{s.name}' doesn't define an image tag. An image name is "
            "required to generate a proper image digest for the bundle. Specify "
            "an image repo and tag with the 'image' option.".format(s=service))

    _, _, separator = parse_repository_tag(service.options['image'])
    # Compose file already uses a digest, no lookup required
    if separator == '@':
        return service.options['image']

    try:
        image = service.image()
    except NoSuchImageError:
        action = 'build' if 'build' in service.options else 'pull'
        raise UserError(
            "Image not found for service '{service}'. "
            "You might need to run `docker-compose {action} {service}`."
            .format(service=service.name, action=action))

    if image['RepoDigests']:
        # TODO: pick a digest based on the image tag if there are multiple
        # digests
        return image['RepoDigests'][0]

    if 'build' not in service.options:
        raise NeedsPull(service.image_name)

    if not allow_push:
        raise NeedsPush(service.image_name)

    return push_image(service)


def push_image(service):
    try:
        digest = service.push()
    except:
        log.error(
            "Failed to push image for service '{s.name}'. Please use an "
            "image tag that can be pushed to a Docker "
            "registry.".format(s=service))
        raise

    if not digest:
        raise ValueError("Failed to get digest for %s" % service.name)

    repo, _, _ = parse_repository_tag(service.options['image'])
    identifier = '{repo}@{digest}'.format(repo=repo, digest=digest)

    # only do this if RepoDigests isn't already populated
    image = service.image()
    if not image['RepoDigests']:
        # Pull by digest so that image['RepoDigests'] is populated for next time
        # and we don't have to pull/push again
        service.client.pull(identifier)
        log.info("Stored digest for {}".format(service.image_name))

    return identifier


def to_bundle(config, image_digests):
    if config.networks:
        log.warn("Unsupported top level key 'networks' - ignoring")

    if config.volumes:
        log.warn("Unsupported top level key 'volumes' - ignoring")

    config = denormalize_config(config)

    return {
        'Version': VERSION,
        'Services': {
            name: convert_service_to_bundle(
                name,
                service_dict,
                image_digests[name],
            )
            for name, service_dict in config['services'].items()
        },
    }


def convert_service_to_bundle(name, service_dict, image_digest):
    container_config = {'Image': image_digest}

    for key, value in service_dict.items():
        if key in IGNORED_KEYS:
            continue

        if key not in SUPPORTED_KEYS:
            log.warn("Unsupported key '{}' in services.{} - ignoring".format(key, name))
            continue

        if key == 'environment':
            container_config['Env'] = format_environment({
                envkey: envvalue for envkey, envvalue in value.items()
                if envvalue
            })
            continue

        if key in SERVICE_KEYS:
            container_config[SERVICE_KEYS[key]] = value
            continue

    set_command_and_args(
        container_config,
        service_dict.get('entrypoint', []),
        service_dict.get('command', []))
    container_config['Networks'] = make_service_networks(name, service_dict)

    ports = make_port_specs(service_dict)
    if ports:
        container_config['Ports'] = ports

    return container_config


# See https://github.com/docker/swarmkit/blob//agent/exec/container/container.go#L95
def set_command_and_args(config, entrypoint, command):
    if isinstance(entrypoint, six.string_types):
        entrypoint = split_command(entrypoint)
    if isinstance(command, six.string_types):
        command = split_command(command)

    if entrypoint:
        config['Command'] = entrypoint + command
        return

    if command:
        config['Args'] = command


def make_service_networks(name, service_dict):
    networks = []

    for network_name, network_def in get_network_defs_for_service(service_dict).items():
        for key in network_def.keys():
            log.warn(
                "Unsupported key '{}' in services.{}.networks.{} - ignoring"
                .format(key, name, network_name))

        networks.append(network_name)

    return networks


def make_port_specs(service_dict):
    ports = []

    internal_ports = [
        internal_port
        for port_def in service_dict.get('ports', [])
        for internal_port in split_port(port_def)[0]
    ]

    internal_ports += service_dict.get('expose', [])

    for internal_port in internal_ports:
        spec = make_port_spec(internal_port)
        if spec not in ports:
            ports.append(spec)

    return ports


def make_port_spec(value):
    components = six.text_type(value).partition('/')
    return {
        'Protocol': components[2] or 'tcp',
        'Port': int(components[0]),
    }
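The deleted `make_port_spec` helper above partitions a `PORT[/PROTOCOL]` value into a bundle port entry. A standalone sketch of the same parsing, reimplemented without the `six` dependency purely for illustration:

```python
def make_port_spec(value):
    # Split "PORT[/PROTOCOL]" on the first "/"; protocol defaults to tcp.
    port, _, protocol = str(value).partition('/')
    return {'Protocol': protocol or 'tcp', 'Port': int(port)}

print(make_port_spec('5000/udp'))  # {'Protocol': 'udp', 'Port': 5000}
print(make_port_spec(8080))        # {'Protocol': 'tcp', 'Port': 8080}
```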
@@ -1,43 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals
NAMES = [
    'grey',
    'red',
    'green',
    'yellow',
    'blue',
    'magenta',
    'cyan',
    'white'
]


def get_pairs():
    for i, name in enumerate(NAMES):
        yield(name, str(30 + i))
        yield('intense_' + name, str(30 + i) + ';1')


def ansi(code):
    return '\033[{0}m'.format(code)


def ansi_color(code, s):
    return '{0}{1}{2}'.format(ansi(code), s, ansi(0))


def make_color_fn(code):
    return lambda s: ansi_color(code, s)


for (name, code) in get_pairs():
    globals()[name] = make_color_fn(code)


def rainbow():
    cs = ['cyan', 'yellow', 'green', 'magenta', 'red', 'blue',
          'intense_cyan', 'intense_yellow', 'intense_green',
          'intense_magenta', 'intense_red', 'intense_blue']

    for c in cs:
        yield globals()[c]
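The colour helpers above build ANSI SGR escape sequences, with `NAMES` offsets starting at code 30 (so `red`, at index 1, is code 31). A minimal standalone sketch of the same wrapping:

```python
def ansi(code):
    # Emit an ANSI escape sequence for the given SGR code.
    return '\033[{0}m'.format(code)

def ansi_color(code, s):
    # Wrap s in the colour code, then reset with SGR 0.
    return '{0}{1}{2}'.format(ansi(code), s, ansi(0))

print(repr(ansi_color('31', 'error')))  # '\x1b[31merror\x1b[0m'
```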
@@ -1,130 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging
import os
import re
import ssl

import six

from . import verbose_proxy
from .. import config
from ..config.environment import Environment
from ..const import API_VERSIONS
from ..project import Project
from .docker_client import docker_client
from .docker_client import tls_config_from_options
from .utils import get_version_info

log = logging.getLogger(__name__)


def project_from_options(project_dir, options):
    environment = Environment.from_env_file(project_dir)
    host = options.get('--host')
    if host is not None:
        host = host.lstrip('=')
    return get_project(
        project_dir,
        get_config_path_from_options(project_dir, options, environment),
        project_name=options.get('--project-name'),
        verbose=options.get('--verbose'),
        host=host,
        tls_config=tls_config_from_options(options),
        environment=environment
    )


def get_config_from_options(base_dir, options):
    environment = Environment.from_env_file(base_dir)
    config_path = get_config_path_from_options(
        base_dir, options, environment
    )
    return config.load(
        config.find(base_dir, config_path, environment)
    )


def get_config_path_from_options(base_dir, options, environment):
    file_option = options.get('--file')
    if file_option:
        return file_option

    config_files = environment.get('COMPOSE_FILE')
    if config_files:
        return config_files.split(os.pathsep)
    return None


def get_tls_version(environment):
    compose_tls_version = environment.get('COMPOSE_TLS_VERSION', None)
    if not compose_tls_version:
        return None

    tls_attr_name = "PROTOCOL_{}".format(compose_tls_version)
    if not hasattr(ssl, tls_attr_name):
        log.warn(
            'The "{}" protocol is unavailable. You may need to update your '
            'version of Python or OpenSSL. Falling back to TLSv1 (default).'
            .format(compose_tls_version)
        )
        return None

    return getattr(ssl, tls_attr_name)


def get_client(environment, verbose=False, version=None, tls_config=None, host=None,
               tls_version=None):

    client = docker_client(
        version=version, tls_config=tls_config, host=host,
        environment=environment, tls_version=get_tls_version(environment)
    )
    if verbose:
        version_info = six.iteritems(client.version())
        log.info(get_version_info('full'))
        log.info("Docker base_url: %s", client.base_url)
        log.info("Docker version: %s",
                 ", ".join("%s=%s" % item for item in version_info))
        return verbose_proxy.VerboseProxy('docker', client)
    return client


def get_project(project_dir, config_path=None, project_name=None, verbose=False,
                host=None, tls_config=None, environment=None):
    if not environment:
        environment = Environment.from_env_file(project_dir)
    config_details = config.find(project_dir, config_path, environment)
    project_name = get_project_name(
        config_details.working_dir, project_name, environment
    )
    config_data = config.load(config_details)

    api_version = environment.get(
        'COMPOSE_API_VERSION',
        API_VERSIONS[config_data.version])

    client = get_client(
        verbose=verbose, version=api_version, tls_config=tls_config,
        host=host, environment=environment
    )

    return Project.from_config(project_name, config_data, client)


def get_project_name(working_dir, project_name=None, environment=None):
    def normalize_name(name):
        return re.sub(r'[^a-z0-9]', '', name.lower())

    if not environment:
        environment = Environment.from_env_file(working_dir)
    project_name = project_name or environment.get('COMPOSE_PROJECT_NAME')
    if project_name:
        return normalize_name(project_name)

    project = os.path.basename(os.path.abspath(working_dir))
    if project:
        return normalize_name(project)

    return 'default'
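`get_project_name` above normalizes the directory name or `COMPOSE_PROJECT_NAME` before use; the inner normalization step can be seen in isolation:

```python
import re

def normalize_name(name):
    # Lowercase, then strip every character that is not a-z or 0-9,
    # as in the deleted get_project_name helper.
    return re.sub(r'[^a-z0-9]', '', name.lower())

print(normalize_name('My_Project-2'))  # myproject2
```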
@@ -1,73 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging

from docker import Client
from docker.errors import TLSParameterError
from docker.tls import TLSConfig
from docker.utils import kwargs_from_env

from ..const import HTTP_TIMEOUT
from .errors import UserError
from .utils import generate_user_agent

log = logging.getLogger(__name__)


def tls_config_from_options(options):
    tls = options.get('--tls', False)
    ca_cert = options.get('--tlscacert')
    cert = options.get('--tlscert')
    key = options.get('--tlskey')
    verify = options.get('--tlsverify')
    skip_hostname_check = options.get('--skip-hostname-check', False)

    advanced_opts = any([ca_cert, cert, key, verify])

    if tls is True and not advanced_opts:
        return True
    elif advanced_opts:  # --tls is a noop
        client_cert = None
        if cert or key:
            client_cert = (cert, key)

        return TLSConfig(
            client_cert=client_cert, verify=verify, ca_cert=ca_cert,
            assert_hostname=False if skip_hostname_check else None
        )

    return None


def docker_client(environment, version=None, tls_config=None, host=None,
                  tls_version=None):
    """
    Returns a docker-py client configured using environment variables
    according to the same logic as the official Docker client.
    """
    try:
        kwargs = kwargs_from_env(environment=environment, ssl_version=tls_version)
    except TLSParameterError:
        raise UserError(
            "TLS configuration is invalid - make sure your DOCKER_TLS_VERIFY "
            "and DOCKER_CERT_PATH are set correctly.\n"
            "You might need to run `eval \"$(docker-machine env default)\"`")

    if host:
        kwargs['base_url'] = host
    if tls_config:
        kwargs['tls'] = tls_config

    if version:
        kwargs['version'] = version

    timeout = environment.get('COMPOSE_HTTP_TIMEOUT')
    if timeout:
        kwargs['timeout'] = int(timeout)
    else:
        kwargs['timeout'] = HTTP_TIMEOUT

    kwargs['user_agent'] = generate_user_agent()

    return Client(**kwargs)
|
|
||||||
|
|
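The precedence in `tls_config_from_options` is easy to misread: any advanced flag wins over a bare `--tls`, which by itself just means "TLS with defaults". A minimal sketch of that decision logic, returning plain tuples instead of a real `TLSConfig` so it runs without docker-py installed (the tuple shape here is an illustration, not the library's API):

```python
def tls_options_summary(options):
    # Sketch of tls_config_from_options' precedence rules:
    # 1. any advanced option (--tlscacert/--tlscert/--tlskey/--tlsverify)
    #    produces a full config, and --tls becomes a no-op;
    # 2. a bare --tls enables TLS with library defaults;
    # 3. otherwise TLS is left unconfigured.
    cert = options.get('--tlscert')
    key = options.get('--tlskey')
    ca_cert = options.get('--tlscacert')
    verify = options.get('--tlsverify')

    if any([ca_cert, cert, key, verify]):
        client_cert = (cert, key) if (cert or key) else None
        return ('config', client_cert, verify, ca_cert)
    if options.get('--tls', False) is True:
        return ('default-tls',)
    return None
```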
@@ -1,59 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

from inspect import getdoc

from docopt import docopt
from docopt import DocoptExit


def docopt_full_help(docstring, *args, **kwargs):
    try:
        return docopt(docstring, *args, **kwargs)
    except DocoptExit:
        raise SystemExit(docstring)


class DocoptDispatcher(object):

    def __init__(self, command_class, options):
        self.command_class = command_class
        self.options = options

    def parse(self, argv):
        command_help = getdoc(self.command_class)
        options = docopt_full_help(command_help, argv, **self.options)
        command = options['COMMAND']

        if command is None:
            raise SystemExit(command_help)

        handler = get_handler(self.command_class, command)
        docstring = getdoc(handler)

        if docstring is None:
            raise NoSuchCommand(command, self)

        command_options = docopt_full_help(docstring, options['ARGS'], options_first=True)
        return options, handler, command_options


def get_handler(command_class, command):
    command = command.replace('-', '_')
    # we certainly want to have "exec" command, since that's what docker client has
    # but in python exec is a keyword
    if command == "exec":
        command = "exec_command"

    if not hasattr(command_class, command):
        raise NoSuchCommand(command, command_class)

    return getattr(command_class, command)


class NoSuchCommand(Exception):
    def __init__(self, command, supercommand):
        super(NoSuchCommand, self).__init__("No such command: %s" % command)

        self.command = command
        self.supercommand = supercommand
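The handler lookup above maps CLI subcommand names onto method names. A self-contained sketch of the same rule, with a hypothetical `FakeCommands` class standing in for the real `TopLevelCommand`:

```python
class FakeCommands(object):
    # Hypothetical stand-in for the dispatcher's command class:
    # method names double as CLI subcommand names.
    def up(self):
        return 'up'

    def exec_command(self):
        return 'exec'


def get_handler(command_class, command):
    # Same lookup rule as the dispatcher: hyphens become underscores,
    # and "exec" maps to exec_command because exec was a reserved word
    # in Python 2.
    command = command.replace('-', '_')
    if command == 'exec':
        command = 'exec_command'
    if not hasattr(command_class, command):
        raise LookupError('No such command: %s' % command)
    return getattr(command_class, command)


handler = get_handler(FakeCommands, 'exec')
print(handler.__name__)  # -> exec_command
```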
@@ -1,139 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import contextlib
import logging
import socket
from textwrap import dedent

from docker.errors import APIError
from requests.exceptions import ConnectionError as RequestsConnectionError
from requests.exceptions import ReadTimeout
from requests.exceptions import SSLError
from requests.packages.urllib3.exceptions import ReadTimeoutError

from ..const import API_VERSION_TO_ENGINE_VERSION
from .utils import call_silently
from .utils import is_docker_for_mac_installed
from .utils import is_mac
from .utils import is_ubuntu


log = logging.getLogger(__name__)


class UserError(Exception):

    def __init__(self, msg):
        self.msg = dedent(msg).strip()

    def __unicode__(self):
        return self.msg

    __str__ = __unicode__


class ConnectionError(Exception):
    pass


@contextlib.contextmanager
def handle_connection_errors(client):
    try:
        yield
    except SSLError as e:
        log.error('SSL error: %s' % e)
        raise ConnectionError()
    except RequestsConnectionError as e:
        if e.args and isinstance(e.args[0], ReadTimeoutError):
            log_timeout_error(client.timeout)
            raise ConnectionError()
        exit_with_error(get_conn_error_message(client.base_url))
    except APIError as e:
        log_api_error(e, client.api_version)
        raise ConnectionError()
    except (ReadTimeout, socket.timeout):
        # Pass the client's configured timeout so the hint can report
        # the current value.
        log_timeout_error(client.timeout)
        raise ConnectionError()


def log_timeout_error(timeout):
    log.error(
        "An HTTP request took too long to complete. Retry with --verbose to "
        "obtain debug information.\n"
        "If you encounter this issue regularly because of slow network "
        "conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher "
        "value (current value: %s)." % timeout)


def log_api_error(e, client_version):
    if b'client is newer than server' not in e.explanation:
        log.error(e.explanation)
        return

    version = API_VERSION_TO_ENGINE_VERSION.get(client_version)
    if not version:
        # They've set a custom API version
        log.error(e.explanation)
        return

    log.error(
        "The Docker Engine version is less than the minimum required by "
        "Compose. Your current project requires a Docker Engine of "
        "version {version} or greater.".format(version=version))


def exit_with_error(msg):
    log.error(dedent(msg).strip())
    raise ConnectionError()


def get_conn_error_message(url):
    if call_silently(['which', 'docker']) != 0:
        if is_mac():
            return docker_not_found_mac
        if is_ubuntu():
            return docker_not_found_ubuntu
        return docker_not_found_generic
    if is_docker_for_mac_installed():
        return conn_error_docker_for_mac
    if call_silently(['which', 'docker-machine']) == 0:
        return conn_error_docker_machine
    return conn_error_generic.format(url=url)


docker_not_found_mac = """
    Couldn't connect to Docker daemon. You might need to install Docker:

    https://docs.docker.com/engine/installation/mac/
"""


docker_not_found_ubuntu = """
    Couldn't connect to Docker daemon. You might need to install Docker:

    https://docs.docker.com/engine/installation/ubuntulinux/
"""


docker_not_found_generic = """
    Couldn't connect to Docker daemon. You might need to install Docker:

    https://docs.docker.com/engine/installation/
"""


conn_error_docker_machine = """
    Couldn't connect to Docker daemon - you might need to run `docker-machine start default`.
"""

conn_error_docker_for_mac = """
    Couldn't connect to Docker daemon. You might need to start Docker for Mac.
"""


conn_error_generic = """
    Couldn't connect to Docker daemon at {url} - is it running?

    If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
"""
@@ -1,48 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging
import os

import texttable

from compose.cli import colors


def get_tty_width():
    tty_size = os.popen('stty size', 'r').read().split()
    if len(tty_size) != 2:
        return 0
    _, width = tty_size
    return int(width)


class Formatter(object):
    """Format tabular data for printing."""
    def table(self, headers, rows):
        table = texttable.Texttable(max_width=get_tty_width())
        table.set_cols_dtype(['t' for h in headers])
        table.add_rows([headers] + rows)
        table.set_deco(table.HEADER)
        table.set_chars(['-', '|', '+', '-'])

        return table.draw()


class ConsoleWarningFormatter(logging.Formatter):
    """A logging.Formatter which prints WARNING and ERROR messages with
    a prefix of the log level colored appropriately for the log level.
    """

    def get_level_message(self, record):
        separator = ': '
        if record.levelno == logging.WARNING:
            return colors.yellow(record.levelname) + separator
        if record.levelno == logging.ERROR:
            return colors.red(record.levelname) + separator

        return ''

    def format(self, record):
        message = super(ConsoleWarningFormatter, self).format(record)
        return self.get_level_message(record) + message
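`ConsoleWarningFormatter` only prefixes the colored level name for WARNING and above; INFO lines pass through untouched. A stdlib-only sketch of the same idea, with raw ANSI escapes standing in for the `compose.cli.colors` helpers (the escape codes are assumptions, not what that module emits verbatim):

```python
import logging

# Assumed ANSI escape codes, replacing compose.cli.colors.yellow/red.
YELLOW, RED, RESET = '\033[33m', '\033[31m', '\033[0m'


class ConsoleWarningFormatter(logging.Formatter):
    """Prefix WARNING/ERROR records with a colored level name;
    anything below WARNING gets no prefix at all."""

    def get_level_message(self, record):
        if record.levelno == logging.WARNING:
            return YELLOW + record.levelname + RESET + ': '
        if record.levelno == logging.ERROR:
            return RED + record.levelname + RESET + ': '
        return ''

    def format(self, record):
        message = super(ConsoleWarningFormatter, self).format(record)
        return self.get_level_message(record) + message


fmt = ConsoleWarningFormatter()
warn_rec = logging.LogRecord('x', logging.WARNING, 'x.py', 1, 'disk low', None, None)
info_rec = logging.LogRecord('x', logging.INFO, 'x.py', 1, 'all good', None, None)
warn_line = fmt.format(warn_rec)
info_line = fmt.format(info_rec)
```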
@@ -1,230 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import sys
from collections import namedtuple
from itertools import cycle
from threading import Thread

from six.moves import _thread as thread
from six.moves.queue import Empty
from six.moves.queue import Queue

from . import colors
from compose import utils
from compose.cli.signals import ShutdownException
from compose.utils import split_buffer


class LogPresenter(object):

    def __init__(self, prefix_width, color_func):
        self.prefix_width = prefix_width
        self.color_func = color_func

    def present(self, container, line):
        prefix = container.name_without_project.ljust(self.prefix_width)
        return '{prefix} {line}'.format(
            prefix=self.color_func(prefix + ' |'),
            line=line)


def build_log_presenters(service_names, monochrome):
    """Return an iterable of functions.

    Each function can be used to format the logs output of a container.
    """
    prefix_width = max_name_width(service_names)

    def no_color(text):
        return text

    for color_func in cycle([no_color] if monochrome else colors.rainbow()):
        yield LogPresenter(prefix_width, color_func)


def max_name_width(service_names, max_index_width=3):
    """Calculate the maximum width of container names so we can make the log
    prefixes line up like so:

    db_1  | Listening
    web_1 | Listening
    """
    return max(len(name) for name in service_names) + max_index_width


class LogPrinter(object):
    """Print logs from many containers to a single output stream."""

    def __init__(self,
                 containers,
                 presenters,
                 event_stream,
                 output=sys.stdout,
                 cascade_stop=False,
                 log_args=None):
        self.containers = containers
        self.presenters = presenters
        self.event_stream = event_stream
        self.output = utils.get_output_stream(output)
        self.cascade_stop = cascade_stop
        self.log_args = log_args or {}

    def run(self):
        if not self.containers:
            return

        queue = Queue()
        thread_args = queue, self.log_args
        thread_map = build_thread_map(self.containers, self.presenters, thread_args)
        start_producer_thread((
            thread_map,
            self.event_stream,
            self.presenters,
            thread_args))

        for line in consume_queue(queue, self.cascade_stop):
            remove_stopped_threads(thread_map)

            if not line:
                if not thread_map:
                    # There are no running containers left to tail, so exit
                    return
                # We got an empty line because of a timeout, but there are still
                # active containers to tail, so continue
                continue

            self.output.write(line)
            self.output.flush()


def remove_stopped_threads(thread_map):
    for container_id, tailer_thread in list(thread_map.items()):
        if not tailer_thread.is_alive():
            thread_map.pop(container_id, None)


def build_thread(container, presenter, queue, log_args):
    tailer = Thread(
        target=tail_container_logs,
        args=(container, presenter, queue, log_args))
    tailer.daemon = True
    tailer.start()
    return tailer


def build_thread_map(initial_containers, presenters, thread_args):
    return {
        container.id: build_thread(container, next(presenters), *thread_args)
        for container in initial_containers
    }


class QueueItem(namedtuple('_QueueItem', 'item is_stop exc')):

    @classmethod
    def new(cls, item):
        return cls(item, None, None)

    @classmethod
    def exception(cls, exc):
        return cls(None, None, exc)

    @classmethod
    def stop(cls):
        return cls(None, True, None)


def tail_container_logs(container, presenter, queue, log_args):
    generator = get_log_generator(container)

    try:
        for item in generator(container, log_args):
            queue.put(QueueItem.new(presenter.present(container, item)))
    except Exception as e:
        queue.put(QueueItem.exception(e))
        return

    if log_args.get('follow'):
        queue.put(QueueItem.new(presenter.color_func(wait_on_exit(container))))
    queue.put(QueueItem.stop())


def get_log_generator(container):
    if container.has_api_logs:
        return build_log_generator
    return build_no_log_generator


def build_no_log_generator(container, log_args):
    """Return a generator that prints a warning about logs and waits for
    container to exit.
    """
    yield "WARNING: no logs are available with the '{}' log driver\n".format(
        container.log_driver)


def build_log_generator(container, log_args):
    # if the container doesn't have a log_stream we need to attach to container
    # before log printer starts running
    if container.log_stream is None:
        stream = container.logs(stdout=True, stderr=True, stream=True, **log_args)
    else:
        stream = container.log_stream

    return split_buffer(stream)


def wait_on_exit(container):
    exit_code = container.wait()
    return "%s exited with code %s\n" % (container.name, exit_code)


def start_producer_thread(thread_args):
    producer = Thread(target=watch_events, args=thread_args)
    producer.daemon = True
    producer.start()


def watch_events(thread_map, event_stream, presenters, thread_args):
    for event in event_stream:
        if event['action'] == 'stop':
            thread_map.pop(event['id'], None)

        if event['action'] != 'start':
            continue

        if event['id'] in thread_map:
            if thread_map[event['id']].is_alive():
                continue
            # Container was stopped and started, we need a new thread
            thread_map.pop(event['id'], None)

        thread_map[event['id']] = build_thread(
            event['container'],
            next(presenters),
            *thread_args)


def consume_queue(queue, cascade_stop):
    """Consume the queue by reading lines off of it and yielding them."""
    while True:
        try:
            item = queue.get(timeout=0.1)
        except Empty:
            yield None
            continue
        # See https://github.com/docker/compose/issues/189
        except thread.error:
            raise ShutdownException()

        if item.exc:
            raise item.exc

        if item.is_stop:
            if cascade_stop:
                # End the generator with a plain return: raising StopIteration
                # inside a generator is a RuntimeError on Python 3.7+ (PEP 479).
                return
            else:
                continue

        yield item.item
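The prefix alignment that `max_name_width` and `LogPresenter.present` cooperate on can be sketched without containers or threads; `present` here is a color-free stand-in that takes the name directly instead of a container object:

```python
def max_name_width(service_names, max_index_width=3):
    # Same rule as the log printer: longest service name plus room
    # for the "_1" numeric suffix on container names.
    return max(len(name) for name in service_names) + max_index_width


def present(name, line, prefix_width):
    # Simplified LogPresenter.present: pad the name so every "|" column
    # lines up, skipping the color function.
    return '{0} | {1}'.format(name.ljust(prefix_width), line)


width = max_name_width(['db', 'web'])
lines = [
    present('db_1', 'Listening', width),
    present('web_1', 'Listening', width),
]
print('\n'.join(lines))
```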
1046
compose/cli/main.py
@@ -1,21 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import signal


class ShutdownException(Exception):
    pass


def shutdown(signal, frame):
    raise ShutdownException()


def set_signal_handler(handler):
    signal.signal(signal.SIGINT, handler)
    signal.signal(signal.SIGTERM, handler)


def set_signal_handler_to_shutdown():
    set_signal_handler(shutdown)
@@ -1,124 +0,0 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import unicode_literals

import os
import platform
import ssl
import subprocess
import sys

import docker

import compose

# WindowsError is not defined on non-win32 platforms. Avoid runtime errors by
# defining it as OSError (its parent class) if missing.
try:
    WindowsError
except NameError:
    WindowsError = OSError


def yesno(prompt, default=None):
    """
    Prompt the user for a yes or no.

    Can optionally specify a default value, which will only be
    used if they enter a blank line.

    Unrecognised input (anything other than "y", "n", "yes",
    "no" or "") will return None.
    """
    answer = input(prompt).strip().lower()

    if answer == "y" or answer == "yes":
        return True
    elif answer == "n" or answer == "no":
        return False
    elif answer == "":
        return default
    else:
        return None


def input(prompt):
    """
    Version of input (raw_input in Python 2) which forces a flush of sys.stdout
    to avoid problems where the prompt fails to appear due to line buffering
    """
    sys.stdout.write(prompt)
    sys.stdout.flush()
    return sys.stdin.readline().rstrip('\n')


def call_silently(*args, **kwargs):
    """
    Like subprocess.call(), but redirects stdout and stderr to /dev/null.
    """
    with open(os.devnull, 'w') as shutup:
        try:
            return subprocess.call(*args, stdout=shutup, stderr=shutup, **kwargs)
        except WindowsError:
            # On Windows, subprocess.call() can still raise exceptions. Normalize
            # to POSIXy behaviour by returning a nonzero exit code.
            return 1


def is_mac():
    return platform.system() == 'Darwin'


def is_ubuntu():
    return platform.system() == 'Linux' and platform.linux_distribution()[0] == 'Ubuntu'


def get_version_info(scope):
    versioninfo = 'docker-compose version {}, build {}'.format(
        compose.__version__,
        get_build_version())

    if scope == 'compose':
        return versioninfo
    if scope == 'full':
        return (
            "{}\n"
            "docker-py version: {}\n"
            "{} version: {}\n"
            "OpenSSL version: {}"
        ).format(
            versioninfo,
            docker.version,
            platform.python_implementation(),
            platform.python_version(),
            ssl.OPENSSL_VERSION)

    raise ValueError("{} is not a valid version scope".format(scope))


def get_build_version():
    filename = os.path.join(os.path.dirname(compose.__file__), 'GITSHA')
    if not os.path.exists(filename):
        return 'unknown'

    with open(filename) as fh:
        return fh.read().strip()


def is_docker_for_mac_installed():
    return is_mac() and os.path.isdir('/Applications/Docker.app')


def generate_user_agent():
    parts = [
        "docker-compose/{}".format(compose.__version__),
        "docker-py/{}".format(docker.__version__),
    ]
    try:
        p_system = platform.system()
        p_release = platform.release()
    except IOError:
        pass
    else:
        parts.append("{}/{}".format(p_system, p_release))
    return " ".join(parts)
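The four-way outcome of `yesno` (True / False / default / None) is easiest to see with the answer-parsing rule separated from the prompt and stdin handling; `parse_yesno` below is that pure core, a hypothetical refactoring rather than a function from the file above:

```python
def parse_yesno(answer, default=None):
    # The parsing rule from yesno, with the prompt/readline part stripped
    # out so it can be exercised directly:
    #   "y"/"yes" -> True, "n"/"no" -> False,
    #   blank -> the caller's default, anything else -> None.
    answer = answer.strip().lower()
    if answer in ('y', 'yes'):
        return True
    if answer in ('n', 'no'):
        return False
    if answer == '':
        return default
    return None
```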
@@ -1,60 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import functools
import logging
import pprint
from itertools import chain

import six


def format_call(args, kwargs):
    args = (repr(a) for a in args)
    kwargs = ("{0!s}={1!r}".format(*item) for item in six.iteritems(kwargs))
    return "({0})".format(", ".join(chain(args, kwargs)))


def format_return(result, max_lines):
    if isinstance(result, (list, tuple, set)):
        return "({0} with {1} items)".format(type(result).__name__, len(result))

    if result:
        lines = pprint.pformat(result).split('\n')
        extra = '\n...' if len(lines) > max_lines else ''
        return '\n'.join(lines[:max_lines]) + extra

    return result


class VerboseProxy(object):
    """Proxy all function calls to another class and log method name, arguments
    and return values for each call.
    """

    def __init__(self, obj_name, obj, log_name=None, max_lines=10):
        self.obj_name = obj_name
        self.obj = obj
        self.max_lines = max_lines
        self.log = logging.getLogger(log_name or __name__)

    def __getattr__(self, name):
        attr = getattr(self.obj, name)

        if not six.callable(attr):
            return attr

        return functools.partial(self.proxy_callable, name)

    def proxy_callable(self, call_name, *args, **kwargs):
        self.log.info("%s %s <- %s",
                      self.obj_name,
                      call_name,
                      format_call(args, kwargs))

        result = getattr(self.obj, call_name)(*args, **kwargs)
        self.log.info("%s %s -> %s",
                      self.obj_name,
                      call_name,
                      format_return(result, self.max_lines))
        return result
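The two formatting helpers carry most of VerboseProxy's behavior, and both are pure, so they can be exercised standalone. This sketch uses `dict.items()` in place of `six.iteritems` to drop the six dependency; otherwise the logic matches the helpers above:

```python
import pprint
from itertools import chain


def format_call(args, kwargs):
    # Render a call's arguments the way VerboseProxy logs them:
    # positional args via repr, keyword args as name=repr(value).
    args = (repr(a) for a in args)
    kwargs = ('{0!s}={1!r}'.format(k, v) for k, v in kwargs.items())
    return '({0})'.format(', '.join(chain(args, kwargs)))


def format_return(result, max_lines):
    # Collections are summarized by length; everything else truthy is
    # pretty-printed and truncated to max_lines with a trailing "...".
    if isinstance(result, (list, tuple, set)):
        return '({0} with {1} items)'.format(type(result).__name__, len(result))
    if result:
        lines = pprint.pformat(result).split('\n')
        extra = '\n...' if len(lines) > max_lines else ''
        return '\n'.join(lines[:max_lines]) + extra
    return result


print(format_call((1,), {'a': 2}))       # -> (1, a=2)
print(format_return([1, 2, 3], 10))      # -> (list with 3 items)
```

Summarizing collections by length keeps a verbose log readable when a call returns, say, a thousand containers.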
@@ -1,11 +0,0 @@
# flake8: noqa
from __future__ import absolute_import
from __future__ import unicode_literals

from . import environment
from .config import ConfigurationError
from .config import DOCKER_CONFIG_KEYS
from .config import find
from .config import load
from .config import merge_environment
from .config import parse_environment
@ -1,997 +0,0 @@
|
||||||
from __future__ import absolute_import
|
|
||||||
from __future__ import unicode_literals
|
|
||||||
|
|
||||||
import functools
|
|
||||||
import logging
|
|
||||||
import ntpath
|
|
||||||
import os
|
|
||||||
import string
|
|
||||||
import sys
|
|
||||||
from collections import namedtuple
|
|
||||||
|
|
||||||
import six
|
|
||||||
import yaml
|
|
||||||
from cached_property import cached_property
|
|
||||||
|
|
||||||
from ..const import COMPOSEFILE_V1 as V1
|
|
||||||
from ..const import COMPOSEFILE_V2_0 as V2_0
|
|
||||||
from ..utils import build_string_dict
|
|
||||||
from .environment import env_vars_from_file
|
|
||||||
from .environment import Environment
|
|
||||||
from .environment import split_env
|
|
||||||
from .errors import CircularReference
|
|
||||||
from .errors import ComposeFileNotFound
|
|
||||||
from .errors import ConfigurationError
|
|
||||||
from .errors import VERSION_EXPLANATION
|
|
||||||
from .interpolation import interpolate_environment_variables
|
|
||||||
from .sort_services import get_container_name_from_network_mode
|
|
||||||
from .sort_services import get_service_name_from_network_mode
|
|
||||||
from .sort_services import sort_service_dicts
|
|
||||||
from .types import parse_extra_hosts
|
|
||||||
from .types import parse_restart_spec
|
|
||||||
from .types import ServiceLink
|
|
||||||
from .types import VolumeFromSpec
|
|
||||||
from .types import VolumeSpec
|
|
||||||
from .validation import match_named_volumes
|
|
||||||
from .validation import validate_against_config_schema
|
|
||||||
from .validation import validate_config_section
|
|
||||||
from .validation import validate_depends_on
|
|
||||||
from .validation import validate_extends_file_path
|
|
||||||
from .validation import validate_links
|
|
||||||
from .validation import validate_network_mode
|
|
||||||
from .validation import validate_service_constraints
|
|
||||||
from .validation import validate_top_level_object
|
|
||||||
from .validation import validate_ulimits
|
|
||||||
|
|
||||||
|
|
||||||
DOCKER_CONFIG_KEYS = [
|
|
||||||
'cap_add',
|
|
||||||
'cap_drop',
|
|
||||||
'cgroup_parent',
|
|
||||||
'command',
|
|
||||||
'cpu_quota',
|
|
||||||
'cpu_shares',
|
|
||||||
'cpuset',
|
|
||||||
'detach',
|
|
||||||
'devices',
|
|
||||||
'dns',
|
|
||||||
'dns_search',
|
|
||||||
'domainname',
|
|
||||||
'entrypoint',
|
|
||||||
'env_file',
|
|
||||||
'environment',
|
|
||||||
'extra_hosts',
|
|
||||||
'hostname',
|
|
||||||
'image',
|
|
||||||
'ipc',
|
|
||||||
'labels',
|
|
||||||
'links',
|
|
||||||
'mac_address',
|
|
||||||
'mem_limit',
|
|
||||||
'memswap_limit',
|
|
||||||
'net',
|
|
||||||
'pid',
|
|
||||||
'ports',
|
|
||||||
'privileged',
|
|
||||||
'read_only',
|
|
||||||
'restart',
|
|
||||||
'security_opt',
|
|
||||||
'shm_size',
|
|
||||||
'stdin_open',
|
|
||||||
'stop_signal',
|
|
||||||
'tty',
|
|
||||||
'user',
|
|
||||||
'volume_driver',
|
|
||||||
'volumes',
|
|
||||||
'volumes_from',
|
|
||||||
'working_dir',
|
|
||||||
]
|
|
||||||
|
|
||||||
ALLOWED_KEYS = DOCKER_CONFIG_KEYS + [
|
|
||||||
'build',
|
|
||||||
'container_name',
|
|
||||||
'dockerfile',
|
|
||||||
'log_driver',
|
|
||||||
'log_opt',
|
|
||||||
'logging',
|
|
||||||
'network_mode',
|
|
||||||
]
|
|
||||||
|
|
||||||
DOCKER_VALID_URL_PREFIXES = (
|
|
||||||
'http://',
|
|
||||||
'https://',
|
|
||||||
'git://',
|
|
||||||
'github.com/',
|
|
||||||
'git@',
|
|
||||||
)
|
|
||||||
|
|
||||||
SUPPORTED_FILENAMES = [
|
|
||||||
'docker-compose.yml',
|
|
||||||
'docker-compose.yaml',
|
|
||||||
]
|
|
||||||
|
|
||||||
DEFAULT_OVERRIDE_FILENAME = 'docker-compose.override.yml'
|
|
||||||
|
|
||||||
|
|
||||||
log = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
class ConfigDetails(namedtuple('_ConfigDetails', 'working_dir config_files environment')):
|
|
||||||
"""
|
|
||||||
:param working_dir: the directory to use for relative paths in the config
|
|
||||||
:type working_dir: string
|
|
||||||
:param config_files: list of configuration files to load
|
|
||||||
:type config_files: list of :class:`ConfigFile`
|
|
||||||
:param environment: computed environment values for this project
|
|
||||||
:type environment: :class:`environment.Environment`
|
|
||||||
"""
|
|
||||||
def __new__(cls, working_dir, config_files, environment=None):
    if environment is None:
        environment = Environment.from_env_file(working_dir)
    return super(ConfigDetails, cls).__new__(
        cls, working_dir, config_files, environment
    )


class ConfigFile(namedtuple('_ConfigFile', 'filename config')):
    """
    :param filename: filename of the config file
    :type filename: string
    :param config: contents of the config file
    :type config: :class:`dict`
    """

    @classmethod
    def from_filename(cls, filename):
        return cls(filename, load_yaml(filename))

    @cached_property
    def version(self):
        if 'version' not in self.config:
            return V1

        version = self.config['version']

        if isinstance(version, dict):
            log.warn('Unexpected type for "version" key in "{}". Assuming '
                     '"version" is the name of a service, and defaulting to '
                     'Compose file version 1.'.format(self.filename))
            return V1

        if not isinstance(version, six.string_types):
            raise ConfigurationError(
                'Version in "{}" is invalid - it should be a string.'
                .format(self.filename))

        if version == '1':
            raise ConfigurationError(
                'Version in "{}" is invalid. {}'
                .format(self.filename, VERSION_EXPLANATION))

        if version == '2':
            version = V2_0

        if version != V2_0:
            raise ConfigurationError(
                'Version in "{}" is unsupported. {}'
                .format(self.filename, VERSION_EXPLANATION))

        return version

    def get_service(self, name):
        return self.get_service_dicts()[name]

    def get_service_dicts(self):
        return self.config if self.version == V1 else self.config.get('services', {})

    def get_volumes(self):
        return {} if self.version == V1 else self.config.get('volumes', {})

    def get_networks(self):
        return {} if self.version == V1 else self.config.get('networks', {})


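The `version` property above is a small decision table: a missing key means a legacy v1 file, a dict value means "version" was actually a service name, `'2'` is normalized to the canonical constant, and anything else is rejected. A standalone sketch of the same rules, with plain strings standing in for Compose's internal `V1`/`V2_0` version objects (an assumption for illustration):

```python
# 'V1' / 'V2_0' strings stand in for Compose's internal version constants.
V1, V2_0 = '1', '2.0'


def detect_version(config):
    if 'version' not in config:
        return V1                     # legacy file: services at the top level
    version = config['version']
    if isinstance(version, dict):
        return V1                     # "version" is really a service named "version"
    if not isinstance(version, str):
        raise ValueError('version should be a string')
    if version == '1':
        raise ValueError("version '1' is invalid; omit the version key instead")
    if version == '2':
        version = V2_0                # '2' is normalized to the canonical constant
    if version != V2_0:
        raise ValueError('unsupported version: {}'.format(version))
    return version


print(detect_version({'web': {'image': 'nginx'}}))       # 1
print(detect_version({'version': '2', 'services': {}}))  # 2.0
```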
class Config(namedtuple('_Config', 'version services volumes networks')):
    """
    :param version: configuration version
    :type version: int
    :param services: List of service description dictionaries
    :type services: :class:`list`
    :param volumes: Dictionary mapping volume names to description dictionaries
    :type volumes: :class:`dict`
    :param networks: Dictionary mapping network names to description dictionaries
    :type networks: :class:`dict`
    """


class ServiceConfig(namedtuple('_ServiceConfig', 'working_dir filename name config')):

    @classmethod
    def with_abs_paths(cls, working_dir, filename, name, config):
        if not working_dir:
            raise ValueError("No working_dir for ServiceConfig.")

        return cls(
            os.path.abspath(working_dir),
            os.path.abspath(filename) if filename else filename,
            name,
            config)


def find(base_dir, filenames, environment):
    if filenames == ['-']:
        return ConfigDetails(
            os.getcwd(),
            [ConfigFile(None, yaml.safe_load(sys.stdin))],
            environment
        )

    if filenames:
        filenames = [os.path.join(base_dir, f) for f in filenames]
    else:
        filenames = get_default_config_files(base_dir)

    log.debug("Using configuration files: {}".format(",".join(filenames)))
    return ConfigDetails(
        os.path.dirname(filenames[0]),
        [ConfigFile.from_filename(f) for f in filenames],
        environment
    )


def validate_config_version(config_files):
    main_file = config_files[0]
    validate_top_level_object(main_file)
    for next_file in config_files[1:]:
        validate_top_level_object(next_file)

        if main_file.version != next_file.version:
            raise ConfigurationError(
                "Version mismatch: file {0} specifies version {1} but "
                "extension file {2} uses version {3}".format(
                    main_file.filename,
                    main_file.version,
                    next_file.filename,
                    next_file.version))


def get_default_config_files(base_dir):
    (candidates, path) = find_candidates_in_parent_dirs(SUPPORTED_FILENAMES, base_dir)

    if not candidates:
        raise ComposeFileNotFound(SUPPORTED_FILENAMES)

    winner = candidates[0]

    if len(candidates) > 1:
        log.warn("Found multiple config files with supported names: %s", ", ".join(candidates))
        log.warn("Using %s\n", winner)

    return [os.path.join(path, winner)] + get_default_override_file(path)


def get_default_override_file(path):
    override_filename = os.path.join(path, DEFAULT_OVERRIDE_FILENAME)
    return [override_filename] if os.path.exists(override_filename) else []


def find_candidates_in_parent_dirs(filenames, path):
    """
    Given a directory path to start, looks for filenames in the
    directory, and then each parent directory successively,
    until found.

    Returns tuple (candidates, path).
    """
    candidates = [filename for filename in filenames
                  if os.path.exists(os.path.join(path, filename))]

    if not candidates:
        parent_dir = os.path.join(path, '..')
        if os.path.abspath(parent_dir) != os.path.abspath(path):
            return find_candidates_in_parent_dirs(filenames, parent_dir)

    return (candidates, path)


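The candidate search walks upward one directory at a time, stopping at the filesystem root (where a directory is its own parent). A self-contained copy of the function makes the behaviour easy to try against a temporary tree:

```python
import os
import tempfile


def find_candidates_in_parent_dirs(filenames, path):
    # Collect matching filenames in `path`; if none match, recurse into the
    # parent until the root is reached (parent resolves to the same directory).
    candidates = [f for f in filenames if os.path.exists(os.path.join(path, f))]
    if not candidates:
        parent_dir = os.path.join(path, '..')
        if os.path.abspath(parent_dir) != os.path.abspath(path):
            return find_candidates_in_parent_dirs(filenames, parent_dir)
    return (candidates, path)


# Create root/docker-compose.yml and start the search two levels down.
root = tempfile.mkdtemp()
nested = os.path.join(root, 'a', 'b')
os.makedirs(nested)
open(os.path.join(root, 'docker-compose.yml'), 'w').close()

candidates, found_in = find_candidates_in_parent_dirs(['docker-compose.yml'], nested)
print(candidates)                                            # ['docker-compose.yml']
print(os.path.abspath(found_in) == os.path.abspath(root))    # True
```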
def load(config_details):
    """Load the configuration from a working directory and a list of
    configuration files. Files are loaded in order, and merged on top
    of each other to create the final configuration.

    Return a fully interpolated, extended and validated configuration.
    """
    validate_config_version(config_details.config_files)

    processed_files = [
        process_config_file(config_file, config_details.environment)
        for config_file in config_details.config_files
    ]
    config_details = config_details._replace(config_files=processed_files)

    main_file = config_details.config_files[0]
    volumes = load_mapping(
        config_details.config_files, 'get_volumes', 'Volume'
    )
    networks = load_mapping(
        config_details.config_files, 'get_networks', 'Network'
    )
    service_dicts = load_services(config_details, main_file)

    if main_file.version != V1:
        for service_dict in service_dicts:
            match_named_volumes(service_dict, volumes)

    return Config(main_file.version, service_dicts, volumes, networks)


def load_mapping(config_files, get_func, entity_type):
    mapping = {}

    for config_file in config_files:
        for name, config in getattr(config_file, get_func)().items():
            mapping[name] = config or {}
            if not config:
                continue

            external = config.get('external')
            if external:
                if len(config.keys()) > 1:
                    raise ConfigurationError(
                        '{} {} declared as external but specifies'
                        ' additional attributes ({}). '.format(
                            entity_type,
                            name,
                            ', '.join([k for k in config.keys() if k != 'external'])
                        )
                    )
                if isinstance(external, dict):
                    config['external_name'] = external.get('name')
                else:
                    config['external_name'] = name

                mapping[name] = config

            if 'driver_opts' in config:
                config['driver_opts'] = build_string_dict(
                    config['driver_opts']
                )

    return mapping


def load_services(config_details, config_file):
    def build_service(service_name, service_dict, service_names):
        service_config = ServiceConfig.with_abs_paths(
            config_details.working_dir,
            config_file.filename,
            service_name,
            service_dict)
        resolver = ServiceExtendsResolver(
            service_config, config_file, environment=config_details.environment
        )
        service_dict = process_service(resolver.run())

        service_config = service_config._replace(config=service_dict)
        validate_service(service_config, service_names, config_file.version)
        service_dict = finalize_service(
            service_config,
            service_names,
            config_file.version,
            config_details.environment)
        return service_dict

    def build_services(service_config):
        service_names = service_config.keys()
        return sort_service_dicts([
            build_service(name, service_dict, service_names)
            for name, service_dict in service_config.items()
        ])

    def merge_services(base, override):
        all_service_names = set(base) | set(override)
        return {
            name: merge_service_dicts_from_files(
                base.get(name, {}),
                override.get(name, {}),
                config_file.version)
            for name in all_service_names
        }

    service_configs = [
        file.get_service_dicts() for file in config_details.config_files
    ]

    service_config = service_configs[0]
    for next_config in service_configs[1:]:
        service_config = merge_services(service_config, next_config)

    return build_services(service_config)


def interpolate_config_section(filename, config, section, environment):
    validate_config_section(filename, config, section)
    return interpolate_environment_variables(config, section, environment)


def process_config_file(config_file, environment, service_name=None):
    services = interpolate_config_section(
        config_file.filename,
        config_file.get_service_dicts(),
        'service',
        environment,)

    if config_file.version == V2_0:
        processed_config = dict(config_file.config)
        processed_config['services'] = services
        processed_config['volumes'] = interpolate_config_section(
            config_file.filename,
            config_file.get_volumes(),
            'volume',
            environment,)
        processed_config['networks'] = interpolate_config_section(
            config_file.filename,
            config_file.get_networks(),
            'network',
            environment,)

    if config_file.version == V1:
        processed_config = services

    config_file = config_file._replace(config=processed_config)
    validate_against_config_schema(config_file)

    if service_name and service_name not in services:
        raise ConfigurationError(
            "Cannot extend service '{}' in {}: Service not found".format(
                service_name, config_file.filename))

    return config_file


class ServiceExtendsResolver(object):
    def __init__(self, service_config, config_file, environment, already_seen=None):
        self.service_config = service_config
        self.working_dir = service_config.working_dir
        self.already_seen = already_seen or []
        self.config_file = config_file
        self.environment = environment

    @property
    def signature(self):
        return self.service_config.filename, self.service_config.name

    def detect_cycle(self):
        if self.signature in self.already_seen:
            raise CircularReference(self.already_seen + [self.signature])

    def run(self):
        self.detect_cycle()

        if 'extends' in self.service_config.config:
            service_dict = self.resolve_extends(*self.validate_and_construct_extends())
            return self.service_config._replace(config=service_dict)

        return self.service_config

    def validate_and_construct_extends(self):
        extends = self.service_config.config['extends']
        if not isinstance(extends, dict):
            extends = {'service': extends}

        config_path = self.get_extended_config_path(extends)
        service_name = extends['service']

        extends_file = ConfigFile.from_filename(config_path)
        validate_config_version([self.config_file, extends_file])
        extended_file = process_config_file(
            extends_file, self.environment, service_name=service_name
        )
        service_config = extended_file.get_service(service_name)

        return config_path, service_config, service_name

    def resolve_extends(self, extended_config_path, service_dict, service_name):
        resolver = ServiceExtendsResolver(
            ServiceConfig.with_abs_paths(
                os.path.dirname(extended_config_path),
                extended_config_path,
                service_name,
                service_dict),
            self.config_file,
            already_seen=self.already_seen + [self.signature],
            environment=self.environment
        )

        service_config = resolver.run()
        other_service_dict = process_service(service_config)
        validate_extended_service_dict(
            other_service_dict,
            extended_config_path,
            service_name)

        return merge_service_dicts(
            other_service_dict,
            self.service_config.config,
            self.config_file.version)

    def get_extended_config_path(self, extends_options):
        """The service we are extending either has a value for 'file' set,
        which we need to obtain a full path to, or we are extending from
        a service defined in our own file.
        """
        filename = self.service_config.filename
        validate_extends_file_path(
            self.service_config.name,
            extends_options,
            filename)
        if 'file' in extends_options:
            return expand_path(self.working_dir, extends_options['file'])
        return filename


def resolve_environment(service_dict, environment=None):
    """Unpack any environment variables from an env_file, if set.
    Interpolate environment values if set.
    """
    env = {}
    for env_file in service_dict.get('env_file', []):
        env.update(env_vars_from_file(env_file))

    env.update(parse_environment(service_dict.get('environment')))
    return dict(resolve_env_var(k, v, environment) for k, v in six.iteritems(env))


def resolve_build_args(build, environment):
    args = parse_build_arguments(build.get('args'))
    return dict(resolve_env_var(k, v, environment) for k, v in six.iteritems(args))


def validate_extended_service_dict(service_dict, filename, service):
    error_prefix = "Cannot extend service '%s' in %s:" % (service, filename)

    if 'links' in service_dict:
        raise ConfigurationError(
            "%s services with 'links' cannot be extended" % error_prefix)

    if 'volumes_from' in service_dict:
        raise ConfigurationError(
            "%s services with 'volumes_from' cannot be extended" % error_prefix)

    if 'net' in service_dict:
        if get_container_name_from_network_mode(service_dict['net']):
            raise ConfigurationError(
                "%s services with 'net: container' cannot be extended" % error_prefix)

    if 'network_mode' in service_dict:
        if get_service_name_from_network_mode(service_dict['network_mode']):
            raise ConfigurationError(
                "%s services with 'network_mode: service' cannot be extended" % error_prefix)

    if 'depends_on' in service_dict:
        raise ConfigurationError(
            "%s services with 'depends_on' cannot be extended" % error_prefix)


def validate_service(service_config, service_names, version):
    service_dict, service_name = service_config.config, service_config.name
    validate_service_constraints(service_dict, service_name, version)
    validate_paths(service_dict)

    validate_ulimits(service_config)
    validate_network_mode(service_config, service_names)
    validate_depends_on(service_config, service_names)
    validate_links(service_config, service_names)

    if not service_dict.get('image') and has_uppercase(service_name):
        raise ConfigurationError(
            "Service '{name}' contains uppercase characters which are not valid "
            "as part of an image name. Either use a lowercase service name or "
            "use the `image` field to set a custom name for the service image."
            .format(name=service_name))


def process_service(service_config):
    working_dir = service_config.working_dir
    service_dict = dict(service_config.config)

    if 'env_file' in service_dict:
        service_dict['env_file'] = [
            expand_path(working_dir, path)
            for path in to_list(service_dict['env_file'])
        ]

    if 'build' in service_dict:
        if isinstance(service_dict['build'], six.string_types):
            service_dict['build'] = resolve_build_path(working_dir, service_dict['build'])
        elif isinstance(service_dict['build'], dict) and 'context' in service_dict['build']:
            path = service_dict['build']['context']
            service_dict['build']['context'] = resolve_build_path(working_dir, path)

    if 'volumes' in service_dict and service_dict.get('volume_driver') is None:
        service_dict['volumes'] = resolve_volume_paths(working_dir, service_dict)

    if 'labels' in service_dict:
        service_dict['labels'] = parse_labels(service_dict['labels'])

    if 'extra_hosts' in service_dict:
        service_dict['extra_hosts'] = parse_extra_hosts(service_dict['extra_hosts'])

    for field in ['dns', 'dns_search', 'tmpfs']:
        if field in service_dict:
            service_dict[field] = to_list(service_dict[field])

    return service_dict


def finalize_service(service_config, service_names, version, environment):
    service_dict = dict(service_config.config)

    if 'environment' in service_dict or 'env_file' in service_dict:
        service_dict['environment'] = resolve_environment(service_dict, environment)
        service_dict.pop('env_file', None)

    if 'volumes_from' in service_dict:
        service_dict['volumes_from'] = [
            VolumeFromSpec.parse(vf, service_names, version)
            for vf in service_dict['volumes_from']
        ]

    if 'volumes' in service_dict:
        service_dict['volumes'] = [
            VolumeSpec.parse(v) for v in service_dict['volumes']]

    if 'net' in service_dict:
        network_mode = service_dict.pop('net')
        container_name = get_container_name_from_network_mode(network_mode)
        if container_name and container_name in service_names:
            service_dict['network_mode'] = 'service:{}'.format(container_name)
        else:
            service_dict['network_mode'] = network_mode

    if 'networks' in service_dict:
        service_dict['networks'] = parse_networks(service_dict['networks'])

    if 'restart' in service_dict:
        service_dict['restart'] = parse_restart_spec(service_dict['restart'])

    normalize_build(service_dict, service_config.working_dir, environment)

    service_dict['name'] = service_config.name
    return normalize_v1_service_format(service_dict)


def normalize_v1_service_format(service_dict):
    if 'log_driver' in service_dict or 'log_opt' in service_dict:
        if 'logging' not in service_dict:
            service_dict['logging'] = {}
        if 'log_driver' in service_dict:
            service_dict['logging']['driver'] = service_dict['log_driver']
            del service_dict['log_driver']
        if 'log_opt' in service_dict:
            service_dict['logging']['options'] = service_dict['log_opt']
            del service_dict['log_opt']

    if 'dockerfile' in service_dict:
        service_dict['build'] = service_dict.get('build', {})
        service_dict['build'].update({
            'dockerfile': service_dict.pop('dockerfile')
        })

    return service_dict


def merge_service_dicts_from_files(base, override, version):
    """When merging services from multiple files we need to merge the `extends`
    field. This is not handled by `merge_service_dicts()` which is used to
    perform the `extends`.
    """
    new_service = merge_service_dicts(base, override, version)
    if 'extends' in override:
        new_service['extends'] = override['extends']
    elif 'extends' in base:
        new_service['extends'] = base['extends']
    return new_service


class MergeDict(dict):
    """A dict-like object responsible for merging two dicts into one."""

    def __init__(self, base, override):
        self.base = base
        self.override = override

    def needs_merge(self, field):
        return field in self.base or field in self.override

    def merge_field(self, field, merge_func, default=None):
        if not self.needs_merge(field):
            return

        self[field] = merge_func(
            self.base.get(field, default),
            self.override.get(field, default))

    def merge_mapping(self, field, parse_func):
        if not self.needs_merge(field):
            return

        self[field] = parse_func(self.base.get(field))
        self[field].update(parse_func(self.override.get(field)))

    def merge_sequence(self, field, parse_func):
        def parse_sequence_func(seq):
            return to_mapping((parse_func(item) for item in seq), 'merge_field')

        if not self.needs_merge(field):
            return

        merged = parse_sequence_func(self.base.get(field, []))
        merged.update(parse_sequence_func(self.override.get(field, [])))
        self[field] = [item.repr() for item in sorted(merged.values())]

    def merge_scalar(self, field):
        if self.needs_merge(field):
            self[field] = self.override.get(field, self.base.get(field))


def merge_service_dicts(base, override, version):
    md = MergeDict(base, override)

    md.merge_mapping('environment', parse_environment)
    md.merge_mapping('labels', parse_labels)
    md.merge_mapping('ulimits', parse_ulimits)
    md.merge_mapping('networks', parse_networks)
    md.merge_sequence('links', ServiceLink.parse)

    for field in ['volumes', 'devices']:
        md.merge_field(field, merge_path_mappings)

    for field in [
        'ports', 'cap_add', 'cap_drop', 'expose', 'external_links',
        'security_opt', 'volumes_from', 'depends_on',
    ]:
        md.merge_field(field, merge_unique_items_lists, default=[])

    for field in ['dns', 'dns_search', 'env_file', 'tmpfs']:
        md.merge_field(field, merge_list_or_string)

    for field in set(ALLOWED_KEYS) - set(md):
        md.merge_scalar(field)

    if version == V1:
        legacy_v1_merge_image_or_build(md, base, override)
    elif md.needs_merge('build'):
        md['build'] = merge_build(md, base, override)

    return dict(md)


def merge_unique_items_lists(base, override):
    return sorted(set().union(base, override))


def merge_build(output, base, override):
    def to_dict(service):
        build_config = service.get('build', {})
        if isinstance(build_config, six.string_types):
            return {'context': build_config}
        return build_config

    md = MergeDict(to_dict(base), to_dict(override))
    md.merge_scalar('context')
    md.merge_scalar('dockerfile')
    md.merge_mapping('args', parse_build_arguments)
    return dict(md)


def legacy_v1_merge_image_or_build(output, base, override):
    output.pop('image', None)
    output.pop('build', None)
    if 'image' in override:
        output['image'] = override['image']
    elif 'build' in override:
        output['build'] = override['build']
    elif 'image' in base:
        output['image'] = base['image']
    elif 'build' in base:
        output['build'] = base['build']


def merge_environment(base, override):
    env = parse_environment(base)
    env.update(parse_environment(override))
    return env


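Merging two `environment` sections is a plain dict update, so override values win key-by-key while keys unique to either side are kept. A minimal sketch with mapping inputs (the real helper also accepts the list form via `parse_environment`):

```python
def merge_environment(base, override):
    # Override wins per key; keys unique to either side are preserved.
    env = dict(base)
    env.update(override)
    return env


base = {'DEBUG': '0', 'TERM': 'xterm'}
override = {'DEBUG': '1', 'LANG': 'C'}
print(merge_environment(base, override))
# {'DEBUG': '1', 'TERM': 'xterm', 'LANG': 'C'}
```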
def split_label(label):
    if '=' in label:
        return label.split('=', 1)
    else:
        return label, ''


def parse_dict_or_list(split_func, type_name, arguments):
    if not arguments:
        return {}

    if isinstance(arguments, list):
        return dict(split_func(e) for e in arguments)

    if isinstance(arguments, dict):
        return dict(arguments)

    raise ConfigurationError(
        "%s \"%s\" must be a list or mapping," %
        (type_name, arguments)
    )


parse_build_arguments = functools.partial(parse_dict_or_list, split_env, 'build arguments')
parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')
parse_labels = functools.partial(parse_dict_or_list, split_label, 'labels')
parse_networks = functools.partial(parse_dict_or_list, lambda k: (k, None), 'networks')


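`parse_dict_or_list` is what lets keys like `environment` and `labels` be written either as a mapping or as a list of `KEY=VALUE` strings. A self-contained sketch, with a hand-rolled `split_env` standing in for the helper Compose imports from its utils module (an assumption for illustration):

```python
import functools


def split_env(env):
    # Stand-in for Compose's split_env: 'KEY=VAL' -> (KEY, VAL); a bare
    # 'KEY' -> (KEY, None) so the value can be resolved from the host later.
    if '=' in env:
        key, value = env.split('=', 1)
        return key, value
    return env, None


def parse_dict_or_list(split_func, type_name, arguments):
    if not arguments:
        return {}
    if isinstance(arguments, list):
        return dict(split_func(e) for e in arguments)
    if isinstance(arguments, dict):
        return dict(arguments)
    raise ValueError('%s "%s" must be a list or mapping' % (type_name, arguments))


parse_environment = functools.partial(parse_dict_or_list, split_env, 'environment')

print(parse_environment(['DEBUG=1', 'TERM']))   # {'DEBUG': '1', 'TERM': None}
print(parse_environment({'DEBUG': '1'}))        # {'DEBUG': '1'}
```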
def parse_ulimits(ulimits):
    if not ulimits:
        return {}

    if isinstance(ulimits, dict):
        return dict(ulimits)


def resolve_env_var(key, val, environment):
    if val is not None:
        return key, val
    elif environment and key in environment:
        return key, environment[key]
    else:
        return key, None


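`resolve_env_var` is the mechanism behind passing a bare variable name (as with `docker-compose run -e VAR`): an explicit value in the config wins, a bare key falls back to the invoking environment, and otherwise the variable resolves to `None`. A standalone copy to try:

```python
def resolve_env_var(key, val, environment):
    # Explicit value wins; a bare key (val is None) falls back to the
    # host environment; otherwise the variable resolves to None.
    if val is not None:
        return key, val
    elif environment and key in environment:
        return key, environment[key]
    else:
        return key, None


host_env = {'TERM': 'xterm-256color'}
print(resolve_env_var('DEBUG', '1', host_env))     # ('DEBUG', '1')
print(resolve_env_var('TERM', None, host_env))     # ('TERM', 'xterm-256color')
print(resolve_env_var('MISSING', None, host_env))  # ('MISSING', None)
```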
def resolve_volume_paths(working_dir, service_dict):
    return [
        resolve_volume_path(working_dir, volume)
        for volume in service_dict['volumes']
    ]


def resolve_volume_path(working_dir, volume):
    container_path, host_path = split_path_mapping(volume)

    if host_path is not None:
        if host_path.startswith('.'):
            host_path = expand_path(working_dir, host_path)
        host_path = os.path.expanduser(host_path)
        return u"{}:{}".format(host_path, container_path)
    else:
        return container_path


def normalize_build(service_dict, working_dir, environment):

    if 'build' in service_dict:
        build = {}
        # Shortcut where specifying a string is treated as the build context
        if isinstance(service_dict['build'], six.string_types):
            build['context'] = service_dict.pop('build')
        else:
            build.update(service_dict['build'])
            if 'args' in build:
                build['args'] = build_string_dict(
                    resolve_build_args(build, environment)
                )

        service_dict['build'] = build


def resolve_build_path(working_dir, build_path):
    if is_url(build_path):
        return build_path
    return expand_path(working_dir, build_path)


def is_url(build_path):
    return build_path.startswith(DOCKER_VALID_URL_PREFIXES)


def validate_paths(service_dict):
    if 'build' in service_dict:
        build = service_dict.get('build', {})

        if isinstance(build, six.string_types):
            build_path = build
        elif isinstance(build, dict) and 'context' in build:
            build_path = build['context']
        else:
            # We have a build section but no context, so nothing to validate
            return

        if (
            not is_url(build_path) and
            (not os.path.exists(build_path) or not os.access(build_path, os.R_OK))
        ):
            raise ConfigurationError(
                "build path %s either does not exist, is not accessible, "
                "or is not a valid URL." % build_path)


def merge_path_mappings(base, override):
    d = dict_from_path_mappings(base)
    d.update(dict_from_path_mappings(override))
    return path_mappings_from_dict(d)


def dict_from_path_mappings(path_mappings):
    if path_mappings:
        return dict(split_path_mapping(v) for v in path_mappings)
    else:
        return {}


def path_mappings_from_dict(d):
    return [join_path_mapping(v) for v in sorted(d.items())]


def split_path_mapping(volume_path):
    """
    Ascertain if the volume_path contains a host path as well as a container
    path. Using splitdrive so windows absolute paths won't cause issues with
    splitting on ':'.
    """
    # splitdrive is very naive, so handle special cases where we can be sure
    # the first character is not a drive.
    if (volume_path.startswith('.') or volume_path.startswith('~') or
            volume_path.startswith('/')):
        drive, volume_config = '', volume_path
    else:
        drive, volume_config = ntpath.splitdrive(volume_path)

    if ':' in volume_config:
        (host, container) = volume_config.split(':', 1)
        return (container, drive + host)
    else:
        return (volume_path, None)


def join_path_mapping(pair):
    (container, host) = pair
    if host is None:
        return container
    else:
        return ":".join((host, container))


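`split_path_mapping` relies on `ntpath.splitdrive` so that a Windows drive letter (`C:`) is not mistaken for the `host:container` separator, while paths starting with `.`, `~`, or `/` are special-cased because `splitdrive` is naive about them. A self-contained copy to try against the three shapes of volume spec:

```python
import ntpath


def split_path_mapping(volume_path):
    # Special-case prefixes that cannot be a drive letter, then let
    # ntpath.splitdrive peel off 'C:' so the later ':' split is safe.
    if (volume_path.startswith('.') or volume_path.startswith('~') or
            volume_path.startswith('/')):
        drive, volume_config = '', volume_path
    else:
        drive, volume_config = ntpath.splitdrive(volume_path)

    if ':' in volume_config:
        (host, container) = volume_config.split(':', 1)
        return (container, drive + host)
    return (volume_path, None)


print(split_path_mapping('/var/data:/data'))   # ('/data', '/var/data')
print(split_path_mapping('C:\\data:/data'))    # ('/data', 'C:\\data')
print(split_path_mapping('mydatavolume'))      # ('mydatavolume', None)
```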
def expand_path(working_dir, path):
    return os.path.abspath(os.path.join(working_dir, os.path.expanduser(path)))


def merge_list_or_string(base, override):
    return to_list(base) + to_list(override)


def to_list(value):
    if value is None:
        return []
    elif isinstance(value, six.string_types):
        return [value]
    else:
        return value


def to_mapping(sequence, key_field):
    return {getattr(item, key_field): item for item in sequence}


def has_uppercase(name):
    return any(char in string.ascii_uppercase for char in name)


def load_yaml(filename):
    try:
        with open(filename, 'r') as fh:
            return yaml.safe_load(fh)
    except (IOError, yaml.YAMLError) as e:
        error_name = getattr(e, '__module__', '') + '.' + e.__class__.__name__
        raise ConfigurationError(u"{}: {}".format(error_name, e))
@ -1,187 +0,0 @@
{
    "$schema": "http://json-schema.org/draft-04/schema#",
    "id": "config_schema_v1.json",

    "type": "object",

    "patternProperties": {
        "^[a-zA-Z0-9._-]+$": {
            "$ref": "#/definitions/service"
        }
    },

    "additionalProperties": false,

    "definitions": {
        "service": {
            "id": "#/definitions/service",
            "type": "object",

            "properties": {
                "build": {"type": "string"},
                "cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "cgroup_parent": {"type": "string"},
                "command": {
                    "oneOf": [
                        {"type": "string"},
                        {"type": "array", "items": {"type": "string"}}
                    ]
                },
                "container_name": {"type": "string"},
                "cpu_shares": {"type": ["number", "string"]},
                "cpu_quota": {"type": ["number", "string"]},
                "cpuset": {"type": "string"},
                "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
                "dns": {"$ref": "#/definitions/string_or_list"},
                "dns_search": {"$ref": "#/definitions/string_or_list"},
                "dockerfile": {"type": "string"},
                "domainname": {"type": "string"},
                "entrypoint": {
                    "oneOf": [
                        {"type": "string"},
                        {"type": "array", "items": {"type": "string"}}
                    ]
                },
                "env_file": {"$ref": "#/definitions/string_or_list"},
                "environment": {"$ref": "#/definitions/list_or_dict"},

                "expose": {
                    "type": "array",
                    "items": {
                        "type": ["string", "number"],
                        "format": "expose"
                    },
                    "uniqueItems": true
                },

                "extends": {
                    "oneOf": [
                        {
                            "type": "string"
                        },
                        {
                            "type": "object",

                            "properties": {
                                "service": {"type": "string"},
                                "file": {"type": "string"}
                            },
"required": ["service"],
|
|
||||||
"additionalProperties": false
|
|
||||||
}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
|
|
||||||
"extra_hosts": {"$ref": "#/definitions/list_or_dict"},
|
|
||||||
"external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
|
|
||||||
"hostname": {"type": "string"},
|
|
||||||
"image": {"type": "string"},
|
|
||||||
"ipc": {"type": "string"},
|
|
||||||
"labels": {"$ref": "#/definitions/list_or_dict"},
|
|
||||||
"links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
|
|
||||||
"log_driver": {"type": "string"},
|
|
||||||
"log_opt": {"type": "object"},
|
|
||||||
"mac_address": {"type": "string"},
|
|
||||||
"mem_limit": {"type": ["number", "string"]},
|
|
||||||
"memswap_limit": {"type": ["number", "string"]},
|
|
||||||
"net": {"type": "string"},
|
|
||||||
"pid": {"type": ["string", "null"]},
|
|
||||||
|
|
||||||
"ports": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {
|
|
||||||
"type": ["string", "number"],
|
|
||||||
"format": "ports"
|
|
||||||
},
|
|
||||||
"uniqueItems": true
|
|
||||||
},
|
|
||||||
|
|
||||||
"privileged": {"type": "boolean"},
|
|
||||||
"read_only": {"type": "boolean"},
|
|
||||||
"restart": {"type": "string"},
|
|
||||||
"security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
|
|
||||||
"shm_size": {"type": ["number", "string"]},
|
|
||||||
"stdin_open": {"type": "boolean"},
|
|
||||||
"stop_signal": {"type": "string"},
|
|
||||||
"tty": {"type": "boolean"},
|
|
||||||
"ulimits": {
|
|
||||||
"type": "object",
|
|
||||||
"patternProperties": {
|
|
||||||
"^[a-z]+$": {
|
|
||||||
"oneOf": [
|
|
||||||
{"type": "integer"},
|
|
||||||
{
|
|
||||||
"type":"object",
|
|
||||||
"properties": {
|
|
||||||
"hard": {"type": "integer"},
|
|
||||||
"soft": {"type": "integer"}
|
|
||||||
},
|
|
||||||
"required": ["soft", "hard"],
|
|
||||||
"additionalProperties": false
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"user": {"type": "string"},
|
|
||||||
"volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
|
|
||||||
"volume_driver": {"type": "string"},
|
|
||||||
"volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
|
|
||||||
"working_dir": {"type": "string"}
|
|
||||||
},
|
|
||||||
|
|
||||||
"dependencies": {
|
|
||||||
"memswap_limit": ["mem_limit"]
|
|
||||||
},
|
|
||||||
"additionalProperties": false
|
|
||||||
},
|
|
||||||
|
|
||||||
"string_or_list": {
|
|
||||||
"oneOf": [
|
|
||||||
{"type": "string"},
|
|
||||||
{"$ref": "#/definitions/list_of_strings"}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
|
|
||||||
"list_of_strings": {
|
|
||||||
"type": "array",
|
|
||||||
"items": {"type": "string"},
|
|
||||||
"uniqueItems": true
|
|
||||||
},
|
|
||||||
|
|
||||||
"list_or_dict": {
|
|
||||||
"oneOf": [
|
|
||||||
{
|
|
||||||
"type": "object",
|
|
||||||
"patternProperties": {
|
|
||||||
".+": {
|
|
||||||
"type": ["string", "number", "null"]
|
|
||||||
}
|
|
||||||
},
|
|
||||||
"additionalProperties": false
|
|
||||||
},
|
|
||||||
{"type": "array", "items": {"type": "string"}, "uniqueItems": true}
|
|
||||||
]
|
|
||||||
},
|
|
||||||
|
|
||||||
"constraints": {
|
|
||||||
"service": {
|
|
||||||
"id": "#/definitions/constraints/service",
|
|
||||||
"anyOf": [
|
|
||||||
{
|
|
||||||
"required": ["build"],
|
|
||||||
"not": {"required": ["image"]}
|
|
||||||
},
|
|
||||||
{
|
|
||||||
"required": ["image"],
|
|
||||||
"not": {"anyOf": [
|
|
||||||
{"required": ["build"]},
|
|
||||||
{"required": ["dockerfile"]}
|
|
||||||
]}
|
|
||||||
}
|
|
||||||
]
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
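Both schema versions key service (and, in v2, network and volume) names by the same `^[a-zA-Z0-9._-]+$` pattern. A quick standalone check of what that pattern accepts:

```python
import re

# The name pattern used by patternProperties in both schema versions.
SERVICE_NAME_PATTERN = re.compile(r'^[a-zA-Z0-9._-]+$')

print(bool(SERVICE_NAME_PATTERN.match('web-db.1')))  # → True
print(bool(SERVICE_NAME_PATTERN.match('web db')))    # → False (space not allowed)
```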
@ -1,318 +0,0 @@
{
  "$schema": "http://json-schema.org/draft-04/schema#",
  "id": "config_schema_v2.0.json",
  "type": "object",

  "properties": {
    "version": {
      "type": "string"
    },

    "services": {
      "id": "#/properties/services",
      "type": "object",
      "patternProperties": {
        "^[a-zA-Z0-9._-]+$": {
          "$ref": "#/definitions/service"
        }
      },
      "additionalProperties": false
    },

    "networks": {
      "id": "#/properties/networks",
      "type": "object",
      "patternProperties": {
        "^[a-zA-Z0-9._-]+$": {
          "$ref": "#/definitions/network"
        }
      }
    },

    "volumes": {
      "id": "#/properties/volumes",
      "type": "object",
      "patternProperties": {
        "^[a-zA-Z0-9._-]+$": {
          "$ref": "#/definitions/volume"
        }
      },
      "additionalProperties": false
    }
  },

  "additionalProperties": false,

  "definitions": {

    "service": {
      "id": "#/definitions/service",
      "type": "object",

      "properties": {
        "build": {
          "oneOf": [
            {"type": "string"},
            {
              "type": "object",
              "properties": {
                "context": {"type": "string"},
                "dockerfile": {"type": "string"},
                "args": {"$ref": "#/definitions/list_or_dict"}
              },
              "additionalProperties": false
            }
          ]
        },
        "cap_add": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "cap_drop": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "cgroup_parent": {"type": "string"},
        "command": {
          "oneOf": [
            {"type": "string"},
            {"type": "array", "items": {"type": "string"}}
          ]
        },
        "container_name": {"type": "string"},
        "cpu_shares": {"type": ["number", "string"]},
        "cpu_quota": {"type": ["number", "string"]},
        "cpuset": {"type": "string"},
        "depends_on": {"$ref": "#/definitions/list_of_strings"},
        "devices": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "dns": {"$ref": "#/definitions/string_or_list"},
        "dns_search": {"$ref": "#/definitions/string_or_list"},
        "domainname": {"type": "string"},
        "entrypoint": {
          "oneOf": [
            {"type": "string"},
            {"type": "array", "items": {"type": "string"}}
          ]
        },
        "env_file": {"$ref": "#/definitions/string_or_list"},
        "environment": {"$ref": "#/definitions/list_or_dict"},

        "expose": {
          "type": "array",
          "items": {
            "type": ["string", "number"],
            "format": "expose"
          },
          "uniqueItems": true
        },

        "extends": {
          "oneOf": [
            {
              "type": "string"
            },
            {
              "type": "object",

              "properties": {
                "service": {"type": "string"},
                "file": {"type": "string"}
              },
              "required": ["service"],
              "additionalProperties": false
            }
          ]
        },

        "external_links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "extra_hosts": {"$ref": "#/definitions/list_or_dict"},
        "hostname": {"type": "string"},
        "image": {"type": "string"},
        "ipc": {"type": "string"},
        "labels": {"$ref": "#/definitions/list_or_dict"},
        "links": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},

        "logging": {
          "type": "object",

          "properties": {
            "driver": {"type": "string"},
            "options": {"type": "object"}
          },
          "additionalProperties": false
        },

        "mac_address": {"type": "string"},
        "mem_limit": {"type": ["number", "string"]},
        "memswap_limit": {"type": ["number", "string"]},
        "network_mode": {"type": "string"},

        "networks": {
          "oneOf": [
            {"$ref": "#/definitions/list_of_strings"},
            {
              "type": "object",
              "patternProperties": {
                "^[a-zA-Z0-9._-]+$": {
                  "oneOf": [
                    {
                      "type": "object",
                      "properties": {
                        "aliases": {"$ref": "#/definitions/list_of_strings"},
                        "ipv4_address": {"type": "string"},
                        "ipv6_address": {"type": "string"}
                      },
                      "additionalProperties": false
                    },
                    {"type": "null"}
                  ]
                }
              },
              "additionalProperties": false
            }
          ]
        },
        "pid": {"type": ["string", "null"]},

        "ports": {
          "type": "array",
          "items": {
            "type": ["string", "number"],
            "format": "ports"
          },
          "uniqueItems": true
        },

        "privileged": {"type": "boolean"},
        "read_only": {"type": "boolean"},
        "restart": {"type": "string"},
        "security_opt": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "shm_size": {"type": ["number", "string"]},
        "stdin_open": {"type": "boolean"},
        "stop_signal": {"type": "string"},
        "tmpfs": {"$ref": "#/definitions/string_or_list"},
        "tty": {"type": "boolean"},
        "ulimits": {
          "type": "object",
          "patternProperties": {
            "^[a-z]+$": {
              "oneOf": [
                {"type": "integer"},
                {
                  "type": "object",
                  "properties": {
                    "hard": {"type": "integer"},
                    "soft": {"type": "integer"}
                  },
                  "required": ["soft", "hard"],
                  "additionalProperties": false
                }
              ]
            }
          }
        },
        "user": {"type": "string"},
        "volumes": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "volume_driver": {"type": "string"},
        "volumes_from": {"type": "array", "items": {"type": "string"}, "uniqueItems": true},
        "working_dir": {"type": "string"}
      },

      "dependencies": {
        "memswap_limit": ["mem_limit"]
      },
      "additionalProperties": false
    },

    "network": {
      "id": "#/definitions/network",
      "type": "object",
      "properties": {
        "driver": {"type": "string"},
        "driver_opts": {
          "type": "object",
          "patternProperties": {
            "^.+$": {"type": ["string", "number"]}
          }
        },
        "ipam": {
          "type": "object",
          "properties": {
            "driver": {"type": "string"},
            "config": {
              "type": "array"
            }
          },
          "additionalProperties": false
        },
        "external": {
          "type": ["boolean", "object"],
          "properties": {
            "name": {"type": "string"}
          },
          "additionalProperties": false
        }
      },
      "additionalProperties": false
    },

    "volume": {
      "id": "#/definitions/volume",
      "type": ["object", "null"],
      "properties": {
        "driver": {"type": "string"},
        "driver_opts": {
          "type": "object",
          "patternProperties": {
            "^.+$": {"type": ["string", "number"]}
          }
        },
        "external": {
          "type": ["boolean", "object"],
          "properties": {
            "name": {"type": "string"}
          },
          "additionalProperties": false
        }
      },
      "additionalProperties": false
    },

    "string_or_list": {
      "oneOf": [
        {"type": "string"},
        {"$ref": "#/definitions/list_of_strings"}
      ]
    },

    "list_of_strings": {
      "type": "array",
      "items": {"type": "string"},
      "uniqueItems": true
    },

    "list_or_dict": {
      "oneOf": [
        {
          "type": "object",
          "patternProperties": {
            ".+": {
              "type": ["string", "number", "null"]
            }
          },
          "additionalProperties": false
        },
        {"type": "array", "items": {"type": "string"}, "uniqueItems": true}
      ]
    },

    "constraints": {
      "service": {
        "id": "#/definitions/constraints/service",
        "anyOf": [
          {"required": ["build"]},
          {"required": ["image"]}
        ],
        "properties": {
          "build": {
            "required": ["context"]
          }
        }
      }
    }
  }
}
@ -1,107 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import codecs
import logging
import os

import six

from ..const import IS_WINDOWS_PLATFORM
from .errors import ConfigurationError

log = logging.getLogger(__name__)


def split_env(env):
    if isinstance(env, six.binary_type):
        env = env.decode('utf-8', 'replace')
    if '=' in env:
        return env.split('=', 1)
    else:
        return env, None


def env_vars_from_file(filename):
    """
    Read in a line delimited file of environment variables.
    """
    if not os.path.exists(filename):
        raise ConfigurationError("Couldn't find env file: %s" % filename)
    elif not os.path.isfile(filename):
        raise ConfigurationError("%s is not a file." % (filename))
    env = {}
    for line in codecs.open(filename, 'r', 'utf-8'):
        line = line.strip()
        if line and not line.startswith('#'):
            k, v = split_env(line)
            env[k] = v
    return env


class Environment(dict):
    def __init__(self, *args, **kwargs):
        super(Environment, self).__init__(*args, **kwargs)
        self.missing_keys = []

    @classmethod
    def from_env_file(cls, base_dir):
        def _initialize():
            result = cls()
            if base_dir is None:
                return result
            env_file_path = os.path.join(base_dir, '.env')
            try:
                return cls(env_vars_from_file(env_file_path))
            except ConfigurationError:
                pass
            return result
        instance = _initialize()
        instance.update(os.environ)
        return instance

    @classmethod
    def from_command_line(cls, parsed_env_opts):
        result = cls()
        for k, v in parsed_env_opts.items():
            # Values from the command line take priority, unless they're unset
            # in which case they take the value from the system's environment
            if v is None and k in os.environ:
                result[k] = os.environ[k]
            else:
                result[k] = v
        return result

    def __getitem__(self, key):
        try:
            return super(Environment, self).__getitem__(key)
        except KeyError:
            if IS_WINDOWS_PLATFORM:
                try:
                    return super(Environment, self).__getitem__(key.upper())
                except KeyError:
                    pass
            if key not in self.missing_keys:
                log.warn(
                    "The {} variable is not set. Defaulting to a blank string."
                    .format(key)
                )
                self.missing_keys.append(key)

            return ""

    def __contains__(self, key):
        result = super(Environment, self).__contains__(key)
        if IS_WINDOWS_PLATFORM:
            return (
                result or super(Environment, self).__contains__(key.upper())
            )
        return result

    def get(self, key, *args, **kwargs):
        if IS_WINDOWS_PLATFORM:
            return super(Environment, self).get(
                key,
                super(Environment, self).get(key.upper(), *args, **kwargs)
            )
        return super(Environment, self).get(key, *args, **kwargs)
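The `split_env` helper above underpins both `.env` file parsing and `docker-compose run -e`; a standalone sketch of its behavior (with `bytes` standing in for `six.binary_type` so it runs on Python 3):

```python
def split_env(env):
    # A line with '=' splits into [key, value]; a bare name yields
    # (name, None), which Compose later resolves against os.environ.
    if isinstance(env, bytes):  # stands in for six.binary_type on Python 3
        env = env.decode('utf-8', 'replace')
    if '=' in env:
        return env.split('=', 1)
    else:
        return env, None


print(split_env('POSTGRES_VERSION=9.3'))  # → ['POSTGRES_VERSION', '9.3']
print(split_env('TERM'))                  # → ('TERM', None)
```

Note that only the first `=` splits, so values containing `=` pass through intact.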
@ -1,46 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals


VERSION_EXPLANATION = (
    'You might be seeing this error because you\'re using the wrong Compose '
    'file version. Either specify a version of "2" (or "2.0") and place your '
    'service definitions under the `services` key, or omit the `version` key '
    'and place your service definitions at the root of the file to use '
    'version 1.\nFor more on the Compose file format versions, see '
    'https://docs.docker.com/compose/compose-file/')


class ConfigurationError(Exception):
    def __init__(self, msg):
        self.msg = msg

    def __str__(self):
        return self.msg


class DependencyError(ConfigurationError):
    pass


class CircularReference(ConfigurationError):
    def __init__(self, trail):
        self.trail = trail

    @property
    def msg(self):
        lines = [
            "{} in {}".format(service_name, filename)
            for (filename, service_name) in self.trail
        ]
        return "Circular reference:\n  {}".format("\n  extends ".join(lines))


class ComposeFileNotFound(ConfigurationError):
    def __init__(self, supported_filenames):
        super(ComposeFileNotFound, self).__init__("""
        Can't find a suitable configuration file in this directory or any
        parent. Are you in the right directory?

        Supported filenames: %s
        """ % ", ".join(supported_filenames))
@ -1,63 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging
from string import Template

import six

from .errors import ConfigurationError

log = logging.getLogger(__name__)


def interpolate_environment_variables(config, section, environment):

    def process_item(name, config_dict):
        return dict(
            (key, interpolate_value(name, key, val, section, environment))
            for key, val in (config_dict or {}).items()
        )

    return dict(
        (name, process_item(name, config_dict or {}))
        for name, config_dict in config.items()
    )


def interpolate_value(name, config_key, value, section, mapping):
    try:
        return recursive_interpolate(value, mapping)
    except InvalidInterpolation as e:
        raise ConfigurationError(
            'Invalid interpolation format for "{config_key}" option '
            'in {section} "{name}": "{string}"'.format(
                config_key=config_key,
                name=name,
                section=section,
                string=e.string))


def recursive_interpolate(obj, mapping):
    if isinstance(obj, six.string_types):
        return interpolate(obj, mapping)
    elif isinstance(obj, dict):
        return dict(
            (key, recursive_interpolate(val, mapping))
            for (key, val) in obj.items()
        )
    elif isinstance(obj, list):
        return [recursive_interpolate(val, mapping) for val in obj]
    else:
        return obj


def interpolate(string, mapping):
    try:
        return Template(string).substitute(mapping)
    except ValueError:
        raise InvalidInterpolation(string)


class InvalidInterpolation(Exception):
    def __init__(self, string):
        self.string = string
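Variable substitution above is plain `string.Template`, so it supports both `$VAR` and `${VAR}` forms. A trimmed standalone sketch of the success and failure paths (`InvalidInterpolation` restated here):

```python
from string import Template


class InvalidInterpolation(Exception):
    def __init__(self, string):
        self.string = string


def interpolate(string, mapping):
    # Template.substitute raises ValueError for a malformed placeholder
    # (e.g. a bare trailing '$'), which is wrapped for a friendlier error.
    try:
        return Template(string).substitute(mapping)
    except ValueError:
        raise InvalidInterpolation(string)


print(interpolate('image: repo/app:${TAG}', {'TAG': 'v1.8'}))  # → image: repo/app:v1.8
```

A missing variable raises `KeyError` from `Template.substitute` itself; the calling code handles that case separately.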
@ -1,60 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import six
import yaml

from compose.config import types
from compose.config.config import V1
from compose.config.config import V2_0


def serialize_config_type(dumper, data):
    representer = dumper.represent_str if six.PY3 else dumper.represent_unicode
    return representer(data.repr())


yaml.SafeDumper.add_representer(types.VolumeFromSpec, serialize_config_type)
yaml.SafeDumper.add_representer(types.VolumeSpec, serialize_config_type)


def denormalize_config(config):
    denormalized_services = [
        denormalize_service_dict(service_dict, config.version)
        for service_dict in config.services
    ]
    services = {
        service_dict.pop('name'): service_dict
        for service_dict in denormalized_services
    }
    networks = config.networks.copy()
    for net_name, net_conf in networks.items():
        if 'external_name' in net_conf:
            del net_conf['external_name']

    return {
        'version': V2_0,
        'services': services,
        'networks': networks,
        'volumes': config.volumes,
    }


def serialize_config(config):
    return yaml.safe_dump(
        denormalize_config(config),
        default_flow_style=False,
        indent=2,
        width=80)


def denormalize_service_dict(service_dict, version):
    service_dict = service_dict.copy()

    if 'restart' in service_dict:
        service_dict['restart'] = types.serialize_restart_spec(service_dict['restart'])

    if version == V1 and 'network_mode' not in service_dict:
        service_dict['network_mode'] = 'bridge'

    return service_dict
@ -1,72 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

from compose.config.errors import DependencyError


def get_service_name_from_network_mode(network_mode):
    return get_source_name_from_network_mode(network_mode, 'service')


def get_container_name_from_network_mode(network_mode):
    return get_source_name_from_network_mode(network_mode, 'container')


def get_source_name_from_network_mode(network_mode, source_type):
    if not network_mode:
        return

    if not network_mode.startswith(source_type + ':'):
        return

    _, net_name = network_mode.split(':', 1)
    return net_name


def get_service_names(links):
    return [link.split(':')[0] for link in links]


def get_service_names_from_volumes_from(volumes_from):
    return [volume_from.source for volume_from in volumes_from]


def get_service_dependents(service_dict, services):
    name = service_dict['name']
    return [
        service for service in services
        if (name in get_service_names(service.get('links', [])) or
            name in get_service_names_from_volumes_from(service.get('volumes_from', [])) or
            name == get_service_name_from_network_mode(service.get('network_mode')) or
            name in service.get('depends_on', []))
    ]


def sort_service_dicts(services):
    # Topological sort (Cormen/Tarjan algorithm).
    unmarked = services[:]
    temporary_marked = set()
    sorted_services = []

    def visit(n):
        if n['name'] in temporary_marked:
            if n['name'] in get_service_names(n.get('links', [])):
                raise DependencyError('A service can not link to itself: %s' % n['name'])
            if n['name'] in n.get('volumes_from', []):
                raise DependencyError('A service can not mount itself as volume: %s' % n['name'])
            if n['name'] in n.get('depends_on', []):
                raise DependencyError('A service can not depend on itself: %s' % n['name'])
            raise DependencyError('Circular dependency between %s' % ' and '.join(temporary_marked))

        if n in unmarked:
            temporary_marked.add(n['name'])
            for m in get_service_dependents(n, services):
                visit(m)
            temporary_marked.remove(n['name'])
            unmarked.remove(n)
            sorted_services.insert(0, n)

    while unmarked:
        visit(unmarked[-1])

    return sorted_services
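The `network_mode` helpers above reduce to a single prefix check: only `service:<name>` or `container:<name>` values yield a source name, while plain modes like `bridge` or `host` yield nothing. Restated standalone:

```python
def get_source_name_from_network_mode(network_mode, source_type):
    # Only '<source_type>:<name>' values carry a dependency on another
    # service or container; anything else (None, 'bridge', 'host') does not.
    if not network_mode:
        return None
    if not network_mode.startswith(source_type + ':'):
        return None
    _, net_name = network_mode.split(':', 1)
    return net_name


print(get_source_name_from_network_mode('service:web', 'service'))  # → web
print(get_source_name_from_network_mode('bridge', 'service'))       # → None
```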
@ -1,198 +0,0 @@
|
||||||
"""
|
|
||||||
Types for objects parsed from the configuration.
|
|
||||||
"""
|
|
||||||
from __future__ import absolute_import
|
|
||||||
from __future__ import unicode_literals
|
|
||||||
|
|
||||||
import os
|
|
||||||
from collections import namedtuple
|
|
||||||
|
|
||||||
import six
|
|
||||||
|
|
||||||
from compose.config.config import V1
|
|
||||||
from compose.config.errors import ConfigurationError
|
|
||||||
from compose.const import IS_WINDOWS_PLATFORM
|
|
||||||
|
|
||||||
|
|
||||||
class VolumeFromSpec(namedtuple('_VolumeFromSpec', 'source mode type')):
|
|
||||||
|
|
||||||
# TODO: drop service_names arg when v1 is removed
|
|
||||||
@classmethod
|
|
||||||
def parse(cls, volume_from_config, service_names, version):
|
|
||||||
func = cls.parse_v1 if version == V1 else cls.parse_v2
|
|
||||||
return func(service_names, volume_from_config)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def parse_v1(cls, service_names, volume_from_config):
|
|
||||||
parts = volume_from_config.split(':')
|
|
||||||
if len(parts) > 2:
|
|
||||||
raise ConfigurationError(
|
|
||||||
"volume_from {} has incorrect format, should be "
|
|
||||||
"service[:mode]".format(volume_from_config))
|
|
||||||
|
|
||||||
if len(parts) == 1:
|
|
||||||
source = parts[0]
|
|
||||||
mode = 'rw'
|
|
||||||
else:
|
|
||||||
source, mode = parts
|
|
||||||
|
|
||||||
type = 'service' if source in service_names else 'container'
|
|
||||||
return cls(source, mode, type)
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def parse_v2(cls, service_names, volume_from_config):
|
|
||||||
parts = volume_from_config.split(':')
|
|
||||||
if len(parts) > 3:
|
|
||||||
raise ConfigurationError(
|
|
||||||
"volume_from {} has incorrect format, should be one of "
|
|
||||||
"'<service name>[:<mode>]' or "
|
|
||||||
"'container:<container name>[:<mode>]'".format(volume_from_config))
|
|
||||||
|
|
||||||
if len(parts) == 1:
|
|
||||||
source = parts[0]
|
|
||||||
return cls(source, 'rw', 'service')
|
|
||||||
|
|
||||||
if len(parts) == 2:
|
|
||||||
if parts[0] == 'container':
|
|
||||||
type, source = parts
|
|
||||||
return cls(source, 'rw', type)
|
|
||||||
|
|
||||||
source, mode = parts
|
|
||||||
return cls(source, mode, 'service')
|
|
||||||
|
|
||||||
if len(parts) == 3:
|
|
||||||
type, source, mode = parts
|
|
||||||
if type not in ('service', 'container'):
|
|
||||||
raise ConfigurationError(
|
|
||||||
"Unknown volumes_from type '{}' in '{}'".format(
|
|
||||||
type,
|
|
||||||
volume_from_config))
|
|
||||||
|
|
||||||
return cls(source, mode, type)
|
|
||||||
|
|
||||||
def repr(self):
|
|
||||||
return '{v.type}:{v.source}:{v.mode}'.format(v=self)
|
|
||||||
|
|
||||||
|
|
||||||
def parse_restart_spec(restart_config):
|
|
||||||
if not restart_config:
|
|
||||||
return None
|
|
||||||
parts = restart_config.split(':')
|
|
||||||
if len(parts) > 2:
|
|
||||||
raise ConfigurationError(
|
|
||||||
"Restart %s has incorrect format, should be "
|
|
||||||
"mode[:max_retry]" % restart_config)
|
|
||||||
if len(parts) == 2:
|
|
||||||
name, max_retry_count = parts
|
|
||||||
else:
|
|
||||||
name, = parts
|
|
||||||
max_retry_count = 0
|
|
||||||
|
|
||||||
return {'Name': name, 'MaximumRetryCount': int(max_retry_count)}
|
|
||||||
|
|
||||||
|
|
||||||
def serialize_restart_spec(restart_spec):
|
|
||||||
parts = [restart_spec['Name']]
|
|
||||||
if restart_spec['MaximumRetryCount']:
|
|
||||||
parts.append(six.text_type(restart_spec['MaximumRetryCount']))
|
|
||||||
return ':'.join(parts)
|
|
||||||
|
|
||||||
|
|
||||||
def parse_extra_hosts(extra_hosts_config):
|
|
||||||
if not extra_hosts_config:
|
|
||||||
return {}
|
|
||||||
|
|
||||||
if isinstance(extra_hosts_config, dict):
|
|
||||||
return dict(extra_hosts_config)
|
|
||||||
|
|
||||||
if isinstance(extra_hosts_config, list):
|
|
||||||
extra_hosts_dict = {}
|
|
||||||
for extra_hosts_line in extra_hosts_config:
|
|
||||||
# TODO: validate string contains ':' ?
|
|
||||||
host, ip = extra_hosts_line.split(':', 1)
|
|
||||||
extra_hosts_dict[host.strip()] = ip.strip()
|
|
||||||
return extra_hosts_dict
|
|
||||||
|
|
||||||
|
|
||||||
def normalize_paths_for_engine(external_path, internal_path):
    """Windows paths, c:\my\path\shiny, need to be changed to be compatible with
    the Engine. Volume paths are expected to be linux style /c/my/path/shiny/
    """
    if not IS_WINDOWS_PLATFORM:
        return external_path, internal_path

    if external_path:
        drive, tail = os.path.splitdrive(external_path)

        if drive:
            external_path = '/' + drive.lower().rstrip(':') + tail

        external_path = external_path.replace('\\', '/')

    return external_path, internal_path.replace('\\', '/')

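The Windows-to-Engine rewrite above lowercases the drive letter, drops the colon, and flips backslashes. Since `os.path.splitdrive` only recognizes drive letters on Windows, this sketch uses `ntpath` so it runs anywhere (the helper name is illustrative):

```python
import ntpath


# Sketch of the path rewrite in normalize_paths_for_engine, minus the
# IS_WINDOWS_PLATFORM guard, so it is runnable on any platform.
def to_engine_path(path):
    drive, tail = ntpath.splitdrive(path)
    if drive:
        # 'C:' -> '/c', then reattach the rest of the path
        path = '/' + drive.lower().rstrip(':') + tail
    return path.replace('\\', '/')


print(to_engine_path('C:\\my\\path\\shiny'))  # /c/my/path/shiny
```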
class VolumeSpec(namedtuple('_VolumeSpec', 'external internal mode')):

    @classmethod
    def parse(cls, volume_config):
        """Parse a volume_config path and split it into external:internal[:mode]
        parts to be returned as a valid VolumeSpec.
        """
        if IS_WINDOWS_PLATFORM:
            # relative paths in windows expand to include the drive, eg C:\
            # so we join the first 2 parts back together to count as one
            drive, tail = os.path.splitdrive(volume_config)
            parts = tail.split(":")

            if drive:
                parts[0] = drive + parts[0]
        else:
            parts = volume_config.split(':')

        if len(parts) > 3:
            raise ConfigurationError(
                "Volume %s has incorrect format, should be "
                "external:internal[:mode]" % volume_config)

        if len(parts) == 1:
            external, internal = normalize_paths_for_engine(
                None,
                os.path.normpath(parts[0]))
        else:
            external, internal = normalize_paths_for_engine(
                os.path.normpath(parts[0]),
                os.path.normpath(parts[1]))

        mode = 'rw'
        if len(parts) == 3:
            mode = parts[2]

        return cls(external, internal, mode)

    def repr(self):
        external = self.external + ':' if self.external else ''
        return '{ext}{v.internal}:{v.mode}'.format(ext=external, v=self)

    @property
    def is_named_volume(self):
        return self.external and not self.external.startswith(('.', '/', '~'))

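`VolumeSpec.parse` splits on `:` into at most three parts, with a Windows quirk: the drive letter's own colon would otherwise add a bogus part, so drive and first segment are rejoined. A standalone sketch of just the splitting rules (omitting `normpath` and the engine-path rewrite; names are illustrative):

```python
import ntpath


# Sketch of VolumeSpec.parse's splitting; the `windows` flag stands in for
# IS_WINDOWS_PLATFORM so both branches are exercisable anywhere.
def parse_volume(volume_config, windows=False):
    if windows:
        drive, tail = ntpath.splitdrive(volume_config)
        parts = tail.split(':')
        if drive:
            parts[0] = drive + parts[0]  # rejoin 'C:' with the path segment
    else:
        parts = volume_config.split(':')
    if len(parts) > 3:
        raise ValueError('should be external:internal[:mode]')
    external = parts[0] if len(parts) > 1 else None
    internal = parts[1] if len(parts) > 1 else parts[0]
    mode = parts[2] if len(parts) == 3 else 'rw'
    return external, internal, mode


print(parse_volume('/host/data:/container/data:ro'))  # ('/host/data', '/container/data', 'ro')
print(parse_volume('mydata:/data'))                   # ('mydata', '/data', 'rw')
print(parse_volume('C:\\data:/data', windows=True))   # ('C:\\data', '/data', 'rw')
```

A single-part value like `/data` becomes an anonymous volume: no external source, mode `rw`.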
class ServiceLink(namedtuple('_ServiceLink', 'target alias')):

    @classmethod
    def parse(cls, link_spec):
        target, _, alias = link_spec.partition(':')
        if not alias:
            alias = target
        return cls(target, alias)

    def repr(self):
        if self.target == self.alias:
            return self.target
        return '{s.target}:{s.alias}'.format(s=self)

    @property
    def merge_field(self):
        return self.alias

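The link alias defaults to the target when no `:alias` suffix is given, which is what makes `links: [db]` and `links: [db:db]` equivalent. A one-function sketch of that rule:

```python
# Sketch of ServiceLink.parse using str.partition; illustrative only.
def parse_link(link_spec):
    target, _, alias = link_spec.partition(':')
    return target, (alias or target)


print(parse_link('db'))        # ('db', 'db')
print(parse_link('db:mysql'))  # ('db', 'mysql')
```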
@@ -1,421 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import json
import logging
import os
import re
import sys

import six
from docker.utils.ports import split_port
from jsonschema import Draft4Validator
from jsonschema import FormatChecker
from jsonschema import RefResolver
from jsonschema import ValidationError

from ..const import COMPOSEFILE_V1 as V1
from .errors import ConfigurationError
from .errors import VERSION_EXPLANATION
from .sort_services import get_service_name_from_network_mode


log = logging.getLogger(__name__)


DOCKER_CONFIG_HINTS = {
    'cpu_share': 'cpu_shares',
    'add_host': 'extra_hosts',
    'hosts': 'extra_hosts',
    'extra_host': 'extra_hosts',
    'device': 'devices',
    'link': 'links',
    'memory_swap': 'memswap_limit',
    'port': 'ports',
    'privilege': 'privileged',
    'priviliged': 'privileged',
    'privilige': 'privileged',
    'volume': 'volumes',
    'workdir': 'working_dir',
}


VALID_NAME_CHARS = '[a-zA-Z0-9\._\-]'
VALID_EXPOSE_FORMAT = r'^\d+(\-\d+)?(\/[a-zA-Z]+)?$'

@FormatChecker.cls_checks(format="ports", raises=ValidationError)
def format_ports(instance):
    try:
        split_port(instance)
    except ValueError as e:
        raise ValidationError(six.text_type(e))
    return True


@FormatChecker.cls_checks(format="expose", raises=ValidationError)
def format_expose(instance):
    if isinstance(instance, six.string_types):
        if not re.match(VALID_EXPOSE_FORMAT, instance):
            raise ValidationError(
                "should be of the format 'PORT[/PROTOCOL]'")

    return True

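The `expose` format check above boils down to one regex: digits, an optional `-range`, and an optional `/protocol` suffix. A quick standalone check of which values pass:

```python
import re

# Same pattern as VALID_EXPOSE_FORMAT above.
VALID_EXPOSE_FORMAT = r'^\d+(\-\d+)?(\/[a-zA-Z]+)?$'

for value in ('8000', '8000-8010', '53/udp', 'tcp/53'):
    print(value, bool(re.match(VALID_EXPOSE_FORMAT, value)))
# '8000', '8000-8010' and '53/udp' match; 'tcp/53' does not.
```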
def match_named_volumes(service_dict, project_volumes):
    service_volumes = service_dict.get('volumes', [])
    for volume_spec in service_volumes:
        if volume_spec.is_named_volume and volume_spec.external not in project_volumes:
            raise ConfigurationError(
                'Named volume "{0}" is used in service "{1}" but no'
                ' declaration was found in the volumes section.'.format(
                    volume_spec.repr(), service_dict.get('name')
                )
            )


def python_type_to_yaml_type(type_):
    type_name = type(type_).__name__
    return {
        'dict': 'mapping',
        'list': 'array',
        'int': 'number',
        'float': 'number',
        'bool': 'boolean',
        'unicode': 'string',
        'str': 'string',
        'bytes': 'string',
    }.get(type_name, type_name)


def validate_config_section(filename, config, section):
    """Validate the structure of a configuration section. This must be done
    before interpolation so it's separate from schema validation.
    """
    if not isinstance(config, dict):
        raise ConfigurationError(
            "In file '{filename}', {section} must be a mapping, not "
            "{type}.".format(
                filename=filename,
                section=section,
                type=anglicize_json_type(python_type_to_yaml_type(config))))

    for key, value in config.items():
        if not isinstance(key, six.string_types):
            raise ConfigurationError(
                "In file '{filename}', the {section} name {name} must be a "
                "quoted string, i.e. '{name}'.".format(
                    filename=filename,
                    section=section,
                    name=key))

        if not isinstance(value, (dict, type(None))):
            raise ConfigurationError(
                "In file '{filename}', {section} '{name}' must be a mapping not "
                "{type}.".format(
                    filename=filename,
                    section=section,
                    name=key,
                    type=anglicize_json_type(python_type_to_yaml_type(value))))


def validate_top_level_object(config_file):
    if not isinstance(config_file.config, dict):
        raise ConfigurationError(
            "Top level object in '{}' needs to be an object not '{}'.".format(
                config_file.filename,
                type(config_file.config)))


def validate_ulimits(service_config):
    ulimit_config = service_config.config.get('ulimits', {})
    for limit_name, soft_hard_values in six.iteritems(ulimit_config):
        if isinstance(soft_hard_values, dict):
            if not soft_hard_values['soft'] <= soft_hard_values['hard']:
                raise ConfigurationError(
                    "Service '{s.name}' has invalid ulimit '{ulimit}'. "
                    "'soft' value can not be greater than 'hard' value ".format(
                        s=service_config,
                        ulimit=ulimit_config))


def validate_extends_file_path(service_name, extends_options, filename):
    """
    The service to be extended must either be defined in the config key 'file',
    or within 'filename'.
    """
    error_prefix = "Invalid 'extends' configuration for %s:" % service_name

    if 'file' not in extends_options and filename is None:
        raise ConfigurationError(
            "%s you need to specify a 'file', e.g. 'file: something.yml'" % error_prefix
        )


def validate_network_mode(service_config, service_names):
    network_mode = service_config.config.get('network_mode')
    if not network_mode:
        return

    if 'networks' in service_config.config:
        raise ConfigurationError("'network_mode' and 'networks' cannot be combined")

    dependency = get_service_name_from_network_mode(network_mode)
    if not dependency:
        return

    if dependency not in service_names:
        raise ConfigurationError(
            "Service '{s.name}' uses the network stack of service '{dep}' which "
            "is undefined.".format(s=service_config, dep=dependency))


def validate_links(service_config, service_names):
    for link in service_config.config.get('links', []):
        if link.split(':')[0] not in service_names:
            raise ConfigurationError(
                "Service '{s.name}' has a link to service '{link}' which is "
                "undefined.".format(s=service_config, link=link))


def validate_depends_on(service_config, service_names):
    for dependency in service_config.config.get('depends_on', []):
        if dependency not in service_names:
            raise ConfigurationError(
                "Service '{s.name}' depends on service '{dep}' which is "
                "undefined.".format(s=service_config, dep=dependency))


def get_unsupported_config_msg(path, error_key):
    msg = "Unsupported config option for {}: '{}'".format(path_string(path), error_key)
    if error_key in DOCKER_CONFIG_HINTS:
        msg += " (did you mean '{}'?)".format(DOCKER_CONFIG_HINTS[error_key])
    return msg


def anglicize_json_type(json_type):
    if json_type.startswith(('a', 'e', 'i', 'o', 'u')):
        return 'an ' + json_type
    return 'a ' + json_type

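`python_type_to_yaml_type` and `anglicize_json_type` cooperate to build readable error fragments: the Python type name is mapped to a YAML-ish term, then prefixed with the right article. Reimplemented standalone for illustration (the `value`-taking variant differs slightly from the source, which receives the object itself as `type_`):

```python
# Standalone sketch of the two message helpers above.
def python_type_to_yaml_type(value):
    name = type(value).__name__
    return {
        'dict': 'mapping', 'list': 'array', 'int': 'number',
        'float': 'number', 'bool': 'boolean', 'str': 'string',
        'bytes': 'string',
    }.get(name, name)


def anglicize_json_type(json_type):
    if json_type.startswith(('a', 'e', 'i', 'o', 'u')):
        return 'an ' + json_type
    return 'a ' + json_type


print(anglicize_json_type(python_type_to_yaml_type([1, 2])))  # an array
print(anglicize_json_type(python_type_to_yaml_type('x')))     # a string
print(anglicize_json_type(python_type_to_yaml_type(1.5)))     # a number
```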
def is_service_dict_schema(schema_id):
    return schema_id in ('config_schema_v1.json', '#/properties/services')


def handle_error_for_schema_with_id(error, path):
    schema_id = error.schema['id']

    if is_service_dict_schema(schema_id) and error.validator == 'additionalProperties':
        return "Invalid service name '{}' - only {} characters are allowed".format(
            # The service_name is the key to the json object
            list(error.instance)[0],
            VALID_NAME_CHARS)

    if error.validator == 'additionalProperties':
        if schema_id == '#/definitions/service':
            invalid_config_key = parse_key_from_error_msg(error)
            return get_unsupported_config_msg(path, invalid_config_key)

        if not error.path:
            return '{}\n\n{}'.format(error.message, VERSION_EXPLANATION)


def handle_generic_error(error, path):
    msg_format = None
    error_msg = error.message

    if error.validator == 'oneOf':
        msg_format = "{path} {msg}"
        config_key, error_msg = _parse_oneof_validator(error)
        if config_key:
            path.append(config_key)

    elif error.validator == 'type':
        msg_format = "{path} contains an invalid type, it should be {msg}"
        error_msg = _parse_valid_types_from_validator(error.validator_value)

    elif error.validator == 'required':
        error_msg = ", ".join(error.validator_value)
        msg_format = "{path} is invalid, {msg} is required."

    elif error.validator == 'dependencies':
        config_key = list(error.validator_value.keys())[0]
        required_keys = ",".join(error.validator_value[config_key])

        msg_format = "{path} is invalid: {msg}"
        path.append(config_key)
        error_msg = "when defining '{}' you must set '{}' as well".format(
            config_key,
            required_keys)

    elif error.cause:
        error_msg = six.text_type(error.cause)
        msg_format = "{path} is invalid: {msg}"

    elif error.path:
        msg_format = "{path} value {msg}"

    if msg_format:
        return msg_format.format(path=path_string(path), msg=error_msg)

    return error.message


def parse_key_from_error_msg(error):
    return error.message.split("'")[1]


def path_string(path):
    return ".".join(c for c in path if isinstance(c, six.string_types))

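`path_string` drops the numeric indices jsonschema puts in error paths, keeping only the mapping keys, so `['services', 'web', 'ports', 0]` renders as `services.web.ports`. A quick standalone check (using `str` in place of `six.string_types`):

```python
# Sketch of path_string; jsonschema error paths mix keys and list indices.
def path_string(path):
    return '.'.join(c for c in path if isinstance(c, str))


print(path_string(['services', 'web', 'ports', 0]))  # services.web.ports
```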
def _parse_valid_types_from_validator(validator):
    """A validator value can be either an array of valid types or a string of
    a valid type. Parse the valid types and prefix with the correct article.
    """
    if not isinstance(validator, list):
        return anglicize_json_type(validator)

    if len(validator) == 1:
        return anglicize_json_type(validator[0])

    return "{}, or {}".format(
        ", ".join([anglicize_json_type(validator[0])] + validator[1:-1]),
        anglicize_json_type(validator[-1]))


def _parse_oneof_validator(error):
    """oneOf has multiple schemas, so we need to reason about which schema, sub
    schema or constraint the validation is failing on.
    Inspecting the context value of a ValidationError gives us information about
    which sub schema failed and which kind of error it is.
    """
    types = []
    for context in error.context:

        if context.validator == 'oneOf':
            _, error_msg = _parse_oneof_validator(context)
            return path_string(context.path), error_msg

        if context.validator == 'required':
            return (None, context.message)

        if context.validator == 'additionalProperties':
            invalid_config_key = parse_key_from_error_msg(context)
            return (None, "contains unsupported option: '{}'".format(invalid_config_key))

        if context.path:
            return (
                path_string(context.path),
                "contains {}, which is an invalid type, it should be {}".format(
                    json.dumps(context.instance),
                    _parse_valid_types_from_validator(context.validator_value)),
            )

        if context.validator == 'uniqueItems':
            return (
                None,
                "contains non unique items, please remove duplicates from {}".format(
                    context.instance),
            )

        if context.validator == 'type':
            types.append(context.validator_value)

    valid_types = _parse_valid_types_from_validator(types)
    return (None, "contains an invalid type, it should be {}".format(valid_types))


def process_service_constraint_errors(error, service_name, version):
    if version == V1:
        if 'image' in error.instance and 'build' in error.instance:
            return (
                "Service {} has both an image and build path specified. "
                "A service can either be built to image or use an existing "
                "image, not both.".format(service_name))

        if 'image' in error.instance and 'dockerfile' in error.instance:
            return (
                "Service {} has both an image and alternate Dockerfile. "
                "A service can either be built to image or use an existing "
                "image, not both.".format(service_name))

    if 'image' not in error.instance and 'build' not in error.instance:
        return (
            "Service {} has neither an image nor a build context specified. "
            "At least one must be provided.".format(service_name))


def process_config_schema_errors(error):
    path = list(error.path)

    if 'id' in error.schema:
        error_msg = handle_error_for_schema_with_id(error, path)
        if error_msg:
            return error_msg

    return handle_generic_error(error, path)


def validate_against_config_schema(config_file):
    schema = load_jsonschema(config_file.version)
    format_checker = FormatChecker(["ports", "expose"])
    validator = Draft4Validator(
        schema,
        resolver=RefResolver(get_resolver_path(), schema),
        format_checker=format_checker)
    handle_errors(
        validator.iter_errors(config_file.config),
        process_config_schema_errors,
        config_file.filename)


def validate_service_constraints(config, service_name, version):
    def handler(errors):
        return process_service_constraint_errors(errors, service_name, version)

    schema = load_jsonschema(version)
    validator = Draft4Validator(schema['definitions']['constraints']['service'])
    handle_errors(validator.iter_errors(config), handler, None)


def get_schema_path():
    return os.path.dirname(os.path.abspath(__file__))


def load_jsonschema(version):
    filename = os.path.join(
        get_schema_path(),
        "config_schema_v{0}.json".format(version))

    with open(filename, "r") as fh:
        return json.load(fh)


def get_resolver_path():
    schema_path = get_schema_path()
    if sys.platform == "win32":
        scheme = "///"
        # TODO: why is this necessary?
        schema_path = schema_path.replace('\\', '/')
    else:
        scheme = "//"
    return "file:{}{}/".format(scheme, schema_path)


def handle_errors(errors, format_error_func, filename):
    """jsonschema returns an error tree full of information to explain what has
    gone wrong. Process each error and pull out relevant information and re-write
    helpful error messages that are relevant.
    """
    errors = list(sorted(errors, key=str))
    if not errors:
        return

    error_msg = '\n'.join(format_error_func(error) for error in errors)
    raise ConfigurationError(
        "The Compose file{file_msg} is invalid because:\n{error_msg}".format(
            file_msg=" '{}'".format(filename) if filename else "",
            error_msg=error_msg))

@@ -1,28 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import sys

DEFAULT_TIMEOUT = 10
HTTP_TIMEOUT = 60
IMAGE_EVENTS = ['delete', 'import', 'pull', 'push', 'tag', 'untag']
IS_WINDOWS_PLATFORM = (sys.platform == "win32")
LABEL_CONTAINER_NUMBER = 'com.docker.compose.container-number'
LABEL_ONE_OFF = 'com.docker.compose.oneoff'
LABEL_PROJECT = 'com.docker.compose.project'
LABEL_SERVICE = 'com.docker.compose.service'
LABEL_VERSION = 'com.docker.compose.version'
LABEL_CONFIG_HASH = 'com.docker.compose.config-hash'

COMPOSEFILE_V1 = '1'
COMPOSEFILE_V2_0 = '2.0'

API_VERSIONS = {
    COMPOSEFILE_V1: '1.21',
    COMPOSEFILE_V2_0: '1.22',
}

API_VERSION_TO_ENGINE_VERSION = {
    API_VERSIONS[COMPOSEFILE_V1]: '1.9.0',
    API_VERSIONS[COMPOSEFILE_V2_0]: '1.10.0'
}

@@ -1,272 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

from functools import reduce

import six

from .const import LABEL_CONTAINER_NUMBER
from .const import LABEL_PROJECT
from .const import LABEL_SERVICE


class Container(object):
    """
    Represents a Docker container, constructed from the output of
    GET /containers/:id:/json.
    """
    def __init__(self, client, dictionary, has_been_inspected=False):
        self.client = client
        self.dictionary = dictionary
        self.has_been_inspected = has_been_inspected
        self.log_stream = None

    @classmethod
    def from_ps(cls, client, dictionary, **kwargs):
        """
        Construct a container object from the output of GET /containers/json.
        """
        name = get_container_name(dictionary)
        if name is None:
            return None

        new_dictionary = {
            'Id': dictionary['Id'],
            'Image': dictionary['Image'],
            'Name': '/' + name,
        }
        return cls(client, new_dictionary, **kwargs)

    @classmethod
    def from_id(cls, client, id):
        return cls(client, client.inspect_container(id), has_been_inspected=True)

    @classmethod
    def create(cls, client, **options):
        response = client.create_container(**options)
        return cls.from_id(client, response['Id'])

    @property
    def id(self):
        return self.dictionary['Id']

    @property
    def image(self):
        return self.dictionary['Image']

    @property
    def image_config(self):
        return self.client.inspect_image(self.image)

    @property
    def short_id(self):
        return self.id[:12]

    @property
    def name(self):
        return self.dictionary['Name'][1:]

    @property
    def service(self):
        return self.labels.get(LABEL_SERVICE)

    @property
    def name_without_project(self):
        project = self.labels.get(LABEL_PROJECT)

        if self.name.startswith('{0}_{1}'.format(project, self.service)):
            return '{0}_{1}'.format(self.service, self.number)
        else:
            return self.name

    @property
    def number(self):
        number = self.labels.get(LABEL_CONTAINER_NUMBER)
        if not number:
            raise ValueError("Container {0} does not have a {1} label".format(
                self.short_id, LABEL_CONTAINER_NUMBER))
        return int(number)

    @property
    def ports(self):
        self.inspect_if_not_inspected()
        return self.get('NetworkSettings.Ports') or {}

    @property
    def human_readable_ports(self):
        def format_port(private, public):
            if not public:
                return private
            return '{HostIp}:{HostPort}->{private}'.format(
                private=private, **public[0])

        return ', '.join(format_port(*item)
                         for item in sorted(six.iteritems(self.ports)))

    @property
    def labels(self):
        return self.get('Config.Labels') or {}

    @property
    def stop_signal(self):
        return self.get('Config.StopSignal')

    @property
    def log_config(self):
        return self.get('HostConfig.LogConfig') or None

    @property
    def human_readable_state(self):
        if self.is_paused:
            return 'Paused'
        if self.is_restarting:
            return 'Restarting'
        if self.is_running:
            return 'Ghost' if self.get('State.Ghost') else 'Up'
        else:
            return 'Exit %s' % self.get('State.ExitCode')

    @property
    def human_readable_command(self):
        entrypoint = self.get('Config.Entrypoint') or []
        cmd = self.get('Config.Cmd') or []
        return ' '.join(entrypoint + cmd)

    @property
    def environment(self):
        def parse_env(var):
            if '=' in var:
                return var.split("=", 1)
            return var, None
        return dict(parse_env(var) for var in self.get('Config.Env') or [])

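The `environment` property splits each `KEY=value` entry on the first `=` only, so values containing `=` survive, and variables declared without a value map to `None`. Standalone sketch:

```python
# Sketch of the parse_env helper inside Container.environment.
def parse_env(var):
    if '=' in var:
        return var.split('=', 1)
    return var, None


env = dict(parse_env(v) for v in ['PATH=/usr/bin', 'DEBUG', 'X=a=b'])
print(env)  # {'PATH': '/usr/bin', 'DEBUG': None, 'X': 'a=b'}
```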
    @property
    def exit_code(self):
        return self.get('State.ExitCode')

    @property
    def is_running(self):
        return self.get('State.Running')

    @property
    def is_restarting(self):
        return self.get('State.Restarting')

    @property
    def is_paused(self):
        return self.get('State.Paused')

    @property
    def log_driver(self):
        return self.get('HostConfig.LogConfig.Type')

    @property
    def has_api_logs(self):
        log_type = self.log_driver
        return not log_type or log_type != 'none'

    def attach_log_stream(self):
        """A log stream can only be attached if the container uses a json-file
        log driver.
        """
        if self.has_api_logs:
            self.log_stream = self.attach(stdout=True, stderr=True, stream=True)

    def get(self, key):
        """Return a value from the container or None if the value is not set.

        :param key: a string using dotted notation for nested dictionary
            lookups
        """
        self.inspect_if_not_inspected()

        def get_value(dictionary, key):
            return (dictionary or {}).get(key)

        return reduce(get_value, key.split('.'), self.dictionary)

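`Container.get` walks nested inspect data with a dotted key, returning `None` instead of raising when an intermediate key is missing, because each step coerces a missing value to `{}` before the next `.get`. A standalone sketch of that `reduce`:

```python
from functools import reduce


# Sketch of the dotted lookup used by Container.get.
def dotted_get(dictionary, key):
    def get_value(d, k):
        return (d or {}).get(k)
    return reduce(get_value, key.split('.'), dictionary)


data = {'State': {'Running': True}, 'NetworkSettings': {'Ports': {}}}
print(dotted_get(data, 'State.Running'))       # True
print(dotted_get(data, 'State.Missing.Deep'))  # None
```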
    def get_local_port(self, port, protocol='tcp'):
        port = self.ports.get("%s/%s" % (port, protocol))
        return "{HostIp}:{HostPort}".format(**port[0]) if port else None

    def get_mount(self, mount_dest):
        for mount in self.get('Mounts'):
            if mount['Destination'] == mount_dest:
                return mount
        return None

    def start(self, **options):
        return self.client.start(self.id, **options)

    def stop(self, **options):
        return self.client.stop(self.id, **options)

    def pause(self, **options):
        return self.client.pause(self.id, **options)

    def unpause(self, **options):
        return self.client.unpause(self.id, **options)

    def kill(self, **options):
        return self.client.kill(self.id, **options)

    def restart(self, **options):
        return self.client.restart(self.id, **options)

    def remove(self, **options):
        return self.client.remove_container(self.id, **options)

    def create_exec(self, command, **options):
        return self.client.exec_create(self.id, command, **options)

    def start_exec(self, exec_id, **options):
        return self.client.exec_start(exec_id, **options)

    def rename_to_tmp_name(self):
        """Rename the container to a hopefully unique temporary container name
        by prepending the short id.
        """
        self.client.rename(
            self.id,
            '%s_%s' % (self.short_id, self.name)
        )

    def inspect_if_not_inspected(self):
        if not self.has_been_inspected:
            self.inspect()

    def wait(self):
        return self.client.wait(self.id)

    def logs(self, *args, **kwargs):
        return self.client.logs(self.id, *args, **kwargs)

    def inspect(self):
        self.dictionary = self.client.inspect_container(self.id)
        self.has_been_inspected = True
        return self.dictionary

    def attach(self, *args, **kwargs):
        return self.client.attach(self.id, *args, **kwargs)

    def __repr__(self):
        return '<Container: %s (%s)>' % (self.name, self.id[:6])

    def __eq__(self, other):
        if type(self) != type(other):
            return False
        return self.id == other.id

    def __hash__(self):
        return self.id.__hash__()

def get_container_name(container):
    if not container.get('Name') and not container.get('Names'):
        return None
    # inspect
    if 'Name' in container:
        return container['Name']
    # ps
    shortest_name = min(container['Names'], key=lambda n: len(n.split('/')))
    return shortest_name.split('/')[-1]

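`get_container_name` prefers the inspect-style `Name` field; for ps-style output it picks the entry with the fewest `/`-separated components from `Names` (the canonical name rather than a link alias like `/other/db`) and strips the leading slash. Standalone sketch:

```python
# Sketch of get_container_name; mirrors the function above.
def get_container_name(container):
    if not container.get('Name') and not container.get('Names'):
        return None
    if 'Name' in container:
        return container['Name']
    shortest_name = min(container['Names'], key=lambda n: len(n.split('/')))
    return shortest_name.split('/')[-1]


print(get_container_name({'Names': ['/myproj_db_1', '/myproj_web_1/db']}))
# myproj_db_1
```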
@@ -1,7 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals


class OperationFailedError(Exception):
    def __init__(self, reason):
        self.msg = reason

(binary image files moved unchanged; sizes: 28 KiB, 69 KiB, 69 KiB, 29 KiB, 61 KiB)
@@ -1,190 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging

from docker.errors import NotFound
from docker.utils import create_ipam_config
from docker.utils import create_ipam_pool

from .config import ConfigurationError


log = logging.getLogger(__name__)


class Network(object):
    def __init__(self, client, project, name, driver=None, driver_opts=None,
                 ipam=None, external_name=None):
        self.client = client
        self.project = project
        self.name = name
        self.driver = driver
        self.driver_opts = driver_opts
        self.ipam = create_ipam_config_from_dict(ipam)
        self.external_name = external_name

    def ensure(self):
        if self.external_name:
            try:
                self.inspect()
                log.debug(
                    'Network {0} declared as external. No new '
                    'network will be created.'.format(self.name)
                )
            except NotFound:
                raise ConfigurationError(
                    'Network {name} declared as external, but could'
                    ' not be found. Please create the network manually'
                    ' using `{command} {name}` and try again.'.format(
                        name=self.external_name,
                        command='docker network create'
                    )
                )
            return

        try:
            data = self.inspect()
            if self.driver and data['Driver'] != self.driver:
                raise ConfigurationError(
                    'Network "{}" needs to be recreated - driver has changed'
                    .format(self.full_name))
            if data['Options'] != (self.driver_opts or {}):
                raise ConfigurationError(
                    'Network "{}" needs to be recreated - options have changed'
                    .format(self.full_name))
        except NotFound:
            driver_name = 'the default driver'
            if self.driver:
                driver_name = 'driver "{}"'.format(self.driver)

            log.info(
                'Creating network "{}" with {}'
                .format(self.full_name, driver_name)
            )

            self.client.create_network(
                name=self.full_name,
                driver=self.driver,
                options=self.driver_opts,
                ipam=self.ipam,
            )

    def remove(self):
        if self.external_name:
            log.info("Network %s is external, skipping", self.full_name)
            return

        log.info("Removing network {}".format(self.full_name))
        self.client.remove_network(self.full_name)

    def inspect(self):
        return self.client.inspect_network(self.full_name)

    @property
    def full_name(self):
        if self.external_name:
            return self.external_name
        return '{0}_{1}'.format(self.project, self.name)


def create_ipam_config_from_dict(ipam_dict):
    if not ipam_dict:
        return None

    return create_ipam_config(
        driver=ipam_dict.get('driver'),
|
|
||||||
pool_configs=[
|
|
||||||
create_ipam_pool(
|
|
||||||
subnet=config.get('subnet'),
|
|
||||||
iprange=config.get('ip_range'),
|
|
||||||
gateway=config.get('gateway'),
|
|
||||||
aux_addresses=config.get('aux_addresses'),
|
|
||||||
)
|
|
||||||
for config in ipam_dict.get('config', [])
|
|
||||||
],
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
def build_networks(name, config_data, client):
|
|
||||||
network_config = config_data.networks or {}
|
|
||||||
networks = {
|
|
||||||
network_name: Network(
|
|
||||||
client=client, project=name, name=network_name,
|
|
||||||
driver=data.get('driver'),
|
|
||||||
driver_opts=data.get('driver_opts'),
|
|
||||||
ipam=data.get('ipam'),
|
|
||||||
external_name=data.get('external_name'),
|
|
||||||
)
|
|
||||||
for network_name, data in network_config.items()
|
|
||||||
}
|
|
||||||
|
|
||||||
if 'default' not in networks:
|
|
||||||
networks['default'] = Network(client, name, 'default')
|
|
||||||
|
|
||||||
return networks
|
|
||||||
|
|
||||||
|
|
||||||
class ProjectNetworks(object):
|
|
||||||
|
|
||||||
def __init__(self, networks, use_networking):
|
|
||||||
self.networks = networks or {}
|
|
||||||
self.use_networking = use_networking
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def from_services(cls, services, networks, use_networking):
|
|
||||||
service_networks = {
|
|
||||||
network: networks.get(network)
|
|
||||||
for service in services
|
|
||||||
for network in get_network_names_for_service(service)
|
|
||||||
}
|
|
||||||
unused = set(networks) - set(service_networks) - {'default'}
|
|
||||||
if unused:
|
|
||||||
log.warn(
|
|
||||||
"Some networks were defined but are not used by any service: "
|
|
||||||
"{}".format(", ".join(unused)))
|
|
||||||
return cls(service_networks, use_networking)
|
|
||||||
|
|
||||||
def remove(self):
|
|
||||||
if not self.use_networking:
|
|
||||||
return
|
|
||||||
for network in self.networks.values():
|
|
||||||
try:
|
|
||||||
network.remove()
|
|
||||||
except NotFound:
|
|
||||||
log.warn("Network %s not found.", network.full_name)
|
|
||||||
|
|
||||||
def initialize(self):
|
|
||||||
if not self.use_networking:
|
|
||||||
return
|
|
||||||
|
|
||||||
for network in self.networks.values():
|
|
||||||
network.ensure()
|
|
||||||
|
|
||||||
|
|
||||||
def get_network_defs_for_service(service_dict):
|
|
||||||
if 'network_mode' in service_dict:
|
|
||||||
return {}
|
|
||||||
networks = service_dict.get('networks', {'default': None})
|
|
||||||
return dict(
|
|
||||||
(net, (config or {}))
|
|
||||||
for net, config in networks.items()
|
|
||||||
)
|
|
||||||
|
|
||||||
|
|
||||||
def get_network_names_for_service(service_dict):
|
|
||||||
return get_network_defs_for_service(service_dict).keys()
|
|
||||||
|
|
||||||
|
|
||||||
def get_networks(service_dict, network_definitions):
|
|
||||||
networks = {}
|
|
||||||
for name, netdef in get_network_defs_for_service(service_dict).items():
|
|
||||||
network = network_definitions.get(name)
|
|
||||||
if network:
|
|
||||||
networks[network.full_name] = netdef
|
|
||||||
else:
|
|
||||||
raise ConfigurationError(
|
|
||||||
'Service "{}" uses an undefined network "{}"'
|
|
||||||
.format(service_dict['name'], name))
|
|
||||||
|
|
||||||
return networks
|
|
||||||
|
|
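The network-selection helpers in this module are pure dict manipulation, so their behavior is easy to check in isolation. A minimal sketch — standalone copies of the `get_network_defs_for_service` logic and the `full_name` naming convention, written out here for illustration rather than imported from Compose:

```python
# Standalone copies of the pure helpers above, for illustration only.

def get_network_defs_for_service(service_dict):
    # A service using network_mode opts out of Compose-managed networks.
    if 'network_mode' in service_dict:
        return {}
    networks = service_dict.get('networks', {'default': None})
    return {net: (config or {}) for net, config in networks.items()}

def full_name(project, name, external_name=None):
    # External networks keep their own name; managed ones get a project prefix.
    return external_name or '{0}_{1}'.format(project, name)

print(get_network_defs_for_service({'name': 'web'}))                        # every service joins 'default'
print(get_network_defs_for_service({'name': 'web', 'network_mode': 'host'}))  # opted out
print(full_name('myapp', 'front'))
```

This mirrors why a bare service with no `networks` key still ends up attached to the project's `default` network.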
@ -1,254 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

import logging
import operator
import sys
from threading import Thread

from docker.errors import APIError
from six.moves import _thread as thread
from six.moves.queue import Empty
from six.moves.queue import Queue

from compose.cli.signals import ShutdownException
from compose.errors import OperationFailedError
from compose.utils import get_output_stream


log = logging.getLogger(__name__)

STOP = object()


def parallel_execute(objects, func, get_name, msg, get_deps=None):
    """Run func on objects in parallel while ensuring that func is
    run on an object only after it has run on all its dependencies.

    get_deps called on an object must return a collection with its dependencies.
    get_name called on an object must return its name.
    """
    objects = list(objects)
    stream = get_output_stream(sys.stderr)

    writer = ParallelStreamWriter(stream, msg)
    for obj in objects:
        writer.initialize(get_name(obj))

    events = parallel_execute_iter(objects, func, get_deps)

    errors = {}
    results = []
    error_to_reraise = None

    for obj, result, exception in events:
        if exception is None:
            writer.write(get_name(obj), 'done')
            results.append(result)
        elif isinstance(exception, APIError):
            errors[get_name(obj)] = exception.explanation
            writer.write(get_name(obj), 'error')
        elif isinstance(exception, OperationFailedError):
            errors[get_name(obj)] = exception.msg
            writer.write(get_name(obj), 'error')
        elif isinstance(exception, UpstreamError):
            writer.write(get_name(obj), 'error')
        else:
            errors[get_name(obj)] = exception
            error_to_reraise = exception

    for obj_name, error in errors.items():
        stream.write("\nERROR: for {} {}\n".format(obj_name, error))

    if error_to_reraise:
        raise error_to_reraise

    return results, errors


def _no_deps(x):
    return []


class State(object):
    """
    Holds the state of a partially-complete parallel operation.

    state.started: objects being processed
    state.finished: objects which have been processed
    state.failed: objects which either failed or whose dependencies failed
    """
    def __init__(self, objects):
        self.objects = objects

        self.started = set()
        self.finished = set()
        self.failed = set()

    def is_done(self):
        return len(self.finished) + len(self.failed) >= len(self.objects)

    def pending(self):
        return set(self.objects) - self.started - self.finished - self.failed


def parallel_execute_iter(objects, func, get_deps):
    """
    Run func on objects in parallel while ensuring that func is
    run on an object only after it has run on all its dependencies.

    Returns an iterator of tuples which look like:

    # if func returned normally when run on object
    (object, result, None)

    # if func raised an exception when run on object
    (object, None, exception)

    # if func raised an exception when run on one of object's dependencies
    (object, None, UpstreamError())
    """
    if get_deps is None:
        get_deps = _no_deps

    results = Queue()
    state = State(objects)

    while True:
        feed_queue(objects, func, get_deps, results, state)

        try:
            event = results.get(timeout=0.1)
        except Empty:
            continue
        # See https://github.com/docker/compose/issues/189
        except thread.error:
            raise ShutdownException()

        if event is STOP:
            break

        obj, _, exception = event
        if exception is None:
            log.debug('Finished processing: {}'.format(obj))
            state.finished.add(obj)
        else:
            log.debug('Failed: {}'.format(obj))
            state.failed.add(obj)

        yield event


def producer(obj, func, results):
    """
    The entry point for a producer thread which runs func on a single object.
    Places a tuple on the results queue once func has either returned or raised.
    """
    try:
        result = func(obj)
        results.put((obj, result, None))
    except Exception as e:
        results.put((obj, None, e))


def feed_queue(objects, func, get_deps, results, state):
    """
    Starts producer threads for any objects which are ready to be processed
    (i.e. they have no dependencies which haven't been successfully processed).

    Shortcuts any objects whose dependencies have failed and places an
    (object, None, UpstreamError()) tuple on the results queue.
    """
    pending = state.pending()
    log.debug('Pending: {}'.format(pending))

    for obj in pending:
        deps = get_deps(obj)

        if any(dep in state.failed for dep in deps):
            log.debug('{} has upstream errors - not processing'.format(obj))
            results.put((obj, None, UpstreamError()))
            state.failed.add(obj)
        elif all(
            dep not in objects or dep in state.finished
            for dep in deps
        ):
            log.debug('Starting producer thread for {}'.format(obj))
            t = Thread(target=producer, args=(obj, func, results))
            t.daemon = True
            t.start()
            state.started.add(obj)

    if state.is_done():
        results.put(STOP)


class UpstreamError(Exception):
    pass


class ParallelStreamWriter(object):
    """Write out messages for operations happening in parallel.

    Each operation has its own line, and ANSI escape codes are used
    to jump to the correct line and write over it.
    """

    def __init__(self, stream, msg):
        self.stream = stream
        self.msg = msg
        self.lines = []

    def initialize(self, obj_index):
        if self.msg is None:
            return
        self.lines.append(obj_index)
        self.stream.write("{} {} ... \r\n".format(self.msg, obj_index))
        self.stream.flush()

    def write(self, obj_index, status):
        if self.msg is None:
            return
        position = self.lines.index(obj_index)
        diff = len(self.lines) - position
        # move up
        self.stream.write("%c[%dA" % (27, diff))
        # erase
        self.stream.write("%c[2K\r" % 27)
        self.stream.write("{} {} ... {}\r".format(self.msg, obj_index, status))
        # move back down
        self.stream.write("%c[%dB" % (27, diff))
        self.stream.flush()


def parallel_operation(containers, operation, options, message):
    parallel_execute(
        containers,
        operator.methodcaller(operation, **options),
        operator.attrgetter('name'),
        message)


def parallel_remove(containers, options):
    stopped_containers = [c for c in containers if not c.is_running]
    parallel_operation(stopped_containers, 'remove', options, 'Removing')


def parallel_start(containers, options):
    parallel_operation(containers, 'start', options, 'Starting')


def parallel_pause(containers, options):
    parallel_operation(containers, 'pause', options, 'Pausing')


def parallel_unpause(containers, options):
    parallel_operation(containers, 'unpause', options, 'Unpausing')


def parallel_kill(containers, options):
    parallel_operation(containers, 'kill', options, 'Killing')


def parallel_restart(containers, options):
    parallel_operation(containers, 'restart', options, 'Restarting')
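The scheduling policy implemented by `feed_queue` and `parallel_execute_iter` — run an object only after all its dependencies have finished, and short-circuit it with an upstream error if any dependency failed — can be sketched sequentially without threads or queues. A simplified illustration of the same policy (this helper and its names are hypothetical, not part of Compose):

```python
# Sequential sketch of the dependency-ordered scheduling used by parallel.py:
# an object runs only after all its dependencies finished; if a dependency
# failed, the object is marked failed with an "upstream" marker instead.

def execute_in_dependency_order(objects, func, get_deps):
    finished, failed, events = set(), set(), []
    pending = list(objects)
    while pending:
        for obj in list(pending):
            deps = get_deps(obj)
            if any(dep in failed for dep in deps):
                # dependency failed: don't run, record an upstream error
                failed.add(obj)
                events.append((obj, None, 'upstream error'))
                pending.remove(obj)
            elif all(dep not in objects or dep in finished for dep in deps):
                # all tracked dependencies done: safe to run now
                try:
                    events.append((obj, func(obj), None))
                    finished.add(obj)
                except Exception as e:
                    failed.add(obj)
                    events.append((obj, None, e))
                pending.remove(obj)
    return events

deps = {'web': ['db'], 'db': [], 'cache': []}
events = execute_in_dependency_order(
    ['web', 'db', 'cache'], lambda name: name.upper(), lambda name: deps[name])
# 'db' and 'cache' run first; 'web' waits for 'db'
```

The real module does the same bookkeeping with a `State` object, but runs each ready object on its own daemon thread and collects `(obj, result, exception)` tuples through a queue.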
@ -1,112 +0,0 @@
from __future__ import absolute_import
from __future__ import unicode_literals

from compose import utils


class StreamOutputError(Exception):
    pass


def stream_output(output, stream):
    is_terminal = hasattr(stream, 'isatty') and stream.isatty()
    stream = utils.get_output_stream(stream)
    all_events = []
    lines = {}
    diff = 0

    for event in utils.json_stream(output):
        all_events.append(event)
        is_progress_event = 'progress' in event or 'progressDetail' in event

        if not is_progress_event:
            print_output_event(event, stream, is_terminal)
            stream.flush()
            continue

        if not is_terminal:
            continue

        # if it's a progress event and we have a terminal, then display the progress bars
        image_id = event.get('id')
        if not image_id:
            continue

        if image_id in lines:
            diff = len(lines) - lines[image_id]
        else:
            lines[image_id] = len(lines)
            stream.write("\n")
            diff = 0

        # move cursor up `diff` rows
        stream.write("%c[%dA" % (27, diff))

        print_output_event(event, stream, is_terminal)

        if 'id' in event:
            # move cursor back down
            stream.write("%c[%dB" % (27, diff))

        stream.flush()

    return all_events


def print_output_event(event, stream, is_terminal):
    if 'errorDetail' in event:
        raise StreamOutputError(event['errorDetail']['message'])

    terminator = ''

    if is_terminal and 'stream' not in event:
        # erase current line
        stream.write("%c[2K\r" % 27)
        terminator = "\r"
    elif 'progressDetail' in event:
        return

    if 'time' in event:
        stream.write("[%s] " % event['time'])

    if 'id' in event:
        stream.write("%s: " % event['id'])

    if 'from' in event:
        stream.write("(from %s) " % event['from'])

    status = event.get('status', '')

    if 'progress' in event:
        stream.write("%s %s%s" % (status, event['progress'], terminator))
    elif 'progressDetail' in event:
        detail = event['progressDetail']
        total = detail.get('total')
        if 'current' in detail and total:
            percentage = float(detail['current']) / float(total) * 100
            stream.write('%s (%.1f%%)%s' % (status, percentage, terminator))
        else:
            stream.write('%s%s' % (status, terminator))
    elif 'stream' in event:
        stream.write("%s%s" % (event['stream'], terminator))
    else:
        stream.write("%s%s\n" % (status, terminator))


def get_digest_from_pull(events):
    for event in events:
        status = event.get('status')
        if not status or 'Digest' not in status:
            continue

        _, digest = status.split(':', 1)
        return digest.strip()
    return None


def get_digest_from_push(events):
    for event in events:
        digest = event.get('aux', {}).get('Digest')
        if digest:
            return digest
    return None
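The progress rendering in `print_output_event` reduces to one computation: `current / total` as a percentage, formatted to one decimal place. A small standalone sketch of that formatting step (the `format_progress` helper is illustrative, not a Compose function):

```python
# Same progress computation as the progressDetail branch above:
# current/total as a percentage, rendered with one decimal place.

def format_progress(status, current, total):
    percentage = float(current) / float(total) * 100
    return '%s (%.1f%%)' % (status, percentage)

print(format_progress('Downloading', 512, 2048))  # → Downloading (25.0%)
```

The `float()` casts matter under Python 2, where `512 / 2048` would otherwise be integer division yielding 0.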
@ -1,563 +0,0 @@
|
||||||
from __future__ import absolute_import
|
|
||||||
from __future__ import unicode_literals
|
|
||||||
|
|
||||||
import datetime
|
|
||||||
import logging
|
|
||||||
import operator
|
|
||||||
from functools import reduce
|
|
||||||
|
|
||||||
import enum
|
|
||||||
from docker.errors import APIError
|
|
||||||
|
|
||||||
from . import parallel
|
|
||||||
from .config import ConfigurationError
|
|
||||||
from .config.config import V1
|
|
||||||
from .config.sort_services import get_container_name_from_network_mode
|
|
||||||
from .config.sort_services import get_service_name_from_network_mode
|
|
||||||
from .const import DEFAULT_TIMEOUT
|
|
||||||
from .const import IMAGE_EVENTS
|
|
||||||
from .const import LABEL_ONE_OFF
|
|
||||||
from .const import LABEL_PROJECT
|
|
||||||
from .const import LABEL_SERVICE
|
|
||||||
from .container import Container
|
|
||||||
from .network import build_networks
|
|
||||||
from .network import get_networks
|
|
||||||
from .network import ProjectNetworks
|
|
||||||
from .service import BuildAction
|
|
||||||
from .service import ContainerNetworkMode
|
|
||||||
from .service import ConvergenceStrategy
|
|
||||||
from .service import NetworkMode
|
|
||||||
from .service import Service
|
|
||||||
from .service import ServiceNetworkMode
|
|
||||||
from .utils import microseconds_from_time_nano
|
|
||||||
from .volume import ProjectVolumes
|
|
||||||
|
|
||||||
|
|
||||||
log = logging.getLogger(__name__)
|
|
||||||
|
|
||||||
|
|
||||||
@enum.unique
|
|
||||||
class OneOffFilter(enum.Enum):
|
|
||||||
include = 0
|
|
||||||
exclude = 1
|
|
||||||
only = 2
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def update_labels(cls, value, labels):
|
|
||||||
if value == cls.only:
|
|
||||||
labels.append('{0}={1}'.format(LABEL_ONE_OFF, "True"))
|
|
||||||
elif value == cls.exclude:
|
|
||||||
labels.append('{0}={1}'.format(LABEL_ONE_OFF, "False"))
|
|
||||||
elif value == cls.include:
|
|
||||||
pass
|
|
||||||
else:
|
|
||||||
raise ValueError("Invalid value for one_off: {}".format(repr(value)))
|
|
||||||
|
|
||||||
|
|
||||||
class Project(object):
|
|
||||||
"""
|
|
||||||
A collection of services.
|
|
||||||
"""
|
|
||||||
def __init__(self, name, services, client, networks=None, volumes=None):
|
|
||||||
self.name = name
|
|
||||||
self.services = services
|
|
||||||
self.client = client
|
|
||||||
self.volumes = volumes or ProjectVolumes({})
|
|
||||||
self.networks = networks or ProjectNetworks({}, False)
|
|
||||||
|
|
||||||
def labels(self, one_off=OneOffFilter.exclude):
|
|
||||||
labels = ['{0}={1}'.format(LABEL_PROJECT, self.name)]
|
|
||||||
|
|
||||||
OneOffFilter.update_labels(one_off, labels)
|
|
||||||
return labels
|
|
||||||
|
|
||||||
@classmethod
|
|
||||||
def from_config(cls, name, config_data, client):
|
|
||||||
"""
|
|
||||||
Construct a Project from a config.Config object.
|
|
||||||
"""
|
|
||||||
use_networking = (config_data.version and config_data.version != V1)
|
|
||||||
networks = build_networks(name, config_data, client)
|
|
||||||
project_networks = ProjectNetworks.from_services(
|
|
||||||
config_data.services,
|
|
||||||
networks,
|
|
||||||
use_networking)
|
|
||||||
volumes = ProjectVolumes.from_config(name, config_data, client)
|
|
||||||
project = cls(name, [], client, project_networks, volumes)
|
|
||||||
|
|
||||||
for service_dict in config_data.services:
|
|
||||||
service_dict = dict(service_dict)
|
|
||||||
if use_networking:
|
|
||||||
service_networks = get_networks(service_dict, networks)
|
|
||||||
else:
|
|
||||||
service_networks = {}
|
|
||||||
|
|
||||||
service_dict.pop('networks', None)
|
|
||||||
links = project.get_links(service_dict)
|
|
||||||
network_mode = project.get_network_mode(
|
|
||||||
service_dict, list(service_networks.keys())
|
|
||||||
)
|
|
||||||
volumes_from = get_volumes_from(project, service_dict)
|
|
||||||
|
|
||||||
if config_data.version != V1:
|
|
||||||
service_dict['volumes'] = [
|
|
||||||
volumes.namespace_spec(volume_spec)
|
|
||||||
for volume_spec in service_dict.get('volumes', [])
|
|
||||||
]
|
|
||||||
|
|
||||||
project.services.append(
|
|
||||||
Service(
|
|
||||||
service_dict.pop('name'),
|
|
||||||
client=client,
|
|
||||||
project=name,
|
|
||||||
use_networking=use_networking,
|
|
||||||
networks=service_networks,
|
|
||||||
links=links,
|
|
||||||
network_mode=network_mode,
|
|
||||||
volumes_from=volumes_from,
|
|
||||||
**service_dict)
|
|
||||||
)
|
|
||||||
|
|
||||||
return project
|
|
||||||
|
|
||||||
@property
|
|
||||||
def service_names(self):
|
|
||||||
return [service.name for service in self.services]
|
|
||||||
|
|
||||||
def get_service(self, name):
|
|
||||||
"""
|
|
||||||
Retrieve a service by name. Raises NoSuchService
|
|
||||||
if the named service does not exist.
|
|
||||||
"""
|
|
||||||
for service in self.services:
|
|
||||||
if service.name == name:
|
|
||||||
return service
|
|
||||||
|
|
||||||
raise NoSuchService(name)
|
|
||||||
|
|
||||||
def validate_service_names(self, service_names):
|
|
||||||
"""
|
|
||||||
Validate that the given list of service names only contains valid
|
|
||||||
services. Raises NoSuchService if one of the names is invalid.
|
|
||||||
"""
|
|
||||||
valid_names = self.service_names
|
|
||||||
for name in service_names:
|
|
||||||
if name not in valid_names:
|
|
||||||
raise NoSuchService(name)
|
|
||||||
|
|
||||||
def get_services(self, service_names=None, include_deps=False):
|
|
||||||
"""
|
|
||||||
Returns a list of this project's services filtered
|
|
||||||
by the provided list of names, or all services if service_names is None
|
|
||||||
or [].
|
|
||||||
|
|
||||||
If include_deps is specified, returns a list including the dependencies for
|
|
||||||
service_names, in order of dependency.
|
|
||||||
|
|
||||||
Preserves the original order of self.services where possible,
|
|
||||||
reordering as needed to resolve dependencies.
|
|
||||||
|
|
||||||
Raises NoSuchService if any of the named services do not exist.
|
|
||||||
"""
|
|
||||||
if service_names is None or len(service_names) == 0:
|
|
||||||
service_names = self.service_names
|
|
||||||
|
|
||||||
unsorted = [self.get_service(name) for name in service_names]
|
|
||||||
services = [s for s in self.services if s in unsorted]
|
|
||||||
|
|
||||||
if include_deps:
|
|
||||||
services = reduce(self._inject_deps, services, [])
|
|
||||||
|
|
||||||
uniques = []
|
|
||||||
[uniques.append(s) for s in services if s not in uniques]
|
|
||||||
|
|
||||||
return uniques
|
|
||||||
|
|
||||||
def get_services_without_duplicate(self, service_names=None, include_deps=False):
|
|
||||||
services = self.get_services(service_names, include_deps)
|
|
||||||
for service in services:
|
|
||||||
service.remove_duplicate_containers()
|
|
||||||
return services
|
|
||||||
|
|
||||||
def get_links(self, service_dict):
|
|
||||||
links = []
|
|
||||||
if 'links' in service_dict:
|
|
||||||
for link in service_dict.get('links', []):
|
|
||||||
if ':' in link:
|
|
||||||
service_name, link_name = link.split(':', 1)
|
|
||||||
else:
|
|
||||||
service_name, link_name = link, None
|
|
||||||
try:
|
|
||||||
links.append((self.get_service(service_name), link_name))
|
|
||||||
except NoSuchService:
|
|
||||||
raise ConfigurationError(
|
|
||||||
'Service "%s" has a link to service "%s" which does not '
|
|
||||||
'exist.' % (service_dict['name'], service_name))
|
|
||||||
del service_dict['links']
|
|
||||||
return links
|
|
||||||
|
|
||||||
def get_network_mode(self, service_dict, networks):
|
|
||||||
network_mode = service_dict.pop('network_mode', None)
|
|
||||||
if not network_mode:
|
|
||||||
if self.networks.use_networking:
|
|
||||||
return NetworkMode(networks[0]) if networks else NetworkMode('none')
|
|
||||||
return NetworkMode(None)
|
|
||||||
|
|
||||||
service_name = get_service_name_from_network_mode(network_mode)
|
|
||||||
if service_name:
|
|
||||||
return ServiceNetworkMode(self.get_service(service_name))
|
|
||||||
|
|
||||||
container_name = get_container_name_from_network_mode(network_mode)
|
|
||||||
if container_name:
|
|
||||||
try:
|
|
||||||
return ContainerNetworkMode(Container.from_id(self.client, container_name))
|
|
||||||
except APIError:
|
|
||||||
raise ConfigurationError(
|
|
||||||
"Service '{name}' uses the network stack of container '{dep}' which "
|
|
||||||
"does not exist.".format(name=service_dict['name'], dep=container_name))
|
|
||||||
|
|
||||||
return NetworkMode(network_mode)
|
|
||||||
|
|
||||||
def start(self, service_names=None, **options):
|
|
||||||
containers = []
|
|
||||||
|
|
||||||
def start_service(service):
|
|
||||||
service_containers = service.start(quiet=True, **options)
|
|
||||||
containers.extend(service_containers)
|
|
||||||
|
|
||||||
services = self.get_services(service_names)
|
|
||||||
|
|
||||||
def get_deps(service):
|
|
||||||
return {self.get_service(dep) for dep in service.get_dependency_names()}
|
|
||||||
|
|
||||||
parallel.parallel_execute(
|
|
||||||
services,
|
|
||||||
start_service,
|
|
||||||
operator.attrgetter('name'),
|
|
||||||
'Starting',
|
|
||||||
get_deps)
|
|
||||||
|
|
||||||
return containers
|
|
||||||
|
|
||||||
def stop(self, service_names=None, one_off=OneOffFilter.exclude, **options):
|
|
||||||
containers = self.containers(service_names, one_off=one_off)
|
|
||||||
|
|
||||||
def get_deps(container):
|
|
||||||
# actually returning inversed dependencies
|
|
||||||
return {other for other in containers
|
|
||||||
if container.service in
|
|
||||||
self.get_service(other.service).get_dependency_names()}
|
|
||||||
|
|
||||||
parallel.parallel_execute(
|
|
||||||
containers,
|
|
||||||
operator.methodcaller('stop', **options),
|
|
||||||
operator.attrgetter('name'),
|
|
||||||
'Stopping',
|
|
||||||
get_deps)
|
|
||||||
|
|
||||||
def pause(self, service_names=None, **options):
|
|
||||||
containers = self.containers(service_names)
|
|
||||||
parallel.parallel_pause(reversed(containers), options)
|
|
||||||
return containers
|
|
||||||
|
|
||||||
def unpause(self, service_names=None, **options):
|
|
||||||
containers = self.containers(service_names)
|
|
||||||
parallel.parallel_unpause(containers, options)
|
|
||||||
return containers
|
|
||||||
|
|
||||||
def kill(self, service_names=None, **options):
|
|
||||||
parallel.parallel_kill(self.containers(service_names), options)
|
|
||||||
|
|
||||||
def remove_stopped(self, service_names=None, one_off=OneOffFilter.exclude, **options):
|
|
||||||
parallel.parallel_remove(self.containers(
|
|
||||||
service_names, stopped=True, one_off=one_off
|
|
||||||
), options)
|
|
||||||
|
|
||||||
def down(self, remove_image_type, include_volumes, remove_orphans=False):
|
|
||||||
self.stop(one_off=OneOffFilter.include)
|
|
||||||
self.find_orphan_containers(remove_orphans)
|
|
||||||
self.remove_stopped(v=include_volumes, one_off=OneOffFilter.include)
|
|
||||||
|
|
||||||
self.networks.remove()
|
|
||||||
|
|
||||||
if include_volumes:
|
|
||||||
self.volumes.remove()
|
|
||||||
|
|
||||||
self.remove_images(remove_image_type)
|
|
||||||
|
|
||||||
def remove_images(self, remove_image_type):
|
|
||||||
for service in self.get_services():
|
|
||||||
service.remove_image(remove_image_type)
|
|
||||||
|
|
||||||
def restart(self, service_names=None, **options):
|
|
||||||
containers = self.containers(service_names, stopped=True)
|
|
||||||
parallel.parallel_restart(containers, options)
|
|
||||||
return containers
|
|
||||||
|
|
||||||
def build(self, service_names=None, no_cache=False, pull=False, force_rm=False):
|
|
||||||
for service in self.get_services(service_names):
|
|
||||||
if service.can_be_built():
|
|
||||||
service.build(no_cache, pull, force_rm)
|
|
||||||
else:
|
|
||||||
log.info('%s uses an image, skipping' % service.name)
|
|
||||||
|
|
||||||
def create(
|
|
||||||
self,
|
|
||||||
service_names=None,
|
|
||||||
strategy=ConvergenceStrategy.changed,
|
|
||||||
do_build=BuildAction.none,
|
|
||||||
):
|
|
||||||
services = self.get_services_without_duplicate(service_names, include_deps=True)
|
|
||||||
|
|
||||||
for svc in services:
|
|
||||||
svc.ensure_image_exists(do_build=do_build)
|
|
||||||
plans = self._get_convergence_plans(services, strategy)
|
|
||||||
|
|
||||||
for service in services:
|
|
||||||
service.execute_convergence_plan(
|
|
||||||
plans[service.name],
|
|
||||||
detached=True,
|
|
||||||
start=False)
|
|
||||||
|
|
||||||
    def events(self, service_names=None):
        def build_container_event(event, container):
            time = datetime.datetime.fromtimestamp(event['time'])
            time = time.replace(
                microsecond=microseconds_from_time_nano(event['timeNano']))
            return {
                'time': time,
                'type': 'container',
                'action': event['status'],
                'id': container.id,
                'service': container.service,
                'attributes': {
                    'name': container.name,
                    'image': event['from'],
                },
                'container': container,
            }

        service_names = set(service_names or self.service_names)
        for event in self.client.events(
            filters={'label': self.labels()},
            decode=True
        ):
            # The first part of this condition is a guard against some events
            # broadcast by swarm that don't have a status field.
            # See https://github.com/docker/compose/issues/3316
            if 'status' not in event or event['status'] in IMAGE_EVENTS:
                # We don't receive any image events because labels aren't applied
                # to images
                continue

            # TODO: get labels from the API v1.22, see GitHub issue 2618
            try:
                # this can fail if the container has been removed
                container = Container.from_id(self.client, event['id'])
            except APIError:
                continue
            if container.service not in service_names:
                continue
            yield build_container_event(event, container)

    def up(self,
           service_names=None,
           start_deps=True,
           strategy=ConvergenceStrategy.changed,
           do_build=BuildAction.none,
           timeout=DEFAULT_TIMEOUT,
           detached=False,
           remove_orphans=False):

        warn_for_swarm_mode(self.client)

        self.initialize()
        self.find_orphan_containers(remove_orphans)

        services = self.get_services_without_duplicate(
            service_names,
            include_deps=start_deps)

        for svc in services:
            svc.ensure_image_exists(do_build=do_build)
        plans = self._get_convergence_plans(services, strategy)

        def do(service):
            return service.execute_convergence_plan(
                plans[service.name],
                timeout=timeout,
                detached=detached
            )

        def get_deps(service):
            return {self.get_service(dep) for dep in service.get_dependency_names()}

        results, errors = parallel.parallel_execute(
            services,
            do,
            operator.attrgetter('name'),
            None,
            get_deps
        )
        if errors:
            raise ProjectError(
                'Encountered errors while bringing up the project.'
            )

        return [
            container
            for svc_containers in results
            if svc_containers is not None
            for container in svc_containers
        ]

    def initialize(self):
        self.networks.initialize()
        self.volumes.initialize()

    def _get_convergence_plans(self, services, strategy):
        plans = {}

        for service in services:
            updated_dependencies = [
                name
                for name in service.get_dependency_names()
                if name in plans and
                plans[name].action in ('recreate', 'create')
            ]

            if updated_dependencies and strategy.allows_recreate:
                log.debug('%s has upstream changes (%s)',
                          service.name,
                          ", ".join(updated_dependencies))
                plan = service.convergence_plan(ConvergenceStrategy.always)
            else:
                plan = service.convergence_plan(strategy)

            plans[service.name] = plan

        return plans

    def pull(self, service_names=None, ignore_pull_failures=False):
        for service in self.get_services(service_names, include_deps=False):
            service.pull(ignore_pull_failures)

    def push(self, service_names=None, ignore_push_failures=False):
        for service in self.get_services(service_names, include_deps=False):
            service.push(ignore_push_failures)

    def _labeled_containers(self, stopped=False, one_off=OneOffFilter.exclude):
        return list(filter(None, [
            Container.from_ps(self.client, container)
            for container in self.client.containers(
                all=stopped,
                filters={'label': self.labels(one_off=one_off)})])
        )

    def containers(self, service_names=None, stopped=False, one_off=OneOffFilter.exclude):
        if service_names:
            self.validate_service_names(service_names)
        else:
            service_names = self.service_names

        containers = self._labeled_containers(stopped, one_off)

        def matches_service_names(container):
            return container.labels.get(LABEL_SERVICE) in service_names

        return [c for c in containers if matches_service_names(c)]

    def find_orphan_containers(self, remove_orphans):
        def _find():
            containers = self._labeled_containers()
            for ctnr in containers:
                service_name = ctnr.labels.get(LABEL_SERVICE)
                if service_name not in self.service_names:
                    yield ctnr
        orphans = list(_find())
        if not orphans:
            return
        if remove_orphans:
            for ctnr in orphans:
                log.info('Removing orphan container "{0}"'.format(ctnr.name))
                ctnr.kill()
                ctnr.remove(force=True)
        else:
            log.warning(
                'Found orphan containers ({0}) for this project. If '
                'you removed or renamed this service in your compose '
                'file, you can run this command with the '
                '--remove-orphans flag to clean it up.'.format(
                    ', '.join(["{}".format(ctnr.name) for ctnr in orphans])
                )
            )

    def _inject_deps(self, acc, service):
        dep_names = service.get_dependency_names()

        if len(dep_names) > 0:
            dep_services = self.get_services(
                service_names=list(set(dep_names)),
                include_deps=True
            )
        else:
            dep_services = []

        dep_services.append(service)
        return acc + dep_services


def get_volumes_from(project, service_dict):
    volumes_from = service_dict.pop('volumes_from', None)
    if not volumes_from:
        return []

    def build_volume_from(spec):
        if spec.type == 'service':
            try:
                return spec._replace(source=project.get_service(spec.source))
            except NoSuchService:
                pass

        if spec.type == 'container':
            try:
                container = Container.from_id(project.client, spec.source)
                return spec._replace(source=container)
            except APIError:
                pass

        raise ConfigurationError(
            "Service \"{}\" mounts volumes from \"{}\", which is not the name "
            "of a service or container.".format(
                service_dict['name'],
                spec.source))

    return [build_volume_from(vf) for vf in volumes_from]


def warn_for_swarm_mode(client):
    info = client.info()
    if info.get('Swarm', {}).get('LocalNodeState') == 'active':
        log.warn(
            "The Docker Engine you're using is running in swarm mode.\n\n"
            "Compose does not use swarm mode to deploy services to multiple nodes in a swarm. "
            "All containers will be scheduled on the current node.\n\n"
            "To deploy your application across the swarm, "
            "use the bundle feature of the Docker experimental build.\n\n"
            "More info:\n"
            "https://docs.docker.com/compose/bundles\n"
        )


class NoSuchService(Exception):
    def __init__(self, name):
        self.name = name
        self.msg = "No such service: %s" % self.name

    def __str__(self):
        return self.msg


class ProjectError(Exception):
    def __init__(self, msg):
        self.msg = msg