This exposes a generalized configuration API based on dividing the
configuration space into subsystems. Within a given subsystem, the
configuration is read and written as a single JSON blob.
This also makes some slight tweaks to the logging subsystem to fit this
new API structure.
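Roughly, the idea is one configuration document per subsystem, read and
replaced as a whole rather than field by field. A hypothetical sketch of
consuming that (the key path, host, port, and cert file names are
placeholders, and an etcd-style v2 HTTP API on the KV store is assumed):

    # Each subsystem's settings live under their own key (with sibling
    # keys for other subsystems) and come back as one JSON document
    curl -sS --cacert ca.pem --cert cert.pem --key key.pem \
        "https://<kv-host>:<kv-port>/v2/keys/orca/v1/config/logging"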
This wires Orca up to support remote syslog endpoints.
The configuration is driven through the KV store, and
requires manually running curl commands (we can add a UI/API
for this later).
This also lays the foundation for a general watching facility for
configuration. In a subsequent change I'll update this to address other
global configuration for the daemon.
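For example, pointing the logging subsystem at a remote syslog server and
then watching the key for further changes might look like the following
(the key path and field names are assumptions rather than the real schema,
and the "?wait=true" long poll assumes an etcd-style KV API):

    # Write the remote syslog endpoint into the logging subsystem's blob
    curl -sS --cacert ca.pem --cert cert.pem --key key.pem \
        -X PUT "https://<kv-host>:<kv-port>/v2/keys/orca/v1/config/logging" \
        --data-urlencode 'value={"level":"info","protocol":"tcp","host":"syslog.example.com:514"}'

    # Block until the key changes again (etcd-style long poll)
    curl -sS --cacert ca.pem --cert cert.pem --key key.pem \
        "https://<kv-host>:<kv-port>/v2/keys/orca/v1/config/logging?wait=true"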
This revamps the product and image names. After merging this change,
the bootstrapper image will be known as "dockerorca/ucp" since it is the
primary image customers interact with. The controller will be known as
"dockerorca/ucp-controller" and the corresponding container names are
"ucp" and "ucp-controller". Once we get closer to GA, we'll move the
images under the "docker" org, so the product name will flow nicely from
there: "docker/ucp" for the bootstrapping tool, and "docker/ucp-controller"
for the server image.
This change re-wires the way we have CFSSL hooked up so
that it requires mutual TLS to access the service.
This change also switches to registering the CAs via KV store entries
instead of passing them as command-line arguments, which relied on
environment variables from linking.
The current CFSSL implementation does not support mutual TLS natively,
so in the interest of expediency I've leveraged socat and a proxy
container (much like we do for docker); under the covers it's still a
link between cfssl and the proxy. Once upstream supports mutual TLS
(or if we decide to fork/patch it), we can drop the proxy and eliminate
all the links.
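Concretely, the proxy side boils down to something like this (the port,
file paths, and link alias are illustrative, not the actual entrypoint):

    # Terminate mutual TLS in the proxy container, then forward plaintext
    # over the private link to the cfssl container
    socat \
        OPENSSL-LISTEN:8888,reuseaddr,fork,cert=/etc/cfssl/server.pem,key=/etc/cfssl/server-key.pem,cafile=/etc/cfssl/ca.pem,verify=1 \
        TCP:cfssl:8888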
We may have scenarios where we need to show users how to mitigate problems
by accessing the KV store directly. This short doc shows how they can
do it with admin bundles.
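For reference, a minimal sketch of that flow, assuming the bundle unpacks
to the usual ca.pem/cert.pem/key.pem client certs, with placeholder values
for the bundle name, KV address, port, and key path:

    # Unpack an admin bundle and use its client certs to hit the KV store
    unzip admin-bundle.zip -d bundle && cd bundle
    curl -sS --cacert ca.pem --cert cert.pem --key key.pem \
        "https://<controller-host>:<kv-port>/v2/keys/orca/v1/config/logging"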
It turns out that our support dump logic is *really* fast and compact.
Even on a large node (hundreds of containers and thousands of images)
it runs in ~10 seconds and weighs in at a few hundred KB. Since we're
running all the dumps in parallel, there's really no need for the added
complexity of saving them to a DB.
This change revamps and simplifies the support dump API. Now you simply
POST to the API endpoint, and it will stream the full zip file containing
all the nodes' payloads within. If a node is unreachable, times out,
or has some other catastrophic problem, the contents for that node will
be an error message instead of the normal tar.gz bundle.
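As a sketch of using it (the endpoint path is a placeholder, not
necessarily the real route):

    # Stream the full support dump as a zip over mutual TLS
    curl -sS --cacert ca.pem --cert cert.pem --key key.pem \
        -X POST -o support-dump.zip \
        "https://<controller-host>/api/support"

    # One entry per node: its tar.gz payload, or an error message for
    # nodes that were unreachable or timed out
    unzip -l support-dump.zip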
I've tested this with a swarm of multiple nodes, confirmed the dumps
match up with their hosts, and verified the system handles offline nodes
by reporting an error message within the bundle. (It does take a long
time in the failure cases due to a bug in swarm that's slated to be
fixed in 1.9, but curl doesn't give up, so this still works fine.)
updates per @dsheets' comments
Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>
copyedits per @dsheets
Signed-off-by: Victoria Bialas <victoria.bialas@docker.com>
I've added the '--rm' argument to the 'script method' and 'add your user to docker group' examples to match the first instruction. Purely a consistency thing, and so new users don't end up with a few stray hello-world containers. Ta.
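For reference, the test command in those examples now reads along these lines:

    # --rm removes the test container once it exits, so repeated runs
    # don't leave stray hello-world containers behind
    docker run --rm hello-world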