- Fix and close #194
- Fix and close #425
- Fix and close #417
- Fix and close #420
- Fix and close #422
- Adding in documentation build scripts
- Fix and close #431
- Fix and close #438 and #429
- Work on #441
- Adding in commands reference
- Updating all the options to tables
- Updating per Vivek #498
- Adding Vivek's last suggestions
Signed-off-by: Mary Anthony <mary@docker.com>
This adds an option in the user pull-down to generate a support dump.
While not totally ideal from a UX perspective, we don't really have
a page for admin tasks, so this will have to do for now. With this
we can remove the rather ugly docs explaining how to get
support dumps via curl.
Non-admin users will get the standard permission-denied page, as with
all the other admin-only tasks we have.
Banjot added some changes to the language around the use of SANs. We have to be clear that the SAN can be either a private or a public IP; it all depends on what URL users type in their browser to connect to their UCP controller. In most cases, I expect customers will use private IP addresses or a private IP network they create on AWS. Most will not expose UCP on public IP addresses, since UCP is likely not a public-facing service; it's an internal Ops service. Public IPs are what allow AWS instances to talk to each other, but that's not how most users will configure their IP networking on AWS for a UCP deployment that's internal to their organization.
Entering Evan's comments
Signed-off-by: Mary Anthony <mary@docker.com>
This allows a user to add an existing public key to client bundles. This
is used where the CA is externally managed (e.g. Verisign) and we do not
have the authority to sign certs.
Fixes #367
Signed-off-by: Tom Barlow <tomwbarlow@gmail.com>
Creating specs directory; may be moved later
Adding fix for Issue #348
Adding in updates for networking
Updating with Dan's comments: removing old -beta
Updating networking after talking to Madhu
Updated install with HA as optional
Moved HA spec into specs
Did "customer-facing" HA page
Renamed server > controller in docs
Entering comments from reviewers
Signed-off-by: Mary Anthony <mary@docker.com>
This refines our logging and auditing a bit to make
things easier to search for within Kibana (or similar external systems).
See ./docs/logging.md for more details.
This exposes a generalized configuration API based on dividing the
configuration space into subsystems. Within a given subsystem,
the configuration is read and written as one JSON blob.
This also makes some slight tweaks to the logging subsystem based on this
new API structure.
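As an illustration only (the /api/config/<subsystem> path and the single-blob
shape below are assumptions drawn from the description above, not the
documented endpoints), reading and writing one subsystem's configuration
could look roughly like this:

    // Hypothetical sketch: each subsystem's configuration travels as one
    // JSON blob. The /api/config/<subsystem> path is a placeholder.
    package main

    import (
        "bytes"
        "fmt"
        "io"
        "log"
        "net/http"
    )

    func getConfig(base, subsystem string) ([]byte, error) {
        resp, err := http.Get(fmt.Sprintf("%s/api/config/%s", base, subsystem))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        return io.ReadAll(resp.Body)
    }

    func putConfig(base, subsystem string, blob []byte) error {
        req, err := http.NewRequest(http.MethodPut,
            fmt.Sprintf("%s/api/config/%s", base, subsystem), bytes.NewReader(blob))
        if err != nil {
            return err
        }
        req.Header.Set("Content-Type", "application/json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            return err
        }
        return resp.Body.Close()
    }

    func main() {
        // Fetch the logging subsystem's blob as a usage example.
        blob, err := getConfig("https://ucp-controller", "logging")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println(string(blob))
    }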
This wires Orca up to support remote syslog endpoints.
The configuration is driven through the KV store, and
requires manually running curl commands (we can add UI/API
for this later.)
This also lays the foundation for a general watching facility for
configuration. In a subsequent change I'll update this to address other
global configuration for the daemon.
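For illustration only, a sketch of what the manual step might look like if
driven from Go instead of curl, assuming an etcd-style v2 keys API; the key
path and value format are placeholders, not the actual schema:

    // Hypothetical sketch: push a remote syslog endpoint into the KV store.
    // Key path and value are placeholders for illustration only.
    // (In practice the KV store also requires the TLS client certs from an
    // admin bundle, omitted here for brevity.)
    package main

    import (
        "log"
        "net/http"
        "net/url"
        "strings"
    )

    func main() {
        kvAddr := "https://kv-host:12379"       // placeholder KV address
        key := "/v2/keys/orca/v1/config/syslog" // placeholder key path
        value := url.Values{"value": {"udp://loghost:514"}}

        req, err := http.NewRequest(http.MethodPut, kvAddr+key,
            strings.NewReader(value.Encode()))
        if err != nil {
            log.Fatal(err)
        }
        req.Header.Set("Content-Type", "application/x-www-form-urlencoded")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            log.Fatal(err)
        }
        resp.Body.Close()
    }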
This revamps the product and image names. After merging this change,
the bootstrapper image will be known as "dockerorca/ucp" since it is the
primary image customers interact with. The controller will be known as
"dockerorca/ucp-controller" and the corresponding container names are
"ucp" and "ucp-controller". Once we get closer to GA, we'll move the
images under the "docker" org, so the names will flow nicely from the
product: "docker/ucp" for the bootstrapping tool and "docker/ucp-controller"
for the server image.
This change re-wires the way we have CFSSL hooked up so
that it requires mutual TLS to access the service.
Instead of using command line arguments, and thus relying on environment
variables from linking, this change also switches to registering the
CAs via KV store entries.
The current CFSSL implementation does not support mutual TLS natively,
so I've leveraged socat and a proxy container (much like we do for
docker) in the interest of expediency. (So under the covers it's still
a link between cfssl and the proxy.) Once upstream supports mutual TLS
(or if we decide to fork/patch it) we can drop the proxy and eliminate
all the links.
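As a rough sketch of the mutual-TLS requirement itself (not the actual proxy
configuration), a Go server that refuses any client not presenting a
certificate signed by our CA looks like this; file paths are placeholders:

    // Minimal sketch of mutual TLS: the server only accepts clients whose
    // certificate chains to the trusted CA. Paths are placeholders.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        caPEM, err := os.ReadFile("ca.pem") // placeholder CA path
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        srv := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                ClientCAs:  pool,
                ClientAuth: tls.RequireAndVerifyClientCert, // enforce mutual TLS
            },
        }
        log.Fatal(srv.ListenAndServeTLS("server.pem", "server-key.pem"))
    }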
We may have scenarios where we need to show users how to mitigate problems
by accessing the KV store directly. This short doc shows how they can
do it with admin bundles.
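A hedged sketch of that access pattern from Go (the KV address and key path
are placeholders; the cert, key, and CA files come from the admin bundle):

    // Illustrative sketch: query the KV store directly using the TLS
    // material from an admin client bundle. Address and key are placeholders.
    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        cert, err := tls.LoadX509KeyPair("cert.pem", "key.pem") // from the bundle
        if err != nil {
            log.Fatal(err)
        }
        caPEM, err := os.ReadFile("ca.pem") // from the bundle
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{
                Certificates: []tls.Certificate{cert},
                RootCAs:      pool,
            },
        }}
        resp, err := client.Get("https://kv-host:12379/v2/keys/orca/v1") // placeholder
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }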
It turns out that our support dump logic is *really* fast and compact.
Even on a large node (hundreds of containers and thousands of images)
it runs in ~10 seconds and weighs in at a few hundred K. Since we're
running all the dumps in parallel, there's really no need for the added
complexity of saving them to a DB.
This change revamps and simplifies the support dump API. Now you simply
POST to the API endpoint, and it will stream the full zip file containing
all the nodes' payloads within. If a node is unreachable, times out,
or has some other catastrophic problem, the contents for that node will
be an error message instead of the normal tar.gz bundle.
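For illustration (the /api/support endpoint name is an assumption, and the
TLS setup from a client bundle is omitted), a client that POSTs for a dump
and streams the resulting zip to disk could look like:

    // Hypothetical sketch: request a support dump and stream the zip
    // response straight to a file. The endpoint path is a placeholder.
    package main

    import (
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        resp, err := http.Post("https://ucp-controller/api/support", "application/json", nil)
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()

        out, err := os.Create("support-dump.zip")
        if err != nil {
            log.Fatal(err)
        }
        defer out.Close()

        // Copy the streamed zip (one tar.gz or error message per node inside).
        if _, err := io.Copy(out, resp.Body); err != nil {
            log.Fatal(err)
        }
    }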
I've tested this with a swarm of multiple nodes, confirmed the dumps
match up to the hosts, and the system handles offline nodes, reporting
an error message within the bundle. (It does take a long time in the
failure cases due to a bug in swarm that's slated to be fixed in 1.9,
but curl doesn't give up, so this still works fine.)