Restructured the release notes to make them cleaner, and
included new release notes for v0.9 and v1.0.
Signed-off-by: Joao Fernandes <joao.fernandes@docker.com>
* LDAP Settings material
* Closes #651
* Combining work into single branch
* Updating index
* Fixing code examples
* Adding in note from Johnny's feedback
* Menu positions, HA terms
* Copy edit of users
* Adding deploy an application
* Updating the overview page to include more text
* Updating with comments from review page
* Updating the constraints and port
* Layout check/fix
Signed-off-by: Mary Anthony <mary@docker.com>
During the UCP beta we had created a quickstart guide
that included installation prerequisites (like ports that
need to be open), and the installation procedure.
Now we're breaking that information into two different documents.
This makes the information more accessible to someone who just
wants to prepare for the installation.
A few tweaks on check
Update with comments from Dan
Last comments; fix some build breaks
Tighten language, add reconfigure info
Signed-off-by: Mary Anthony <mary@docker.com>
The original implementation assumed that if you brought your own server
cert, then user certs would be signed by the same CA, but this would
make things quite challenging for large enterprises, who would be forced
to manage certs for users or, worse, buy them from the same external CA.
Since the UCP controller already trusts multiple root CAs, there's no
reason we can't add another.
Prior to this change the CA stored next to the server cert was the
"full" trust chain including the root CAs. With this change, we flip
that around and use the swarm cert CA for the controller. This is a
tiny bit messy, because we have to be careful not to accidentally wind
up with that CA on the cluster components other than the controller,
so I've enhanced our integration tests to cover this case specifically
and make sure we don't mistakenly open the system up. In doing so,
I had to refine the integration test so all the servers were signed by
the same CA (the prior code was sloppy and used a fresh CA for each HA
node, which meant the bundles broke on the replica controllers).
In the future, we'll likely have intermediaries with different
privileges/scopes, and may revisit the multiple root CA model, so this
seems like a reasonable compromise to keep the code churn down for now.
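As a minimal sketch of why this works (file names here are placeholders), a PEM trust bundle can simply hold several roots side by side, and a cert signed by any one of them verifies against the bundle:

```
# Illustrative only; file names are placeholders.
# A PEM trust bundle can hold several root CAs concatenated together:
cat swarm-root-ca.pem customer-root-ca.pem > controller-trust.pem

# A user cert signed by either root now verifies against the bundle:
openssl verify -CAfile controller-trust.pem user-cert.pem
```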
- Fix and close #194
- Fix and close #425
- Fix and close #417
- Fix and close #420
- Fix and close #422
- Adding in documentation build scripts
- Fix and close #431
- Fix and close #438 and #429
- Work on #441
- Adding in commands reference
- Updating all the options to tables
- Updating per Vivek #498
- Adding Vivek's last suggestions
Signed-off-by: Mary Anthony <mary@docker.com>
This adds an option in the user pull-down to generate a support dump.
While not totally ideal from a UX perspective, we don't really have
a page for admin tasks, so this'll have to do for now. With this
we can remove the rather ugly docs we have explaining how to get
support dumps via curl.
Non-admin users will get the standard permission-denied page, as with
all the other admin-only tasks we have.
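For reference, the flow being removed looked something like this (a hypothetical sketch; the endpoint path and port are assumptions, not the documented API):

```
# Hypothetical sketch only; the endpoint path and port are assumptions.
# Authenticate with an admin client bundle and save the dump to disk:
curl -k --cert cert.pem --key key.pem \
    -o docker-support.tar.gz \
    https://ucp-controller:443/api/support
```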
Banjot added some changes to the language around the use of SANs. We have to be clear that the SAN can be either a private or public IP; it all depends on what URL users type in their browser to connect to their UCP controller. In most cases, I expect customers will use private IP addresses or a private IP network they create on AWS. Most will not expose UCP on public IP addresses, since UCP is likely not a public-facing service; it's an internal Ops service. Public IPs are what allow AWS instances to talk to each other, but that's not how most users will configure their IP networking on AWS for a UCP deployment that's internal to their organization.
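One way to double-check which SANs a controller cert actually presents (standard openssl usage; the host and port are placeholders):

```
# Show the Subject Alternative Names presented by the controller:
openssl s_client -connect 10.0.0.10:443 </dev/null 2>/dev/null \
    | openssl x509 -noout -text \
    | grep -A1 'Subject Alternative Name'
```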
Entering Evan's comments
Signed-off-by: Mary Anthony <mary@docker.com>
This allows a user to add an existing public key to client bundles. This
is used where the CA is externally managed (e.g. Verisign) and we do not
have the authority to sign certs.
Fixes #367
Signed-off-by: Tom Barlow <tomwbarlow@gmail.com>
Creating specs directory; may be moved later
Adding fix for Issue #348
Adding in updates for networking
Updating with Dan's comments: removing old -beta
Updating networking after talking to Madhu
Updated install with HA as optional
Moved HA spec into specs
Did "customer-facing" HA page
Renamed server > controller in docs
Entering comments from reviewers
Signed-off-by: Mary Anthony <mary@docker.com>
This refines our logging and auditing a bit to make
things easier to search for within Kibana (or similar external systems).
See ./docs/logging.md for more details.
This exposes a generalized configuration API based on dividing the
configuration space up into subsystems. Within a given subsystem,
the configuration is read and written as one JSON blob.
This also does some slight tweaks to the logging subsystem based on this
new API structure.
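As a rough sketch of the shape of this (URL paths and JSON fields are assumptions, not the final API), a subsystem's config is fetched and replaced as one blob:

```
# Rough sketch; URL paths and JSON fields are assumptions.
# Read the whole config blob for the "logging" subsystem:
curl --cert cert.pem --key key.pem \
    https://ucp-controller/api/config/logging

# Replace it wholesale with a new blob:
curl --cert cert.pem --key key.pem -X PUT \
    -d '{"level": "INFO", "protocol": "tcp", "host": "syslog.example.com:514"}' \
    https://ucp-controller/api/config/logging
```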
This wires Orca up to support remote syslog endpoints.
The configuration is driven through the KV store, and
requires manually running curl commands (we can add a UI/API
for this later).
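Assuming an etcd-style KV API fronted by the controller (the key path, port, and JSON shape here are assumptions), the manual step looks roughly like:

```
# Illustrative only; the key path, port, and JSON shape are assumptions.
# Point the cluster at a remote syslog endpoint via the KV store:
curl --cacert ca.pem --cert cert.pem --key key.pem \
    -X PUT \
    -d value='{"host": "syslog.example.com:514", "protocol": "udp"}' \
    https://ucp-controller:12379/v2/keys/orca/v1/config/logging
```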
This also lays the foundation for a general watching facility for
configuration. In a subsequent change I'll update this to address other
global configuration for the daemon.
This revamps the product and image names. After merging this change,
the bootstrapper image will be known as "dockerorca/ucp" since it is the
primary image customers interact with. The controller will be known as
"dockerorca/ucp-controller" and the corresponding container names are
"ucp" and "ucp-controller". Once we get closer to GA, we'll move the
images under the "docker" org, so the product name will flow nicely from
that "docker/ucp" for the bootstrapping tool, and "docker/ucp-controller"
for the server image.
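In practice that means invocations pick up the new names, along these lines (a sketch; the flags shown are illustrative, not exhaustive):

```
# Sketch of the renamed bootstrapper in use; flags are illustrative.
docker run --rm -it \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --name ucp dockerorca/ucp install

# The controller container it starts shows up as:
docker ps --filter name=ucp-controller
```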
This change re-wires the way we have CFSSL hooked up so
that it requires mutual TLS to access the service.
Instead of using command line arguments, and thus relying on environment
variables from linking, this change also switches to registering the
CAs via KV store entries.
The current CFSSL implementation does not support mutual TLS natively,
so I've leveraged socat and a proxy container (much like we do for
docker) in the interest of expediency (so under the covers it's still
a link between cfssl and the proxy). Once upstream supports mutual TLS
(or if we decide to fork/patch it) we can drop the proxy and eliminate
all the links.
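The proxy amounts to terminating mutual TLS in front of CFSSL, roughly like this (cert paths and the listen port are placeholders; 8888 is CFSSL's default port):

```
# Illustrative sketch; cert paths and the listen port are placeholders.
# socat terminates mutual TLS (verify=1 rejects clients whose certs
# aren't signed by cafile) and forwards plaintext to cfssl over the link:
socat OPENSSL-LISTEN:12381,fork,reuseaddr,cert=/certs/server.pem,key=/certs/key.pem,cafile=/certs/ca.pem,verify=1 \
    TCP:cfssl:8888
```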
We may have scenarios where we need to show users how to mitigate problems
by accessing the KV store directly. This short doc shows how they can
do it with admin bundles.
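The gist (assuming an etcd-backed KV store; the port and key prefix are assumptions) is to reuse the TLS material that ships in the admin bundle:

```
# Illustrative only; the KV port and key prefix are assumptions.
# From an unzipped admin bundle, reuse its TLS material to query the KV store:
curl --cacert ca.pem --cert cert.pem --key key.pem \
    'https://ucp-controller:12379/v2/keys/orca/v1?recursive=true'
```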