docs snapshot for crossplane version `master`

This commit is contained in:
Crossplane 2019-05-16 18:00:44 +00:00
parent f5f9937edc
commit 379a27324a
24 changed files with 2236 additions and 40 deletions


@ -1,17 +1,28 @@
# Crossplane
Crossplane is an open source multicloud control plane. It introduces workload and resource abstractions on top of existing managed services that enable a high degree of workload portability across cloud providers. A single Crossplane instance enables the provisioning and full-lifecycle management of services and infrastructure across a wide range of providers, offerings, vendors, regions, and clusters. Crossplane offers a universal API for cloud computing, a workload scheduler, and a set of smart controllers that can automate work across clouds.
<h4 align="center"><img src="media/arch.png" alt="Crossplane" height="400"></h4>
Crossplane presents a declarative management style API that covers a wide range of portable abstractions including databases, message queues, buckets, data pipelines, serverless, clusters, and many more coming. It's based on the declarative resource model of the popular [Kubernetes](https://github.com/kubernetes/kubernetes) project, and applies many of the lessons learned in container orchestration to multicloud workload and resource orchestration.
Crossplane supports a clean separation of concerns between developers and administrators. Developers define workloads without having to worry about implementation details, environment constraints, and policies. Administrators can define environment specifics and policies. This separation of concerns leads to a higher degree of reusability and reduces complexity.
Crossplane includes a workload scheduler that can factor a number of criteria including capabilities, availability, reliability, cost, regions, and performance while deploying workloads and their resources. The scheduler works alongside specialized resource controllers to ensure policies set by administrators are honored.
## Architecture and Vision
The full architecture and vision of the Crossplane project is described in depth in the [architecture document](https://docs.google.com/document/d/1whncqdUeU2cATGEJhHvzXWC9xdK29Er45NJeoemxebo/edit?usp=sharing). It is the best place to learn more about how Crossplane fits into the Kubernetes ecosystem, the intended use cases, and comparisons to existing projects.
## Table of Contents
* [Quick Start Guide](quick-start.md)
* [Getting Started](getting-started.md)
* [Installing Crossplane](install-crossplane.md)
* [Adding Your Cloud Providers](cloud-providers.md)
* [Deploying Workloads](deploy.md)
* [Running Resources](running-resources.md)
* [Troubleshooting](troubleshoot.md)
* [Concepts](concepts.md)
* [FAQs](faqs.md)
* [Contributing](contributing.md)


@ -0,0 +1,14 @@
---
title: Adding Your Cloud Providers
toc: true
weight: 330
indent: true
---
# Adding Your Cloud Providers
In order for Crossplane to be able to manage resources across all your clouds, you will need to add your cloud provider credentials to Crossplane.
Use the links below for specific instructions to add each of the following cloud providers:
* [Google Cloud Platform (GCP)](cloud-providers/gcp/gcp-provider.md)
* [Microsoft Azure](cloud-providers/azure/azure-provider.md)
* [Amazon Web Services (AWS)](cloud-providers/aws/aws-provider.md)


@ -0,0 +1,33 @@
# Adding Amazon Web Services (AWS) to Crossplane
In this guide, we will walk through the steps necessary to configure your AWS account to be ready for integration with Crossplane.
## AWS Credentials
### Option 1: aws Command Line Tool
If you have already installed and configured the [`aws` command line tool](https://aws.amazon.com/cli/), you can simply find your AWS credentials file in `~/.aws/credentials`.
### Option 2: AWS Console in Web Browser
If you do not have the `aws` tool installed, you can alternatively log into the [AWS console](https://aws.amazon.com/console/) and export the credentials.
The steps to follow below are from the [AWS SDK for GO](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/setting-up.html):
1. Open the IAM console.
1. On the navigation menu, choose Users.
1. Choose your IAM user name (not the check box).
1. Open the Security credentials tab, and then choose Create access key.
1. To see the new access key, choose Show. Your credentials resemble the following:
- Access key ID: AKIAIOSFODNN7EXAMPLE
- Secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
1. To download the key pair, choose Download .csv file.
Then convert the `*.csv` file to the below format and save it to `~/.aws/credentials`:
```
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
After the steps above, you should have your AWS credentials stored in `~/.aws/credentials`.
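If you later embed these credentials in a Kubernetes `Secret` for Crossplane, they are typically base64-encoded first. A minimal sketch, assuming GNU `base64`; the file path and variable name here are illustrative, and the sample values are the AWS documentation examples from above:

```shell
# Write a sample credentials file (stand-in for ~/.aws/credentials).
cat > /tmp/aws-credentials <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

# Base64-encode the whole file for embedding in a Secret manifest.
# (-w0 disables line wrapping; on macOS, plain `base64 < file` behaves the same.)
BASE64ENCODED_AWS_PROVIDER_CREDS=$(base64 -w0 /tmp/aws-credentials)
echo "$BASE64ENCODED_AWS_PROVIDER_CREDS"
```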


@ -0,0 +1,61 @@
# Adding Microsoft Azure to Crossplane
In this guide, we will walk through the steps necessary to configure your Azure account to be ready for integration with Crossplane.
The general steps we will take are summarized below:
* Create a new service principal (account) that Crossplane will use to create and manage Azure resources
* Add the required permissions to the account
* Consent to the permissions using an administrator account
## Preparing your Microsoft Azure Account
In order to manage resources in Azure, you must provide credentials for an Azure service principal that Crossplane can use to authenticate.
This assumes that you have already [set up the Azure CLI client](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest) with your credentials.
Create a JSON file that contains all the information needed to connect and authenticate to Azure:
```console
# create service principal with Owner role
az ad sp create-for-rbac --sdk-auth --role Owner > crossplane-azure-provider-key.json
```
Take note of the `clientID` value from the JSON file that we just created, and save it to an environment variable:
```console
export AZURE_CLIENT_ID=<clientId value from json file>
```
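Rather than copying the value by hand, you can pull `clientId` out of the JSON file. A hedged sketch using `sed` (if you have `jq`, `jq -r .clientId` works too); the sample file below is fabricated for illustration and has the same shape as the `--sdk-auth` output:

```shell
# Stand-in for crossplane-azure-provider-key.json (values are made up).
cat > /tmp/crossplane-azure-provider-key.json <<'EOF'
{
  "clientId": "11111111-2222-3333-4444-555555555555",
  "clientSecret": "example-secret",
  "tenantId": "66666666-7777-8888-9999-000000000000"
}
EOF

# Extract the clientId field and export it for the following commands.
export AZURE_CLIENT_ID=$(sed -n 's/.*"clientId": *"\([^"]*\)".*/\1/p' /tmp/crossplane-azure-provider-key.json)
echo "$AZURE_CLIENT_ID"
```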
Now add the required permissions to the service principal that will allow it to manage the necessary resources in Azure:
```console
# add required Azure Active Directory permissions
az ad app permission add --id ${AZURE_CLIENT_ID} --api 00000002-0000-0000-c000-000000000000 --api-permissions 1cda74f2-2616-4834-b122-5cb1b07f8a59=Role 78c8a3c8-a07e-4b9e-af1b-b5ccab50a175=Role
# grant (activate) the permissions
az ad app permission grant --id ${AZURE_CLIENT_ID} --api 00000002-0000-0000-c000-000000000000 --expires never
```
You might see an error similar to the following, but that is OK; the permissions should still have been granted:
```console
Operation failed with status: 'Conflict'. Details: 409 Client Error: Conflict for url: https://graph.windows.net/e7985bc4-a3b3-4f37-b9d2-fa256023b1ae/oauth2PermissionGrants?api-version=1.6
```
After these steps are completed, you should have the following file on your local filesystem:
* `crossplane-azure-provider-key.json`
## Grant Consent to Application Permissions
One more step is required to fully grant the permissions to the new service principal.
From the Azure Portal, you need to grant consent for the permissions using an admin account.
The steps to perform this action are listed below:
1. `echo ${AZURE_CLIENT_ID}` and note this ID value
1. Navigate to the Azure Portal: https://portal.azure.com
1. Click `Azure Active Directory`, or find it in the `All services` list
1. Click `App registrations (Preview)`
1. Click on the application from the list where the application (client) ID matches the value from step 1
1. Click `API permissions`
1. Click `Grant admin consent for Default Directory`
1. Click `Yes`


@ -0,0 +1,99 @@
# Adding Google Cloud Platform (GCP) to Crossplane
In this guide, we will walk through the steps necessary to configure your GCP account to be ready for integration with Crossplane.
The general steps we will take are summarized below:
* Create a new example project that all resources will be deployed to
* Enable required APIs such as Kubernetes and CloudSQL
* Create a service account that will be used to perform GCP operations from Crossplane
* Assign necessary roles to the service account
* Enable billing
For your convenience, the specific steps to accomplish those tasks are provided for you below using either the `gcloud` command line tool, or the GCP console in a web browser.
You can choose whichever you are more comfortable with.
## Option 1: gcloud Command Line Tool
If you have the `gcloud` tool installed, you can run the commands below from the example directory.
Instructions for installing `gcloud` can be found in the [Google docs](https://cloud.google.com/sdk/install).
```bash
# list your organizations (if applicable), take note of the specific organization ID you want to use
# if you have more than one organization (not common)
gcloud organizations list
# create a new project
export EXAMPLE_PROJECT_NAME=crossplane-example-123
gcloud projects create $EXAMPLE_PROJECT_NAME --enable-cloud-apis [--organization ORGANIZATION_ID]
# record the PROJECT_ID value of the newly created project
export EXAMPLE_PROJECT_ID=$(gcloud projects list --filter NAME=$EXAMPLE_PROJECT_NAME --format="value(PROJECT_ID)")
# enable Kubernetes API
gcloud --project $EXAMPLE_PROJECT_ID services enable container.googleapis.com
# enable CloudSQL API
gcloud --project $EXAMPLE_PROJECT_ID services enable sqladmin.googleapis.com
# create service account
gcloud --project $EXAMPLE_PROJECT_ID iam service-accounts create example-123 --display-name "Crossplane Example"
# export service account email
export EXAMPLE_SA="example-123@$EXAMPLE_PROJECT_ID.iam.gserviceaccount.com"
# create service account key (this will create a `crossplane-gcp-provider-key.json` file in your current working directory)
gcloud --project $EXAMPLE_PROJECT_ID iam service-accounts keys create --iam-account $EXAMPLE_SA crossplane-gcp-provider-key.json
# assign roles
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/iam.serviceAccountUser"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/container.admin"
```
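One way to sanity-check the downloaded key is that its `client_email` field matches the service account email exported as `$EXAMPLE_SA` above. A sketch on a stand-in key file (the real file is produced by the `gcloud ... keys create` command above and contains more fields):

```shell
# Stand-in for crossplane-gcp-provider-key.json.
cat > /tmp/crossplane-gcp-provider-key.json <<'EOF'
{
  "type": "service_account",
  "project_id": "crossplane-example-123",
  "client_email": "example-123@crossplane-example-123.iam.gserviceaccount.com"
}
EOF

# Pull out client_email; it should equal the address in $EXAMPLE_SA.
SA_EMAIL=$(sed -n 's/.*"client_email": *"\([^"]*\)".*/\1/p' /tmp/crossplane-gcp-provider-key.json)
echo "$SA_EMAIL"
```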
## Option 2: GCP Console in a Web Browser
If you chose to use the `gcloud` tool, you can skip this section entirely.
Create a GCP example project which we will use to host our example GKE cluster, as well as our example CloudSQL instance.
- Login into [GCP Console](https://console.cloud.google.com)
- Create a new project (either stand alone or under existing organization)
- Create Example Service Account
- Navigate to: [Create Service Account](https://console.cloud.google.com/iam-admin/serviceaccounts)
- `Service Account Name`: type "example"
- `Service Account ID`: leave auto assigned
- `Service Account Description`: type "Crossplane example"
- Click `Create` button
- This should advance to the next section `2 Grant this service account to project (optional)`
- We will assign this account 3 roles:
- `Service Account User`
- `Cloud SQL Admin`
- `Kubernetes Engine Admin`
- Click `Create` button
- This should advance to the next section `3 Grant users access to this service account (optional)`
- We don't need to assign any user or admin roles to this account for the example purposes, so you can leave following two fields blank:
- `Service account users role`
- `Service account admins role`
- Next, we will create and export service account key
- Click `+ Create Key` button.
- This should open a `Create Key` side panel
- Select `json` for the Key type (should be selected by default)
- Click `Create`
- This should show `Private key saved to your computer` confirmation dialog
- You also should see `crossplane-example-1234-[suffix].json` file in your browser's Download directory
- Save (copy or move) this file into example (this) directory, with new name `crossplane-gcp-provider-key.json`
- Enable `Cloud SQL API`
- Navigate to [Cloud SQL Admin API](https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview)
- Click `Enable`
- Enable `Kubernetes Engine API`
- Navigate to [Kubernetes Engine API](https://console.developers.google.com/apis/api/container.googleapis.com/overview)
- Click `Enable`
## Enable Billing
No matter what option you chose to configure the previous steps, you will need to enable billing for your account in order to create and use Kubernetes clusters with GKE.
- Go to [GCP Console](https://console.cloud.google.com)
- Select example project
- Click `Enable Billing`
- Go to [Kubernetes Clusters](https://console.cloud.google.com/kubernetes/list)
- Click `Enable Billing`

docs/master/concepts.md (new file, 58 lines)

@ -0,0 +1,58 @@
---
title: Concepts
toc: true
weight: 410
---
# Concepts
## Control Plane
Crossplane is an open source multicloud control plane that consists of smart controllers working across clouds to enable workload portability, provisioning, and full-lifecycle management of infrastructure across a wide range of providers, vendors, regions, and offerings.
The control plane presents a declarative management style API that covers a wide range of portable abstractions that facilitate these goals across disparate environments, clusters, regions, and clouds.
Crossplane can be thought of as a higher-order orchestrator across cloud providers.
For convenience, Crossplane can run directly on top of an existing Kubernetes cluster without requiring any changes, even though Crossplane does not necessarily schedule or run any containers on the host cluster.
## Resources and Workloads
In Crossplane, a *resource* represents an external piece of infrastructure ranging from low level services like clusters and servers, to higher level infrastructure like databases, message queues, buckets, and more.
Resources are represented as persistent objects within Crossplane, and they typically manage one or more pieces of external infrastructure within a cloud provider or cloud offering.
Resources can also represent local or in-cluster services.
We model *workloads* as schedulable units of work that the user intends to run on a cloud provider.
Crossplane will support multiple types of workloads including container and serverless.
You can think of workloads as units that run **your** code and applications.
Every type of workload has a different kind of payload.
For example, a container workload can include a set of objects that will be deployed on a managed Kubernetes cluster, a reference to a helm chart, etc.
A serverless workload could include a function that will run on a serverless managed service.
Workloads can contain requirements for where and how the workload can run, including regions, providers, affinity, cost, and others that the scheduler can use when assigning the workload.
## Resource Claims and Resource Classes
To support workload portability we expose the concept of a resource claim and a resource class.
A resource claim is a persistent object that captures the desired configuration of a resource from the perspective of a workload or application.
Its configuration is cloud-provider and cloud-offering independent, and it is free of implementation and/or environmental details.
A resource claim can be thought of as a request for an actual resource and is typically created by a developer or application owner.
A resource class is configuration that contains implementation details specific to a certain environment or deployment, and policies related to a kind of resource.
A ResourceClass acts as a template with implementation details and policy for resources that will be dynamically provisioned by the workload at deployment time.
A resource class is typically created by an admin or infrastructure owner.
## Dynamic and Static Provisioning
A resource can be statically or dynamically provisioned.
Static provisioning is when an administrator creates the resource manually.
They set the configuration required to provision and manage the corresponding external resource within a cloud provider or cloud offering.
Once provisioned, resources are available to be bound to resource claims.
Dynamic provisioning is when a resource claim does not find a matching resource and provisions a new one instead.
The newly provisioned resource is automatically bound to the resource claim.
To enable dynamic provisioning the administrator needs to create one or more resource class objects.
## Connection Secrets
Workloads reference all the resources they consume in their `resources` section.
This helps Crossplane set up connectivity between the workload and resources, and create objects that hold connection information.
For example, for a database provisioned and managed by Crossplane, a secret will be created that contains a connection string, user, and password.
This secret will be propagated to the target cluster so that it can be used by the workload.
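Kubernetes stores `Secret` data base64-encoded, so the connection details land in the target cluster as encoded fields. A small illustration (the endpoint value here is made up):

```shell
# How a connection endpoint would appear inside the Secret's data section...
ENDPOINT_ENCODED=$(printf '%s' '10.0.0.5:3306' | base64)
echo "$ENDPOINT_ENCODED"

# ...and how a consumer (or a kubectl user) would decode it back.
ENDPOINT=$(printf '%s' "$ENDPOINT_ENCODED" | base64 -d)
echo "$ENDPOINT"
```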


@ -5,9 +5,7 @@ weight: 710
---
# Contributing
Crossplane is a community driven project and we welcome contributions.
That includes [opening issues](https://github.com/crossplaneio/crossplane/issues) for improvements you'd like to see as well as submitting changes to the code base.
For more information about the contribution process, please see the [contribution guide](https://github.com/crossplaneio/crossplane/blob/master/CONTRIBUTING.md).

docs/master/deploy.md (new file, 136 lines)

@ -0,0 +1,136 @@
---
title: Deploying Workloads
toc: true
weight: 340
indent: true
---
# Deploying Workloads
## Guides
This section will walk you through how to deploy workloads to various cloud provider environments in a highly portable way.
For detailed instructions on how to deploy workloads to your cloud provider of choice, please visit the following guides:
* [Deploying a Workload on Google Cloud Platform (GCP)](workloads/gcp/wordpress-gcp.md)
* [Deploying a Workload on Microsoft Azure](workloads/azure/wordpress-azure.md)
* [Deploying a Workload on Amazon Web Services](workloads/aws/wordpress-aws.md)
## Workload Overview
A workload is a schedulable unit of work that contains a payload and defines requirements for how it should run and what resources it will consume.
This helps Crossplane set up connectivity between the workload and its resources, and make intelligent decisions about where and how to provision and manage those resources in their entirety.
Crossplane's scheduler is responsible for deploying the workload to a target cluster, which in this guide we will also deploy with Crossplane in your chosen cloud provider.
This walkthrough also demonstrates Crossplane's concept of a clean "separation of concerns" between developers and administrators.
Developers define workloads without having to worry about implementation details, environment constraints, and policies.
Administrators can define environment specifics, and policies.
The separation of concern leads to a higher degree of reusability and reduces complexity.
During this walkthrough, we will assume two separate identities:
1. Administrator (cluster or cloud) - responsible for setting up credentials and defining resource classes
2. Application Owner (developer) - responsible for defining and deploying the application and its dependencies
## Workload Example
### Dependency Resource
Let's take a closer look at a dependency resource that a workload will declare:
```yaml
## WordPress MySQL Database Instance
apiVersion: storage.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
  name: demo
  namespace: default
spec:
  classReference:
    name: standard-mysql
    namespace: crossplane-system
  engineVersion: "5.7"
```
This requests the creation of a `MySQLInstance` running MySQL version 5.7, which will be fulfilled by the `standard-mysql` `ResourceClass`.
Note that the application developer is not aware of any further specifics when it comes to the `MySQLInstance` beyond their requested engine version.
This enables highly portable workloads, since the environment specific details of the database are defined by the administrator in a `ResourceClass`.
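For instance, the claim above could be saved to a file and handed to `kubectl`. A sketch that only writes the manifest locally (actually creating the claim requires a cluster with Crossplane installed):

```shell
# Save the MySQLInstance claim from above to a local file.
cat > /tmp/mysql-claim.yaml <<'EOF'
apiVersion: storage.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
  name: demo
  namespace: default
spec:
  classReference:
    name: standard-mysql
    namespace: crossplane-system
  engineVersion: "5.7"
EOF

# On a cluster with Crossplane installed, the developer would then run:
# kubectl create -f /tmp/mysql-claim.yaml
grep -c 'kind: MySQLInstance' /tmp/mysql-claim.yaml
```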
### Workload
Now let's look at the workload itself, which will reference the dependency resource from above, as well as other information such as the target cluster to deploy to.
```yaml
## WordPress Workload
apiVersion: compute.crossplane.io/v1alpha1
kind: Workload
metadata:
  name: demo
  namespace: default
spec:
  resources:
  - name: demo
    secretName: demo
  targetCluster:
    name: demo-gke-cluster
    namespace: crossplane-system
  targetDeployment:
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: wordpress
      labels:
        app: wordpress
    spec:
      selector:
        app: wordpress
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: wordpress
        spec:
          containers:
          - name: wordpress
            image: wordpress:4.6.1-apache
            env:
            - name: WORDPRESS_DB_HOST
              valueFrom:
                secretKeyRef:
                  name: demo
                  key: endpoint
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: demo
                  key: username
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: demo
                  key: password
            ports:
            - containerPort: 80
              name: wordpress
  targetNamespace: demo
  targetService:
    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress
    spec:
      ports:
      - port: 80
      selector:
        app: wordpress
      type: LoadBalancer
```
This `Workload` definition contains multiple components that inform Crossplane how to deploy the workload and its resources:
- Resources: list of the resources required by the payload application
- TargetCluster: the cluster where the payload application and all its requirements should be deployed
- TargetNamespace: the namespace on the target cluster
- Workload Payload:
- TargetDeployment
- TargetService

docs/master/faqs.md (new file, 48 lines)

@ -0,0 +1,48 @@
---
title: FAQs
toc: true
weight: 610
---
# Frequently Asked Questions (FAQs)
### Where did the name Crossplane come from?
Crossplane is the fusing of cross-cloud control plane. We wanted to use a noun that refers to the entity responsible for connecting different cloud providers and acts as control plane across them. Cross implies “cross-cloud” and “plane” brings in “control plane”.
### What's up with popsicle?
We believe in a multi-flavor cloud.
### Why is Upbound open sourcing this project? What are Upbound's monetization plans?
Upbound's mission is to create a more open cloud-computing platform, with more choice and less lock-in. We believe Crossplane is an important step towards this vision and that it's going to take a village to solve this problem. We believe that the multicloud control plane is a new category of open source software, and it will ultimately disrupt closed source and proprietary models. Upbound aspires to be a commercial provider of a more open cloud-computing platform.
### What kind of governance model will be used for Crossplane?
Crossplane will be an independent project and we plan on making it a community driven project, not a vendor driven one. It will have an independent brand, GitHub organization, and an open governance model. It will not be tied to a single organization or individual.
### Will Crossplane be donated to an open source foundation?
We don't know yet. We are open to doing so, but we'd like to revisit this after the project has gotten some end-user community traction.
### Does using multicloud mean you will use the lowest common denominator across clouds?
Not necessarily. There are numerous best of breed cloud offerings that run on multiple clouds. For example, CockroachDB and ElasticSearch are world class implementations of platform software and run well on cloud providers. They compete with managed services offered by a cloud provider. We believe that by having an open control plane for them to integrate with, and providing a common API, CLI and UI for all of these services, that more of these offerings will exist and get first-class experience in the cloud.
### How are resources and claims related to PersistentVolumes in Kubernetes?
We modeled resource claims and classes after PersistentVolumes and PersistentVolumeClaims in Kubernetes. We believe many of the lessons learned from managing volumes in Kubernetes apply to managing resources within cloud providers. One notable exception is that we avoided creating a plugin model within Crossplane.
### How is workload scheduling related to pod scheduling in Kubernetes?
We modeled workload scheduling after the Pod scheduler in Kubernetes. We believe many of the lessons learned from Pod scheduling apply to scheduling workloads across cloud providers.
### Can I use Crossplane to consistently provision and manage multiple Kubernetes clusters?
Crossplane includes a portable API for Kubernetes clusters that will include common configuration including node pools, auto-scalers, taints, admission controllers, etc. These will be applied to the specific implementations within the cloud providers like EKS, GKE and AKS. We see the Kubernetes Cluster API as something that will be used by administrators and not developers.
### Other attempts at building a higher level API on top of a multitude of inconsistent lower level APIs have not been successful, will Crossplane not have the same issues?
We agree that building a consistent higher level API on top of a multitude of inconsistent lower level APIs is well known to be fraught with peril (e.g. dumbing down to the lowest common denominator, or resulting in an API so loosely defined as to make it impossible to practically develop real portable applications on top of it).
Crossplane follows a different approach here. The portable API extracts the pieces that are common across all implementations, and from the perspective of the workload. The rest of the implementation details are captured in full fidelity by the admin in resource classes. The combination of the two is what results in full configuration that can be deployed. We believe this to be a reasonable tradeoff that avoids the dumbing down to lowest common denominator problem, while still enabling portability.


@ -0,0 +1,12 @@
---
title: Getting Started
toc: true
weight: 310
---
# Getting Started
* [Installing Crossplane](install-crossplane.md)
* [Adding Your Cloud Providers](cloud-providers.md)
* [Deploying Workloads](deploy.md)
* [Running Resources](running-resources.md)
* [Troubleshooting](troubleshoot.md)


@ -0,0 +1,485 @@
# Deploying GitLab in GCP
This user guide will walk you through GitLab application deployment using Crossplane managed resources and
the official GitLab Helm chart.
## Pre-requisites
* [Kubernetes cluster](https://kubernetes.io/docs/setup/)
* For example [Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/), minimum version `v0.28+`
* [Helm](https://docs.helm.sh/using_helm/), minimum version `v2.9.1+`.
* [jq](https://stedolan.github.io/jq/) - command-line JSON processor, `v1.5+`
## Preparation
### Crossplane
- Install Crossplane using the [Crossplane Installation Guide](../install-crossplane.md)
- Obtain [Cloud Provider Credentials](../cloud-providers.md)
#### GCP Provider
It is essential to make sure that the GCP Service Account used by the Crossplane GCP Provider has the following roles:
- Cloud SQL Admin
- Kubernetes Engine Admin
- Service Account User
- Cloud Memorystore Redis Admin
- Storage Admin
Using GCP Service Account `gcp-credentials.json`:
- Generate BASE64ENCODED_GCP_PROVIDER_CREDS encoded value:
```bash
cat gcp-credentials.json | base64
#cat gcp-credentials.json | base64 -w0 # linux variant
```
- Update [provider.yaml](../../cluster/examples/gitlab/gcp/provider.yaml) replacing `BASE64ENCODED_GCP_PROVIDER_CREDS`
- Update [provider.yaml](../../cluster/examples/gitlab/gcp/provider.yaml) replacing `PROJECT_ID` with `project_id` from the credentials.json
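The `project_id` needed for `provider.yaml` can be pulled straight out of the credentials file. A hedged sketch using `sed` on a stand-in file (real key files contain more fields; `jq -r .project_id` is an alternative if `jq` is available):

```shell
# Stand-in for gcp-credentials.json (value is illustrative).
cat > /tmp/gcp-credentials.json <<'EOF'
{"type": "service_account", "project_id": "your-project-123456"}
EOF

# Extract project_id for substitution into provider.yaml.
PROJECT_ID=$(sed -n 's/.*"project_id": *"\([^"]*\)".*/\1/p' /tmp/gcp-credentials.json)
echo "$PROJECT_ID"
```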
#### GCS
It is recommended to create a separate GCP Service Account dedicated to storage operations only, i.e. with a reduced IAM role set, for example: `StorageAdmin` only.
Follow the same steps as for the GCP credentials to create and obtain `gcs-credentials.json`.
- Generate BASE64ENCODED_GCS_PROVIDER_CREDS encoded value:
```bash
cat gcs-credentials.json | base64 -w0
```
Alternatively, you can use `BASE64ENCODED_GCP_PROVIDER_CREDS` in place of `BASE64ENCODED_GCS_PROVIDER_CREDS`.
- Update [provider.yaml](../../cluster/examples/gitlab/gcp/provider.yaml) replacing `BASE64ENCODED_GCS_PROVIDER_CREDS`
##### GCS Interoperability
- Navigate to: https://console.cloud.google.com/storage/settings in your GCP project
- Click on `Interoperability` Tab
- Using `Interoperable storage access keys` generate `BASE64ENCODED` values
- `BASE64ENCODED_GCS_INTEROP_ACCESS_KEY`
- `BASE64ENCODED_GCS_INTEROP_SECRET`
- Update [provider.yaml](../../cluster/examples/gitlab/gcp/provider.yaml) replacing:
- `BASE64ENCODED_GCS_INTEROP_ACCESS_KEY`
- `BASE64ENCODED_GCS_INTEROP_SECRET`
#### Create
- Create GCP provider:
```bash
kubectl create -f cluster/examples/gitlab/gcp/provider.yaml
```
- Verify the GCP provider was successfully registered with Crossplane:
```bash
kubectl get providers.gcp.crossplane.io -n crossplane-system
kubectl get secrets -n crossplane-system
```
- You should see output similar to:
```bash
NAME PROJECT-ID AGE
demo-gcp your-project-123456 11m
NAME TYPE DATA AGE
default-token-974db kubernetes.io/service-account-token 3 2d16h
demo-gcp-creds Opaque 1 103s
demo-gcs-creds Opaque 3 2d11h
```
#### Resource Classes
Create the Crossplane resource classes needed to provision managed resources for the GitLab application:
```bash
kubectl create -f cluster/examples/gitlab/gcp/resource-classes/
```
```
resourceclass.core.crossplane.io/standard-gcp-bucket created
resourceclass.core.crossplane.io/standard-gcp-cluster created
resourceclass.core.crossplane.io/standard-gcp-postgres created
resourceclass.core.crossplane.io/standard-gcp-redis created
```
Verify
```bash
kubectl get resourceclasses -n crossplane-system
```
```
NAME PROVISIONER PROVIDER-REF RECLAIM-POLICY AGE
standard-gcp-bucket bucket.storage.gcp.crossplane.io/v1alpha1 demo-gcp Delete 17s
standard-gcp-cluster gkecluster.compute.gcp.crossplane.io/v1alpha1 demo-gcp Delete 17s
standard-gcp-postgres cloudsqlinstance.database.gcp.crossplane.io/v1alpha1 demo-gcp Delete 17s
standard-gcp-redis cloudmemorystoreinstance.cache.gcp.crossplane.io/v1alpha1 demo-gcp Delete 17s
```
#### Resource Claims
Provision the managed resources required by the GitLab application using Crossplane resource claims.
Note: you can use a separate command for each claim file, or create all claims in one command, like so:
```bash
kubectl create -Rf cluster/examples/gitlab/gcp/resource-claims/
```
```
bucket.storage.crossplane.io/gitlab-artifacts created
bucket.storage.crossplane.io/gitlab-backups-tmp created
bucket.storage.crossplane.io/gitlab-backups created
bucket.storage.crossplane.io/gitlab-externaldiffs created
bucket.storage.crossplane.io/gitlab-lfs created
bucket.storage.crossplane.io/gitlab-packages created
bucket.storage.crossplane.io/gitlab-pseudonymizer created
bucket.storage.crossplane.io/gitlab-registry created
bucket.storage.crossplane.io/gitlab-uploads created
kubernetescluster.compute.crossplane.io/gitlab-gke created
postgresqlinstance.storage.crossplane.io/gitlab-postgresql created
rediscluster.cache.crossplane.io/gitlab-redis created
```
Verify that the resource claims were successfully provisioned.
```bash
# check status of kubernetes cluster
kubectl get -f cluster/examples/gitlab/gcp/resource-claims/kubernetes.yaml
kubectl get -f cluster/examples/gitlab/gcp/resource-claims/postgres.yaml
kubectl get -f cluster/examples/gitlab/gcp/resource-claims/redis.yaml
```
```
NAME STATUS CLUSTER-CLASS CLUSTER-REF AGE
gitlab-gke Bound standard-gcp-cluster gke-af012df6-6e2a-11e9-ac37-9cb6d08bde99 4m7s
---
NAME STATUS CLASS VERSION AGE
gitlab-postgresql Bound standard-gcp-postgres 9.6 5m27s
---
NAME STATUS CLASS VERSION AGE
gitlab-redis Bound standard-gcp-redis 3.2 7m10s
```
```bash
# check all bucket claims
kubectl get -f cluster/examples/gitlab/gcp/resource-claims/buckets/
```
```text
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-artifacts Bound standard-gcp-bucket 4m49s
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-backups-tmp Bound standard-gcp-bucket 4m49s
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-backups Bound standard-gcp-bucket 4m49s
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-externaldiffs Bound standard-gcp-bucket 4m49s
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-lfs Bound standard-gcp-bucket 4m49s
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-packages Bound standard-gcp-bucket 4m49s
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-pseudonymizer Bound standard-gcp-bucket 4m49s
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-registry Bound standard-gcp-bucket 4m49s
NAME STATUS CLASS PREDEFINED-ACL LOCAL-PERMISSION AGE
gitlab-uploads Bound standard-gcp-bucket 4m49s
```
We are waiting for each claim's `STATUS` value to become `Bound`, which indicates that the managed resource was successfully provisioned and is ready for consumption.
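Rather than re-running `kubectl get` by hand, you can poll until a claim reports `Bound`. A minimal sketch (the `claim_is_bound` helper is ours, not part of Crossplane; it just reads the `STATUS` column from the claim's output):

```shell
# Returns 0 when the claim's STATUS column reads "Bound".
claim_is_bound() {
  # $1: output of `kubectl get <claim>` (header row + one resource row)
  echo "$1" | awk 'NR==2 {print $2}' | grep -q '^Bound$'
}

# Real-run shape:
#   until claim_is_bound "$(kubectl get -f cluster/examples/gitlab/gcp/resource-claims/kubernetes.yaml)"; do
#     sleep 10
#   done
# Illustrated here with canned output:
out='NAME         STATUS   CLUSTER-CLASS          CLUSTER-REF    AGE
gitlab-gke   Bound    standard-gcp-cluster   gke-af012df6   4m7s'
claim_is_bound "$out" && echo "claim is bound"
```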
##### Resource Claims Connection Secrets
Verify that every resource has created a connection secret:
```bash
kubectl get secrets -n default
```
```
NAME TYPE DATA AGE
default-token-mzsgg kubernetes.io/service-account-token 3 5h42m
gitlab-artifacts Opaque 4 6m41s
gitlab-backups Opaque 4 7m6s
gitlab-backups-tmp Opaque 4 7m7s
gitlab-externaldiffs Opaque 4 7m5s
gitlab-lfs Opaque 4 7m4s
gitlab-packages Opaque 4 2m28s
gitlab-postgresql Opaque 3 30m
gitlab-pseudonymizer Opaque 4 7m2s
gitlab-redis Opaque 1 28m
gitlab-registry Opaque 4 7m1s
gitlab-uploads Opaque 4 7m1s
```
Note: the Kubernetes cluster claim is created in "privileged" mode; thus the Kubernetes cluster resource secret is located in the `crossplane-system` namespace. However, you will not need this secret for our GitLab demo deployment.
At this point, all GitLab managed resources should be ready to consume, which completes the Crossplane resource provisioning phase.
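Before moving on, you can also check programmatically that a connection secret exists for every claim we created. A sketch using our own helper (in a real run, feed it the output of `kubectl get secrets -n default -o name`):

```shell
check_secrets() {
  # $1: newline-separated `secret/<name>` entries
  for s in gitlab-artifacts gitlab-backups gitlab-backups-tmp \
           gitlab-externaldiffs gitlab-lfs gitlab-packages gitlab-postgresql \
           gitlab-pseudonymizer gitlab-redis gitlab-registry gitlab-uploads; do
    echo "$1" | grep -q "secret/$s\$" || { echo "missing: $s"; return 1; }
  done
  echo "all connection secrets present"
}

# Illustrated with a canned listing:
names=$(printf 'secret/%s\n' gitlab-artifacts gitlab-backups gitlab-backups-tmp \
  gitlab-externaldiffs gitlab-lfs gitlab-packages gitlab-postgresql \
  gitlab-pseudonymizer gitlab-redis gitlab-registry gitlab-uploads)
check_secrets "$names"    # → all connection secrets present
```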
### GKE Cluster
The following steps prepare the GKE cluster for GitLab installation.
- First, get the GKE cluster's name by examining the Kubernetes resource claim:
```bash
kubectl get -f cluster/examples/gitlab/gcp/resource-claims/kubernetes.yaml
```
```
NAME STATUS CLUSTER-CLASS CLUSTER-REF AGE
gitlab-gke Bound standard-gcp-cluster gke-af012df6-6e2a-11e9-ac37-9cb6d08bde99 71m
```
- Using the `CLUSTER-REF` value, get the GKECluster resource:
```bash
kubectl get gkecluster [CLUSTER-REF value] -n crossplane-system
```
```
NAME STATUS STATE CLUSTER-NAME ENDPOINT CLUSTER-CLASS LOCATION RECLAIM-POLICY AGE
gke-af012df6-6e2a-11e9-ac37-9cb6d08bde99 Bound RUNNING gke-af11dfb1-6e2a-11e9-ac37-9cb6d08bde99 130.211.208.249 standard-gcp-cluster us-central1-a Delete 72m
```
- Record the local cluster context name:
```bash
kubectl config current-context
```
- Record the `CLUSTER-NAME` value
- Obtain the GKE cluster credentials
  - Note: the easiest way to get the `gcloud` command is via:
    - Go to: https://console.cloud.google.com/kubernetes/list
    - Click `Connect` next to the cluster with the `CLUSTER-NAME` value
```bash
gcloud container clusters get-credentials [CLUSTER-NAME] --zone [CLUSTER-ZONE] --project my-project-123456
```
Add your user account to the `cluster-admin` role:
```bash
kubectl create clusterrolebinding cluster-admin-binding \
--clusterrole cluster-admin \
--user [your-gcp-user-name]
```
#### External DNS
- Fetch the [External-DNS](https://github.com/helm/charts/tree/master/stable/external-dns) helm chart
```bash
helm fetch stable/external-dns
```
If the `helm fetch` command is successful, you should see a new file created in your CWD:
```bash
ls -l external-dns-*
```
```
-rw-r--r-- 1 user user 8913 May 3 23:24 external-dns-1.7.5.tgz
```
- Render the Helm chart into YAML, set the values, and apply it to your GKE cluster:
```bash
helm template external-dns-1.7.5.tgz --name gitlab-demo --namespace kube-system \
--set provider=google \
--set txtOwnerId=[gke-cluster-name] \
--set google.project=[gcp-project-id] \
--set rbac.create=true | kubectl -n kube-system apply -f -
```
```
service/gitlab-demo-external-dns created
deployment.extensions/gitlab-demo-external-dns created
```
- Verify `External-DNS` is up and running
```bash
kubectl get deploy,service -l release=gitlab-demo -n kube-system
```
```
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deployment.extensions/gitlab-demo-external-dns 1 1 1 1 1m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/gitlab-demo-external-dns ClusterIP 10.75.14.226 <none> 7979/TCP 1m
```
#### Managed Resource Secrets
Decide on the GKE cluster namespace where GitLab's application artifacts will be deployed.
We will use: `gitlab`, and for convenience we will [set our current context](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/#setting-the-namespace-preference) to this namespace
```bash
kubectl create ns gitlab
kubectl config set-context $(kubectl config current-context) --namespace=gitlab
```
##### Export and Convert Secrets
GitLab requires connection information to be provided in a specific format for each cloud provider.
In addition, we need to extract endpoints and other managed resource properties and add them to the Helm values.
_There is [current and ongoing effort](https://github.com/crossplaneio/gitlab-controller) to create an alternative experience to deploy GitLab Crossplane application, which alleviates integration difficulties between Crossplane platform and the GitLab Helm chart deployment._
We will use a convenience script for this purpose.
Note: your output may be different
```bash
./cluster/examples/gitlab/gcp/secrets.sh [your-local-k8s-cluster-context: default=minikube]
```
```
Source cluster kubectl context: microk8s
Current cluster kubectl context: gke_you-project-123456_us-central1-a_gke-a2345dfb1-asdf-11e9-ac37-9cb6d08bde99
---
Source cluster secrets:
NAME TYPE DATA AGE
default-token-mzsgg kubernetes.io/service-account-token 3 2d7h
gitlab-artifacts Opaque 4 34h
gitlab-backups Opaque 4 34h
gitlab-backups-tmp Opaque 4 34h
gitlab-externaldiffs Opaque 4 34h
gitlab-lfs Opaque 4 34h
gitlab-packages Opaque 4 34h
gitlab-postgresql Opaque 3 2d2h
gitlab-pseudonymizer Opaque 4 34h
gitlab-redis Opaque 1 2d2h
gitlab-registry Opaque 4 34h
gitlab-uploads Opaque 4 34h
---
Generate PostgreSQL secret and values file
secret/gitlab-postgresql created
---
Generate Redis values file
---
Generate Buckets secrets
secret/bucket-artifacts created
secret/bucket-backups-tmp created
secret/bucket-backups created
secret/bucket-externaldiffs created
secret/bucket-lfs created
secret/bucket-packages created
secret/bucket-pseudonymizer created
secret/bucket-registry created
secret/bucket-uploads created
```
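If you prefer to copy a secret across clusters by hand instead of using the script, the usual pattern is to export it from the source context, drop the `namespace` field, and apply it in the target context. A sketch (the context names are placeholders for your own):

```shell
# Real-run shape (needs both cluster contexts configured):
#   kubectl --context minikube -n default get secret gitlab-postgresql -o yaml \
#     | grep -v '^[[:space:]]*namespace:' \
#     | kubectl --context "$GKE_CONTEXT" -n gitlab apply -f -

# The namespace-stripping step, illustrated with a canned manifest:
strip_namespace() { grep -v '^[[:space:]]*namespace:'; }
manifest='apiVersion: v1
kind: Secret
metadata:
  name: gitlab-postgresql
  namespace: default'
echo "$manifest" | strip_namespace
```

In a real copy you would also want to strip server-populated fields such as `resourceVersion` and `uid`.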
## Install
Render the official GitLab Helm chart with the generated values files, and your settings into a `gitlab-gcp.yaml` file.
See the [GitLab Helm Documentation](https://docs.gitlab.com/charts/installation/deployment.html) for additional details.
```bash
helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm fetch gitlab/gitlab --version v1.7.1
helm template gitlab-1.7.1.tgz --name gitlab-demo --namespace gitlab \
-f cluster/examples/gitlab/gcp/values-buckets.yaml \
-f cluster/examples/gitlab/gcp/values-redis.yaml \
-f cluster/examples/gitlab/gcp/values-psql.yaml \
--set global.hosts.domain=your.domain \
--set global.hosts.hostSuffix=demo \
--set certmanager-issuer.email=email@account.io > gitlab-gcp.yaml
```
Examine `gitlab-gcp.yaml` to familiarize yourself with all GitLab components.
Install GitLab
Note: your output may look different:
```bash
kubectl create -f gitlab-gcp.yaml
```
Validate GitLab components:
```bash
kubectl get jobs,deployments,statefulsets
```
It usually takes a few minutes for all GitLab components to initialize and become ready.
Note: during the initialization "wait", some pods may automatically restart, but this should stabilize once all the dependent components become available.
Note: there may also be intermittent `ImagePullBackOff` errors, but, similar to the above, these should clear up by themselves.
Note: the `gitlab-demo-unicorn-test-runner-*` job/pod may end in `Error` and will not re-run unless the pod is resubmitted.
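A small helper can report when all deployments have settled. This sketch compares the DESIRED and AVAILABLE columns of the `kubectl get deployments` output format used in this guide; newer kubectl versions print a combined `READY n/m` column instead, so adjust the field numbers accordingly:

```shell
all_available() {
  # $1: body (no header) of `kubectl get deployments` output with columns:
  # NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
  echo "$1" | awk '$2 != $5 {bad=1} END {exit bad}'
}

# Illustrated with canned output (one deployment still rolling out):
sample='gitlab-demo-registry   2   2   2   2   9m
gitlab-demo-unicorn    2   2   2   1   9m'
all_available "$sample" || echo "still rolling out"
```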
After a few minutes, the output of:
```bash
kubectl get pod
```
should look similar to:
```bash
NAME READY STATUS RESTARTS AGE
gitlab-demo-certmanager-59f887dc9-jppl7 1/1 Running 0 9m
gitlab-demo-gitaly-0 1/1 Running 0 9m
gitlab-demo-gitlab-runner-fcc9cc7cf-c7pzz 0/1 Init:0/1 0 9m
gitlab-demo-gitlab-shell-57b887755c-kqm89 1/1 Running 0 8m
gitlab-demo-gitlab-shell-57b887755c-vzqkf 1/1 Running 0 9m
gitlab-demo-issuer.0-ddzwp 0/1 Completed 0 9m
gitlab-demo-migrations.0-2h5px 1/1 Running 2 9m
gitlab-demo-nginx-ingress-controller-7bf4f7574d-cznfl 1/1 Running 0 9m
gitlab-demo-nginx-ingress-controller-7bf4f7574d-f5wjz 1/1 Running 0 9m
gitlab-demo-nginx-ingress-controller-7bf4f7574d-mxqpz 1/1 Running 0 9m
gitlab-demo-nginx-ingress-default-backend-5886cb59c7-bjnrt 1/1 Running 0 9m
gitlab-demo-nginx-ingress-default-backend-5886cb59c7-gchhp 1/1 Running 0 9m
gitlab-demo-prometheus-server-64897864cf-p4sd7 2/2 Running 0 9m
gitlab-demo-registry-746bbb488f-xjlhp 1/1 Running 0 8m
gitlab-demo-registry-746bbb488f-xxpcr 1/1 Running 0 9m
gitlab-demo-shared-secrets.0-mr7-2v5cf 0/1 Completed 0 9m
gitlab-demo-sidekiq-all-in-1-5dd8b5b9d-58p72 1/1 Running 0 9m
gitlab-demo-task-runner-7c477b48dc-d5nf6 1/1 Running 0 9m
gitlab-demo-unicorn-6dd757db97-4vqgc 1/2 ImagePullBackOff 0 9m
gitlab-demo-unicorn-6dd757db97-nmglt 2/2 Running 0 8m
gitlab-demo-unicorn-test-runner-f2ttk 0/1 Error 0 9m
```
Note: if a pod stuck in the `ImagePullBackOff` error state does not clear automatically, consider deleting the pod.
A new pod should come up with a `Running` status.
## Use
Retrieve the DNS names from the GitLab ingress components:
```bash
kubectl get ingress
```
You should see the following ingress configurations:
```
NAME HOSTS ADDRESS PORTS AGE
gitlab-demo-registry registry-demo.upbound.app 35.222.163.203 80, 443 14m
gitlab-demo-unicorn gitlab-demo.upbound.app 35.222.163.203 80, 443 14m
```
Navigate your browser to https://gitlab-demo.upbound.app, and if everything ran successfully, you should see:
![GitLab login page](gitlab-login.png)
## Uninstall
### GitLab
To remove the GitLab application from the GKE cluster, run:
```bash
kubectl delete -f gitlab-gcp.yaml
```
### External-DNS
```bash
kubectl delete deploy,service -l app=external-dns -n kube-system
```
### Crossplane
To remove Crossplane managed resources, switch back to the local cluster context `minikube`:
```bash
kubectl config use-context minikube
```
Delete all managed resources by running:
```bash
kubectl delete -Rf cluster/examples/gitlab/gcp/resource-claims
```
```
bucket.storage.crossplane.io "gitlab-artifacts" deleted
bucket.storage.crossplane.io "gitlab-backups-tmp" deleted
bucket.storage.crossplane.io "gitlab-backups" deleted
bucket.storage.crossplane.io "gitlab-externaldiffs" deleted
bucket.storage.crossplane.io "gitlab-lfs" deleted
bucket.storage.crossplane.io "gitlab-packages" deleted
bucket.storage.crossplane.io "gitlab-pseudonymizer" deleted
bucket.storage.crossplane.io "gitlab-registry" deleted
bucket.storage.crossplane.io "gitlab-uploads" deleted
kubernetescluster.compute.crossplane.io "gitlab-gke" deleted
postgresqlinstance.storage.crossplane.io "gitlab-postgresql" deleted
rediscluster.cache.crossplane.io "gitlab-redis" deleted
```
Verify that all resource claims have been removed:
```bash
kubectl get -Rf cluster/examples/gitlab/gcp/resource-claims
```
Note: it typically takes a few seconds for Crossplane to process the request.
By running resource and provider removal in the same command or back-to-back, we run the risk of orphaned resources, i.e., resources that could not be cleaned up because the provider is no longer available.
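To avoid orphaned resources, wait until the claim listing is empty before deleting the classes and the provider. A tiny sketch (`claims_gone` is our own helper; feed it the output of the claim listing command):

```shell
claims_gone() {
  # $1: output of `kubectl get -Rf cluster/examples/gitlab/gcp/resource-claims`
  # (empty, or only "No resources found" noise, once deletion completes)
  [ -z "$(echo "$1" | grep -v 'No resources found' | tr -d '[:space:]')" ]
}

# Real-run shape:
#   until claims_gone "$(kubectl get -Rf cluster/examples/gitlab/gcp/resource-claims 2>/dev/null)"; do
#     sleep 5
#   done
claims_gone "" && echo "safe to delete classes and provider"
```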
Delete all resource classes:
```bash
kubectl delete -Rf cluster/examples/gitlab/gcp/resource-classes/
```
```
resourceclass.core.crossplane.io "standard-gcp-bucket" deleted
resourceclass.core.crossplane.io "standard-gcp-cluster" deleted
resourceclass.core.crossplane.io "standard-gcp-postgres" deleted
resourceclass.core.crossplane.io "standard-gcp-redis" deleted
```
Delete the GCP provider and its secrets:
```bash
kubectl delete -f cluster/examples/gitlab/gcp/provider.yaml
```
# Installing Crossplane
Crossplane can be easily installed into any existing Kubernetes cluster using the regularly published Helm chart.
The Helm chart contains all the custom resources and controllers needed to deploy and configure Crossplane.
## Pre-requisites
* [Kubernetes cluster](https://kubernetes.io/docs/setup/)
* For example [Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/), minimum version `v0.28+`
* [Helm](https://docs.helm.sh/using_helm/), minimum version `v2.9.1+`.
## Installation
Helm charts for Crossplane are currently published to the `alpha` and `master` channels.
In the future, `beta` and `stable` will also be available.
### Alpha
The alpha channel is the most recent release of Crossplane that is considered ready for testing by the community.
```console
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane
```
### Master
The `master` channel contains the latest commits, with all automated tests passing.
`master` is subject to instability, incompatibility, and features may be added or removed without much prior notice.
It is recommended to use one of the more stable channels, but if you want the absolute newest Crossplane installed, then you can use the `master` channel.
To install the Helm chart from master, you will need to pass the specific version returned by the `search` command:
```console
helm repo add crossplane-master https://charts.crossplane.io/master/
helm search crossplane
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version <version>
```
For example:
```console
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version 0.0.0-249.637ccf9
```
## Uninstalling the Chart
To uninstall/delete the `crossplane` deployment:
```console
helm delete --purge crossplane
```
That command removes all Kubernetes components associated with Crossplane, including all the custom resources and controllers.
## Configuration
The following table lists the configurable parameters of the Crossplane chart and their default values.
| Parameter | Description | Default |
| ------------------------- | --------------------------------------------------------------- | ------------------------------------------------------ |
| `image.repository` | Image | `crossplane/crossplane` |
| `image.tag` | Image tag | `master` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `imagePullSecrets` | Names of image pull secrets to use | `dockerhub` |
| `replicas` | The number of replicas to run for the Crossplane operator | `1` |
| `deploymentStrategy` | The deployment strategy for the Crossplane operator | `RollingUpdate` |
### Command Line
You can pass the settings with helm command line parameters.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
For example, the following command will install Crossplane with an image pull policy of `IfNotPresent`.
```console
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane --set image.pullPolicy=IfNotPresent
```
### Settings File
Alternatively, a yaml file that specifies the values for the above parameters (`values.yaml`) can be provided while installing the chart.
```console
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane -f values.yaml
```
Here are the sample settings to get you started.
```yaml
replicas: 1
deploymentStrategy: RollingUpdate

image:
  repository: crossplane/crossplane
  tag: master
  pullPolicy: Always
```

docs/master/postgresql.md Normal file

@ -0,0 +1,133 @@
# Deploying PostgreSQL Databases
This user guide will walk you through how to deploy a PostgreSQL database across many different environments with a focus on portability and reusability.
The database will be dynamically provisioned in the cloud provider of your choice at the request of the application developer via a `ResourceClaim`, and created with the environment specific information that the administrator provides in a `ResourceClass`.
The commands in this guide assume you are running from a terminal/shell at the root of the [Crossplane repo](https://github.com/crossplaneio/crossplane/).
## Install Crossplane
The first step will be to install Crossplane by following the steps in the [Crossplane install guide](install-crossplane.md).
## Add Cloud Provider
Next you'll need to add your cloud provider credentials to Crossplane using [these provider specific steps](cloud-providers.md).
After those steps are completed, you should have the cloud provider credentials saved in a file on your local filesystem, for which the path will be stored in the environment variable `PROVIDER_KEY_FILE` in the next section.
## Set Environment Variables
After your cloud provider credentials have been created/added, let's set the following environment variables that have different values for each provider,
but will allow the rest of the steps to be consistent across all of them.
You only need to set the variables for your chosen cloud provider; you can ignore the others.
### Google Cloud Platform (GCP)
```console
export PROVIDER=GCP
export provider=gcp
export PROVIDER_KEY_FILE=crossplane-${provider}-provider-key.json
export DATABASE_TYPE=cloudsqlinstances
export versionfield=databaseVersion
```
### Microsoft Azure
```console
export PROVIDER=AZURE
export provider=azure
export PROVIDER_KEY_FILE=crossplane-${provider}-provider-key.json
export DATABASE_TYPE=postgresqlservers
export versionfield=version
```
### Amazon Web Services (AWS)
```console
export PROVIDER=AWS
export provider=aws
export PROVIDER_KEY_FILE=~/.aws/credentials
export DATABASE_TYPE=rdsinstances
export versionfield=engineVersion
```
## Create a PostgreSQL Resource Class
Let's create a `ResourceClass` that acts as a "blueprint" that contains the environment specific details of how a general request from the application to create a PostgreSQL database should be fulfilled.
This is a task that the administrator should complete, since they will have the knowledge and privileges for the specific environment details.
```console
sed "s/BASE64ENCODED_${PROVIDER}_PROVIDER_CREDS/`cat ${PROVIDER_KEY_FILE}|base64|tr -d '\n'`/g;" cluster/examples/database/${provider}/postgresql/provider.yaml | kubectl create -f -
kubectl create -f cluster/examples/database/${provider}/postgresql/resource-class.yaml
```
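The `sed` command above inlines your provider credentials into the manifest. The encoding step it relies on can be sketched in isolation (the file content below is hypothetical sample data, not real credentials):

```console
# Hypothetical credentials file; base64-encode it and strip newlines so the
# value fits on a single line in the provider manifest
printf 'fake-credential-data' > /tmp/creds.json
BASE64_CREDS=$(cat /tmp/creds.json | base64 | tr -d '\n')
echo "${BASE64_CREDS}"
```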
## Create a PostgreSQL Resource Claim
After the administrator has created the PostgreSQL `ResourceClass` "blueprint", the application developer is now free to create a PostgreSQL `ResourceClaim`.
This is a general request for a PostgreSQL database to be used by their application and it requires no environment specific information, allowing our applications to express their need for a database in a very portable way.
```console
kubectl create namespace demo
kubectl -n demo create -f cluster/examples/database/${provider}/postgresql/resource-claim.yaml
```
## Check Status of PostgreSQL Provisioning
We can follow along with the status of the provisioning of the database resource with the below commands.
Note that the first command gives us the status of the `ResourceClaim` (general request for a database by the application),
and the second command gives the status of the environment specific database resource that Crossplane is provisioning using the `ResourceClass` "blueprint".
```console
kubectl -n demo get postgresqlinstance -o custom-columns=NAME:.metadata.name,STATUS:.status.bindingPhase,CLASS:.spec.classReference.name,VERSION:.spec.engineVersion,AGE:.metadata.creationTimestamp
kubectl -n crossplane-system get ${DATABASE_TYPE} -o custom-columns=NAME:.metadata.name,STATUS:.status.bindingPhase,STATE:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.${versionfield},AGE:.metadata.creationTimestamp
```
## Access the PostgreSQL Database
Once the dynamic provisioning process has finished creating and preparing the database, the status output will look similar to the following:
```console
> kubectl -n demo get postgresqlinstance -o custom-columns=NAME:.metadata.name,STATUS:.status.bindingPhase,CLASS:.spec.classReference.name,VERSION:.spec.engineVersion,AGE:.metadata.creationTimestamp
NAME STATUS CLASS VERSION AGE
cloud-postgresql-claim Bound cloud-postgresql 9.6 2018-12-23T04:00:11Z
> kubectl -n crossplane-system get ${DATABASE_TYPE} -o custom-columns=NAME:.metadata.name,STATUS:.status.bindingPhase,STATE:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.${versionfield},AGE:.metadata.creationTimestamp
NAME STATUS STATE CLASS VERSION AGE
postgresql-3ef70bf9-0667-11e9-99e1-080027cf2340 Bound Ready cloud-postgresql 9.6 2018-12-23T04:00:12Z
```
Note that both the general `postgresqlinstance` `ResourceClaim` and the cloud provider specific PostgreSQL database have the `Bound` status, meaning the dynamic provisioning is done and the resource is ready for consumption.
The connection information will be stored in a secret with the same name as the `ResourceClaim`.
Since the secret is base64 encoded, we'll need to decode its fields to view them in plain-text.
To view all the connection information in plain-text, run the following command:
```console
for r in endpoint username password; do echo -n "${r}: "; kubectl -n demo get secret cloud-postgresql-claim -o jsonpath='{.data.'"${r}"'}' | base64 --decode; echo; done
```
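Each field in the secret is base64-encoded; the decoding step the loop above performs per field can be sketched with a hypothetical value:

```console
# Hypothetical encoded value standing in for one secret field
ENCODED="cG9zdGdyZXM="
DECODED=$(echo -n "${ENCODED}" | base64 --decode)
echo "${DECODED}"
```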
A workload or pod manifest will usually reference this connection information through injecting the secret contents into environment variables in the manifest.
You can see this in action as an example in the [Azure Workload example](https://github.com/crossplaneio/crossplane/blob/release-0.1/cluster/examples/workloads/wordpress-azure/workload.yaml#L47-L62).
More information about consuming secrets from manifests can be found in the [Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#use-cases).
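For illustration, a hedged sketch of what such an injection looks like in a pod spec follows; the secret name matches the `ResourceClaim` above, while the container name, image, and environment variable names are hypothetical:

```yaml
# Hypothetical container spec consuming the claim's connection secret
containers:
- name: my-app            # hypothetical container name
  image: my-app:latest    # hypothetical image
  env:
  - name: DB_ENDPOINT
    valueFrom:
      secretKeyRef:
        name: cloud-postgresql-claim   # same name as the ResourceClaim
        key: endpoint
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: cloud-postgresql-claim
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloud-postgresql-claim
        key: password
```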
## Clean-up
When you are finished with the PostgreSQL instance from this guide, you can clean up all the resources by running the below commands.
First, delete the resource claim, which will start the operation of deleting the PostgreSQL database from your cloud provider.
```console
kubectl -n demo delete -f cluster/examples/database/${provider}/postgresql/resource-claim.yaml
```
Next, delete the `ResourceClass` "blueprint":
```console
kubectl delete -f cluster/examples/database/${provider}/postgresql/resource-class.yaml
```
Finally, delete the cloud provider credentials from your local environment:
```console
kubectl delete -f cluster/examples/database/${provider}/postgresql/provider.yaml
```


@ -12,7 +12,6 @@ The Workload will be deployed into the target Kubernetes cluster, and be configu
The general steps for this example are as follows:
1. Install Crossplane and Configure Crossplane:
[Crossplane Quick Start Guide](https://github.com/crossplaneio/crossplane/blob/master/docs/quick-start.md)
1. [Install GitLab-Controller](install.md)
1. Deploy a GitLab Application to the cloud provider: [Deploying Workloads](deploy.md)
1. Install Crossplane so it is ready to manage resources on your behalf: [Install Crossplane](install-crossplane.md)
1. Set up a cloud provider and add it to Crossplane: [Adding a Cloud Provider](cloud-providers.md)
1. Deploy a portable workload to the cloud provider: [Deploying Workloads](deploy.md)


@ -0,0 +1,32 @@
---
title: Related Projects
toc: true
weight: 510
---
# Related Projects
While there are many projects that address similar issues, none of them encapsulate the full use case that Crossplane addresses. This list is not exhaustive and is not meant to provide a deep analysis of the following projects, but instead to motivate why Crossplane was created.
## Open Service Broker and Service Catalog
The [Open Service Broker](https://www.openservicebrokerapi.org/) and the [Kubernetes Service Catalog](https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog/) are able to dynamically provision managed services in multiple cloud providers from Kubernetes. As a result, they share similar goals with Crossplane. However, the service broker is not designed for workload portability, does not have a good separation of concerns, and does not offer any integration with workload and resource scheduling. Service brokers cannot span multiple cloud providers at once.
## Kubernetes Federation
The [federation-v2](https://github.com/kubernetes-sigs/federation-v2) project offers a single control plane that can span multiple Kubernetes clusters. It's being incubated in SIG-multicluster. Crossplane shares some of the goals of managing multiple Kubernetes clusters and also the core principles of creating a higher level control plane, scheduler, and controllers that span clusters. While the federation-v2 project is scoped to just Kubernetes clusters, Crossplane supports non-container workloads, and orchestrating resources that run as managed services including databases, message queues, buckets, and others. The federation effort focuses on defining Kubernetes objects that can be templatized and propagated to other Kubernetes clusters. Crossplane focuses on defining portable workload abstractions across cloud providers and offerings. We have considered taking a dependency on the federation-v2 work within Crossplane, although it's not clear at this point if this would accelerate the Crossplane effort.
## AWS Service Operator
The [AWS Service Operator](https://github.com/awslabs/aws-service-operator) is a recent project that implements a set of Kubernetes controllers that are able to provision managed services in AWS. It defines a set of CRDs for managed services like DynamoDB, and controllers that can provision them via AWS CloudFormation. It is similar to Crossplane in that it can provision managed services in AWS. Crossplane goes a lot further by offering workload portability across multiple cloud providers, separation of concerns, and a scheduler for workloads and resources.
## AWS CloudFormation, GCP Deployment Manager, and Others
These products offer a declarative model for deploying and provisioning infrastructure in each of the respective cloud providers. They only work for one cloud provider and do not solve the problem of workload portability. These products are generally closed source, and offer little or no extensibility points. We have considered using some of these products as a way to implement resource controllers in Crossplane.
## Terraform
[Terraform](https://www.terraform.io/) is a popular tool for provisioning infrastructure across cloud providers. It offers a declarative configuration language with support for templating, composability, referential integrity, and dependency management. Terraform can dynamically provision infrastructure and perform changes when the tool is run by a human. Unlike Crossplane, Terraform does not support workload portability across cloud providers, and does not have any active controllers that can react to failures, or make changes to running infrastructure without human intervention. Terraform attempts to solve multicloud at the tool level, while Crossplane is at the API and control plane level. Terraform is open source under an MPL license, and follows an open core business model, with a number of its features closed source. We are evaluating whether we can use Terraform to accelerate the development of resource controllers in Crossplane.
## Pulumi
[Pulumi](https://www.pulumi.com/) is a product that is based on Terraform and uses most of its providers. Instead of using a configuration language, Pulumi uses popular programming languages like TypeScript to capture the configuration. At runtime, Pulumi generates a DAG of resources just like Terraform and applies it to cloud providers. Pulumi has an early model for workload portability that is implemented using language abstractions. Unlike Crossplane, it does not have any active controllers that can react to failures, or make changes to running infrastructure without human intervention, nor does it support workload scheduling. Pulumi attempts to solve multicloud scenarios at the language level, while Crossplane is at the API and control plane level. Pulumi is open source under an APL2 license, but a number of features require using their SaaS offering.


@ -0,0 +1,97 @@
---
title: Running Resources
toc: true
weight: 350
indent: true
---
# Running Resources
Crossplane enables you to run a number of different resources in a portable and cloud agnostic way, allowing you to author an application that runs without modifications on multiple environments and cloud providers.
A single Crossplane enables the provisioning and full-lifecycle management of infrastructure across a wide range of providers, vendors, regions, and offerings.
## Guides
The list below contains some direct links to user guides that will walk you through how to deploy specific types of resources into your environments.
* [Workload user guides - WordPress application, MySQL database, Kubernetes cluster](deploy.md#guides)
* [PostgreSQL database](postgresql.md)
## Running Databases
Database managed services can be statically or dynamically provisioned by Crossplane in AWS, GCP, and Azure.
An application developer simply has to specify their general need for a database such as MySQL, without any specific knowledge of what environment that database will run in or even what specific type of database it will be at runtime.
The following sample is all the application developer needs to specify in order to get the correct MySQL database (CloudSQL, RDS, Azure MySQL) provisioned and configured for their application:
```yaml
apiVersion: storage.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
  name: demo-mysql
spec:
  classReference:
    name: standard-mysql
    namespace: crossplane-system
  engineVersion: "5.7"
```
The cluster administrator specifies a resource class that acts as a template with the implementation details and policy specific to the environment that the generic MySQL resource is being deployed to.
This enables the database to be dynamically provisioned at deployment time without the application developer needing to know any of the details, which promotes portability and reusability.
An example resource class that will provision a CloudSQL instance in GCP in order to fulfill the application's general MySQL requirement would look like this:
```yaml
apiVersion: core.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
  name: standard-mysql
  namespace: crossplane-system
parameters:
  tier: db-n1-standard-1
  region: us-west2
  storageType: PD_SSD
provisioner: cloudsqlinstance.database.gcp.crossplane.io/v1alpha1
providerRef:
  name: gcp-provider
reclaimPolicy: Delete
```
## Running Kubernetes Clusters
Kubernetes clusters are another type of resource that can be dynamically provisioned using a generic resource claim by the application developer and an environment specific resource class by the cluster administrator.
Generic Kubernetes cluster resource claim created by the application developer:
```yaml
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: demo-cluster
  namespace: crossplane-system
spec:
  classReference:
    name: standard-cluster
    namespace: crossplane-system
```
Environment specific GKE cluster resource class created by the admin:
```yaml
apiVersion: core.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
  name: standard-cluster
  namespace: crossplane-system
parameters:
  machineType: n1-standard-1
  numNodes: "1"
  zone: us-central1-a
provisioner: gkecluster.compute.gcp.crossplane.io/v1alpha1
providerRef:
  name: gcp-provider
reclaimPolicy: Delete
```
## Future support
As the project continues to grow with support from the community, support for more resources will be added.
This includes all of the essential managed services from cloud providers as well as local or in-cluster services that deploy using the operator pattern.
Crossplane will provide support for serverless, databases, object storage (buckets), analytics, big data, AI, ML, message queues, key-value stores, and more.


@ -0,0 +1,98 @@
---
title: Troubleshooting
toc: true
weight: 360
indent: true
---
# Troubleshooting
* [Crossplane Logs](#crossplane-logs)
* [Resource Status and Conditions](#resource-status-and-conditions)
* [Pausing Crossplane](#pausing-crossplane)
* [Deleting a Resource Hangs](#deleting-a-resource-hangs)
## Crossplane Logs
The first place to look to get more information or investigate a failure would be in the Crossplane pod logs, which should be running in the `crossplane-system` namespace.
To get the current Crossplane logs, run the following:
```console
kubectl -n crossplane-system logs $(kubectl -n crossplane-system get pod -l app=crossplane -o jsonpath='{.items[0].metadata.name}')
```
## Resource Status and Conditions
All of the objects that represent managed resources such as databases, clusters, etc. have a `status` section that can give good insight into the current state of that particular object.
In general, simply getting the `yaml` output of a Crossplane object will give insightful information about its condition:
```console
kubectl get <resource-type> -o yaml
```
For example, to get complete information about an Azure AKS cluster object, the following command will generate the below sample (truncated) output:
```console
> kubectl -n crossplane-system get akscluster -o yaml
...
status:
  Conditions:
  - LastTransitionTime: 2018-12-04T08:03:01Z
    Message: 'failed to start create operation for AKS cluster aks-demo-cluster:
      containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request:
      StatusCode=400 -- Please see https://aka.ms/acs-sp-help for more details."'
    Reason: failed to create cluster
    Status: "False"
    Type: Failed
  - LastTransitionTime: 2018-12-04T08:03:14Z
    Message: ""
    Reason: ""
    Status: "False"
    Type: Creating
  - LastTransitionTime: 2018-12-04T09:59:43Z
    Message: ""
    Reason: ""
    Status: "True"
    Type: Ready
  bindingPhase: Bound
  endpoint: crossplane-aks-14af6e93.hcp.centralus.azmk8s.io
  state: Succeeded
```
We can see a few conditions in that AKS cluster's history.
It first encountered a failure, then it moved into the `Creating` state, then it finally became `Ready` later on.
Conditions that have `Status: "True"` are currently active, while conditions with `Status: "False"` happened in the past but are no longer occurring.
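The same active/past distinction can be applied mechanically. A minimal sketch over a hypothetical list of Type/Status pairs (not a live cluster query):

```console
# Hypothetical Type/Status pairs mirroring the conditions above;
# print only the Types that are currently active (Status "True")
cat <<'EOF' > /tmp/conditions.txt
Failed False
Creating False
Ready True
EOF
ACTIVE=$(awk '$2 == "True" {print $1}' /tmp/conditions.txt)
echo "${ACTIVE}"
```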
## Pausing Crossplane
Sometimes, it can be useful to pause Crossplane if you want to stop it from actively attempting to manage your resources, for instance if you have encountered a bug.
To pause Crossplane without deleting all of its resources, run the following command to simply scale down its deployment:
```console
kubectl -n crossplane-system scale --replicas=0 deployment/crossplane
```
Once you have been able to rectify the problem or smooth things out, you can unpause Crossplane simply by scaling its deployment back up:
```console
kubectl -n crossplane-system scale --replicas=1 deployment/crossplane
```
## Deleting a Resource Hangs
The resources that Crossplane manages will automatically be cleaned up so as not to leave anything running behind.
This is accomplished by using finalizers, but in certain scenarios, the finalizer can prevent the Kubernetes object from getting deleted.
To deal with this, we essentially want to patch the object to remove its finalizer, which will then allow it to be deleted completely.
Note that this won't necessarily delete the external resource that Crossplane was managing, so you will want to go to your cloud provider's console and look there for any lingering resources to clean up.
In general, a finalizer can be removed from an object with this command:
```console
kubectl patch <resource-type> <resource-name> -p '{"metadata":{"finalizers": []}}' --type=merge
```
For example, for a Workload object (`workloads.compute.crossplane.io`) named `test-workload`, you can remove its finalizer with:
```console
kubectl patch workloads.compute.crossplane.io test-workload -p '{"metadata":{"finalizers": []}}' --type=merge
```


@ -0,0 +1,250 @@
# Deploying a WordPress Workload on AWS
This guide walks you through how to use Crossplane to deploy a stateful workload in a portable way on AWS.
The following components are dynamically provisioned and configured during this guide:
* An EKS Kubernetes cluster
* An RDS MySQL database
* A sample WordPress application
## Pre-requisites
Before starting this guide, you should have already [configured your AWS account](../../cloud-providers/aws/aws-provider.md) for use with Crossplane.
You should also have an AWS credentials file at `~/.aws/credentials` already on your local filesystem.
## Administrator Tasks
This section covers tasks performed by the cluster or cloud administrator. These include:
- Importing AWS provider credentials
- Defining resource classes for cluster and database resources
- Creating all EKS pre-requisite artifacts
- Creating a target EKS cluster (using dynamic provisioning with the cluster resource class)
> Note: All artifacts created by the administrator are stored/hosted in the `crossplane-system` namespace, which has
restricted access, i.e. `Application Owner(s)` should not have access to them.
To successfully follow this guide, make sure your `kubectl` context points to the cluster where `Crossplane` was deployed.
### Configuring EKS Cluster Pre-requisites
EKS cluster deployment is somewhat of an arduous process right now.
A number of artifacts and configurations need to be set up within the AWS console prior to provisioning an EKS cluster using Crossplane.
We anticipate that AWS will make improvements on this user experience in the near future.
#### Create a named keypair
1. Use an existing ec2 key pair or create a new key pair by following [these steps](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
1. Export the key pair name as the EKS_WORKER_KEY_NAME environment variable
```console
export EKS_WORKER_KEY_NAME=replace-with-key-name
```
#### Create your Amazon EKS Service Role
[Original Source Guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
1. Open the [IAM console](https://console.aws.amazon.com/iam/).
1. Choose Roles, then Create role.
1. Choose EKS from the list of services, then "Allows EKS to manage clusters on your behalf", then Next: Permissions.
1. Choose Next: Tags.
1. Choose Next: Review.
1. For the Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.
1. Set the EKS_ROLE_ARN environment variable to the name of your role ARN
```console
export EKS_ROLE_ARN=replace-with-full-role-arn
```
#### Create your Amazon EKS Cluster VPC
[Original Source Guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation).
1. From the navigation bar, select a Region that supports Amazon EKS.
> Note: Amazon EKS is available in the following Regions at this time:
> * US West (Oregon) (us-west-2)
> * US East (N. Virginia) (us-east-1)
> * EU (Ireland) (eu-west-1)
1. Set the REGION environment variable to your region
```console
export REGION=replace-with-region
```
1. Choose Create stack.
1. For Choose a template, select Specify an Amazon S3 template URL.
1. Paste the following URL into the text area and choose Next.
```
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-11-07/amazon-eks-vpc-sample.yaml
```
1. On the Specify Details page, fill out the parameters accordingly, and choose Next.
```
* Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.
* VpcBlock: Choose a CIDR range for your VPC. You may leave the default value.
* Subnet01Block: Choose a CIDR range for subnet 1. You may leave the default value.
* Subnet02Block: Choose a CIDR range for subnet 2. You may leave the default value.
* Subnet03Block: Choose a CIDR range for subnet 3. You may leave the default value.
```
1. (Optional) On the Options page, tag your stack resources and choose Next.
1. On the Review page, choose Create.
1. When your stack is created, select it in the console and choose Outputs.
1. Using values from outputs, export the following environment variables.
```console
export EKS_VPC=replace-with-eks-vpcId
export EKS_SUBNETS=replace-with-eks-subnetIds01,replace-with-eks-subnetIds02,replace-with-eks-subnetIds03
export EKS_SECURITY_GROUP=replace-with-eks-securityGroupId
```
#### Create an RDS subnet group
1. Navigate to the AWS console in the same region as the EKS cluster
1. Navigate to `RDS` service
1. Navigate to `Subnet groups` in left hand pane
1. Click `Create DB Subnet Group`
1. Name your subnet group, e.g. `eks-db-subnets`
1. Select the VPC created in the EKS VPC step
1. Click `Add all subnets related to this VPC`
1. Click Create
1. Export the db subnet group name
```console
export RDS_SUBNET_GROUP_NAME=replace-with-DBSubnetgroup-name
```
#### Create an RDS Security Group (example only)
> Note: This will make your RDS instance visible from anywhere on the internet.
This is for **EXAMPLE PURPOSES ONLY** and is **NOT RECOMMENDED** for production systems.
1. Navigate to ec2 in the same region as the EKS cluster
1. Click: security groups
1. Click `Create Security Group`
1. Name it, e.g. `demo-rds-public-visibility`
1. Give it a description
1. Select the same VPC as the EKS cluster.
1. On the Inbound Rules tab, choose Edit.
- For Type, choose `MYSQL/Aurora`
- For Port Range, type `3306`
- For Source, choose `Anywhere` from drop down or type: `0.0.0.0/0`
1. Choose Add another rule if you need to add more IP addresses or different port ranges.
1. Click: Create
1. Export the security group id
```console
export RDS_SECURITY_GROUP=replace-with-security-group-id
```
### Deploy all Workload Resources
Now deploy all the workload resources, including the RDS database and EKS cluster with the following commands:
Create provider:
```console
sed -e "s|BASE64ENCODED_AWS_PROVIDER_CREDS|`cat ~/.aws/credentials|base64|tr -d '\n'`|g;s|EKS_WORKER_KEY_NAME|$EKS_WORKER_KEY_NAME|g;s|EKS_ROLE_ARN|$EKS_ROLE_ARN|g;s|REGION|$REGION|g;s|EKS_VPC|$EKS_VPC|g;s|EKS_SUBNETS|$EKS_SUBNETS|g;s|EKS_SECURITY_GROUP|$EKS_SECURITY_GROUP|g;s|RDS_SUBNET_GROUP_NAME|$RDS_SUBNET_GROUP_NAME|g;s|RDS_SECURITY_GROUP|$RDS_SECURITY_GROUP|g" cluster/examples/workloads/wordpress-aws/provider.yaml | kubectl create -f -
```
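The long `sed` expression above is simply a series of `s|PLACEHOLDER|value|g` substitutions separated by `;`. A reduced sketch of the same mechanism, using hypothetical placeholder names so it doesn't touch the real environment variables from this guide:

```console
# Two hypothetical placeholders substituted in one sed pass
cat <<'EOF' > /tmp/template.yaml
region: DEMO_REGION
keyName: DEMO_KEY_NAME
EOF
DEMO_REGION=us-west-2
DEMO_KEY_NAME=demo-key
RESULT=$(sed -e "s|DEMO_REGION|${DEMO_REGION}|g;s|DEMO_KEY_NAME|${DEMO_KEY_NAME}|g" /tmp/template.yaml)
echo "${RESULT}"
```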
Create cluster:
```console
kubectl create -f cluster/examples/workloads/wordpress-aws/cluster.yaml
```
It will take a while (~15 minutes) for the EKS cluster to be deployed and become available.
You can keep an eye on its status with the following command:
```console
kubectl -n crossplane-system get ekscluster -o custom-columns=NAME:.metadata.name,STATE:.status.state,CLUSTERNAME:.status.clusterName,ENDPOINT:.status.endpoint,LOCATION:.spec.location,CLUSTERCLASS:.spec.classRef.name,RECLAIMPOLICY:.spec.reclaimPolicy
```
Once the cluster is done provisioning, you should see output similar to the following:
> Note: the `STATE` field is `ACTIVE` and the `ENDPOINT` field has a value.
```console
NAME STATE CLUSTERNAME ENDPOINT LOCATION CLUSTERCLASS RECLAIMPOLICY
eks-8f1f32c7-f6b4-11e8-844c-025000000001 ACTIVE <none> https://B922855C944FC0567E9050FCD75B6AE5.yl4.us-west-2.eks.amazonaws.com <none> standard-cluster Delete
```
## Application Developer Tasks
This section covers tasks performed by an application developer. These include:
- Defining a Workload in terms of Resources and Payload (Deployment/Service) which will be deployed into the target Kubernetes Cluster
- Defining the resource's dependency requirements, in this case a `MySQL` database
Now that the EKS cluster is ready, let's begin deploying the workload as the application developer:
```console
kubectl create -f cluster/examples/workloads/wordpress-aws/workload.yaml
```
This will also take a while to complete, since the MySQL database needs to be deployed before the WordPress pod can consume it.
You can follow along with the MySQL database deployment with the following:
```console
kubectl -n crossplane-system get rdsinstance -o custom-columns=NAME:.metadata.name,STATUS:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.version
```
Once the `STATUS` column is `available` as seen below, the WordPress pod should be able to connect to it:
```console
NAME STATUS CLASS VERSION
mysql-2a0be04f-f748-11e8-844c-025000000001 available standard-mysql <none>
```
Now we can watch the WordPress pod come online and a public IP address will get assigned to it:
```console
kubectl -n demo get workload -o custom-columns=NAME:.metadata.name,CLUSTER:.spec.targetCluster.name,NAMESPACE:.spec.targetNamespace,DEPLOYMENT:.spec.targetDeployment.metadata.name,SERVICE-EXTERNAL-IP:.status.service.loadBalancer.ingress[0].ip
```
When a public IP address has been assigned, you'll see output similar to the following:
```console
NAME CLUSTER NAMESPACE DEPLOYMENT SERVICE-EXTERNAL-IP
demo demo-cluster demo wordpress 104.43.240.15
```
Once WordPress is running and has a public IP address through its service, we can get the URL with the following command:
```console
echo "http://$(kubectl -n demo get workload demo -o jsonpath='{.status.service.loadBalancer.ingress[0].ip}')"
```
Paste that URL into your browser and you should see WordPress running and ready for you to walk through its setup experience. You may need to wait a few minutes for this to become accessible via the AWS load balancer.
## Connecting to your EKSCluster (optional)
Requires:
* awscli
* aws-iam-authenticator
Please see [Install instructions](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) section: `Install and Configure kubectl for Amazon EKS`
When the EKSCluster is up and running, you can update your kubeconfig with:
```console
aws eks update-kubeconfig --name <replace-me-eks-cluster-name>
```
The node pool is created after the master is up, so expect to wait a few more minutes; eventually you can see that the nodes have joined with:
```console
kubectl config use-context <context-from-last-command>
kubectl get nodes
```
## Clean-up
First delete the workload, which will delete WordPress and the MySQL database:
```console
kubectl delete -f cluster/examples/workloads/wordpress-aws/workload.yaml
```
Then delete the EKS cluster:
```console
kubectl delete -f cluster/examples/workloads/wordpress-aws/cluster.yaml
```
Finally, delete the provider credentials:
```console
kubectl delete -f cluster/examples/workloads/wordpress-aws/provider.yaml
```
> Note: There may still be an ELB that was not properly cleaned up, and you will need
> to go to EC2 > ELBs and delete it manually.
# Deploying a WordPress Workload on Microsoft Azure
This guide will walk you through how to use Crossplane to deploy a stateful workload in a portable way to Azure.
In this environment, the following components will be dynamically provisioned and configured during this guide:
* AKS Kubernetes cluster
* Azure MySQL database
* WordPress application
## Pre-requisites
Before starting this guide, you should have already [configured your Azure account](../../cloud-providers/azure/azure-provider.md) for usage by Crossplane.
- You should have a `crossplane-azure-provider-key.json` file on your local filesystem, preferably at the root of where you cloned the [Crossplane repo](https://github.com/crossplaneio/crossplane).
- You should have an Azure resource group named `group-westus-1`. If not, change the value of `resourceGroupName` in `cluster/examples/workloads/wordpress-azure/provider.yaml` to an existing resource group.
## Administrator Tasks
This section covers the tasks performed by the cluster or cloud administrator, which includes:
- Import Azure provider credentials
- Define Resource classes for cluster and database resources
- Create a target Kubernetes cluster (using dynamic provisioning with the cluster resource class)
**Note**: all artifacts created by the administrator are stored/hosted in the `crossplane-system` namespace, which has
restricted access, i.e. `Application Owner(s)` should not have access to them.
For the next steps, make sure your `kubectl` context points to the cluster where `Crossplane` was deployed.
- Create the Azure provider object in your cluster:
```console
sed "s/BASE64ENCODED_AZURE_PROVIDER_CREDS/`cat crossplane-azure-provider-key.json|base64|tr -d '\n'`/g;" cluster/examples/workloads/wordpress-azure/provider.yaml | kubectl create -f -
```
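To see what that pipeline is doing, here is a self-contained sketch using throwaway files. The file names and JSON contents below are stand-ins for illustration only; the real command reads `crossplane-azure-provider-key.json` and `provider.yaml` from the repo:

```shell
# Write a dummy credentials file (stands in for crossplane-azure-provider-key.json)
printf '{"clientId":"demo"}' > /tmp/demo-creds.json

# base64 wraps its output at 76 columns; tr -d '\n' strips the line breaks
# so the encoded value fits on a single YAML line
ENCODED=$(base64 < /tmp/demo-creds.json | tr -d '\n')

# A one-line stand-in for provider.yaml containing the same placeholder token
printf 'credentials: BASE64ENCODED_AZURE_PROVIDER_CREDS\n' > /tmp/demo-provider.yaml

# sed swaps the token for the encoded credentials, exactly as the command
# above does before piping the result to kubectl
sed "s/BASE64ENCODED_AZURE_PROVIDER_CREDS/$ENCODED/g" /tmp/demo-provider.yaml
```

This prints the stand-in `provider.yaml` with the token replaced by the encoded credentials, which is the document `kubectl create -f -` receives on stdin.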
- Next, create the AKS cluster that will eventually be the target cluster for your Workload deployment:
```console
kubectl create -f cluster/examples/workloads/wordpress-azure/cluster.yaml
```
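For orientation, a `KubernetesCluster` claim of the kind that `cluster.yaml` contains looks roughly like the following. This is an illustrative sketch; the `apiVersion` and exact fields depend on your Crossplane version, so consult the actual file in the repo:

```yaml
# Illustrative sketch only -- see cluster/examples/workloads/wordpress-azure/cluster.yaml
apiVersion: compute.crossplane.io/v1alpha1   # apiVersion varies by Crossplane version
kind: KubernetesCluster
metadata:
  name: demo-cluster
  namespace: default
spec:
  classReference:              # binds the claim to the ResourceClass defined by the admin
    name: standard-cluster
    namespace: crossplane-system
```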
It will take a while (~15 minutes) for the AKS cluster to be deployed and become ready. You can keep an eye on its status with the following command:
```console
kubectl -n crossplane-system get akscluster -o custom-columns=NAME:.metadata.name,STATE:.status.state,CLUSTERNAME:.status.clusterName,ENDPOINT:.status.endpoint,LOCATION:.spec.location,CLUSTERCLASS:.spec.classRef.name,RECLAIMPOLICY:.spec.reclaimPolicy
```
Once the cluster is done provisioning, you should see output similar to the following (note the `STATE` field is `Succeeded` and the `ENDPOINT` field has a value):
```console
NAME STATE CLUSTERNAME ENDPOINT LOCATION CLUSTERCLASS RECLAIMPOLICY
aks-587762b3-f72b-11e8-bcbe-0800278fedb1 Succeeded aks-587762b3-f72b-11e8-bcbe-080 crossplane-aks-653c32ef.hcp.centralus.azmk8s.io Central US standard-cluster Delete
```
To recap the operations that we just performed as the administrator:
- Defined a `Provider` with Microsoft Azure service principal credentials
- Defined `ResourceClasses` for `KubernetesCluster` and `MySQLInstance`
- Provisioned (dynamically) an AKS Cluster using the `ResourceClass`
## Application Developer Tasks
This section covers the tasks performed by the application developer, which includes:
- Define the Workload in terms of Resources and Payload (Deployment/Service), which will be deployed into the target Kubernetes Cluster
- Define the dependency resource requirements, in this case a `MySQL` database
Let's begin deploying the workload as the application developer:
- Now that the target AKS cluster is ready, we can deploy the Workload that contains all the WordPress resources, including the SQL database, with the following single command:
```console
kubectl create -f cluster/examples/workloads/wordpress-azure/workload.yaml
```
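For orientation, the Workload manifest being applied has roughly the following shape. The field names below are inferred from the `custom-columns` queries used later in this guide; treat this as an illustrative sketch rather than the exact contents of `workload.yaml`:

```yaml
# Illustrative sketch only -- see cluster/examples/workloads/wordpress-azure/workload.yaml
apiVersion: compute.crossplane.io/v1alpha1   # apiVersion varies by Crossplane version
kind: Workload
metadata:
  name: test-workload
spec:
  targetCluster:            # the KubernetesCluster claim to deploy into
    name: demo-cluster
  targetNamespace: demo     # namespace to create on the target cluster
  targetDeployment:         # an ordinary Deployment, pushed to the target cluster
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: wordpress
    # container spec omitted
```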
This will also take a while to complete, since the MySQL database needs to be deployed before the WordPress pod can consume it.
You can follow along with the MySQL database deployment with the following:
```console
kubectl -n crossplane-system get mysqlserver -o custom-columns=NAME:.metadata.name,STATUS:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.version
```
Once the `STATUS` column is `Ready` like below, the WordPress pod should be able to connect to it:
```console
NAME STATUS CLASS VERSION
mysql-58425bda-f72d-11e8-bcbe-0800278fedb1 Ready standard-mysql 5.7
```
- Now we can watch the WordPress pod come online and wait for a public IP address to be assigned to it:
```console
kubectl get workload -o custom-columns=NAME:.metadata.name,CLUSTER:.spec.targetCluster.name,NAMESPACE:.spec.targetNamespace,DEPLOYMENT:.spec.targetDeployment.metadata.name,SERVICE-EXTERNAL-IP:.status.service.loadBalancer.ingress[0].ip
```
When a public IP address has been assigned, you'll see output similar to the following:
```console
NAME CLUSTER NAMESPACE DEPLOYMENT SERVICE-EXTERNAL-IP
test-workload demo-cluster demo wordpress 104.43.240.15
```
- Once WordPress is running and has a public IP address through its service, we can get the URL with the following command:
```console
echo "http://$(kubectl get workload test-workload -o jsonpath='{.status.service.loadBalancer.ingress[0].ip}')"
```
- Paste that URL into your browser and you should see WordPress running and ready for you to walk through its setup experience.
## Clean-up
First delete the workload, which will delete WordPress and the MySQL database:
```console
kubectl delete -f cluster/examples/workloads/wordpress-azure/workload.yaml
```
Then delete the AKS cluster:
```console
kubectl delete -f cluster/examples/workloads/wordpress-azure/cluster.yaml
```
Finally, delete the provider credentials:
```console
kubectl delete -f cluster/examples/workloads/wordpress-azure/provider.yaml
```
# Deploying a WordPress Workload on GCP
This guide will walk you through how to use Crossplane to deploy a stateful workload in a portable way to GCP.
In this environment, the following components will be dynamically provisioned and configured during this guide:
* GKE Kubernetes cluster
* CloudSQL database
* WordPress application
## Pre-requisites
Before starting this guide, you should have already [configured your GCP account](../../cloud-providers/gcp/gcp-provider.md) for usage by Crossplane.
You should have a `crossplane-gcp-provider-key.json` file on your local filesystem, preferably at the root of where you cloned the [Crossplane repo](https://github.com/crossplaneio/crossplane).
## Administrator Tasks
This section covers the tasks performed by the cluster or cloud administrator, which includes:
- Import GCP provider credentials
- Define Resource classes for cluster and database resources
- Create a target Kubernetes cluster (using dynamic provisioning with the cluster resource class)
**Note**: all artifacts created by the administrator are stored/hosted in the `crossplane-system` namespace, which has
restricted access, i.e. `Application Owner(s)` should not have access to them.
For the next steps, make sure your `kubectl` context points to the cluster where `Crossplane` was deployed.
- Export Project ID
**NOTE**: you can skip this step if you generated the GCP Service Account using `gcloud`
```bash
export DEMO_PROJECT_ID=[your-demo-project-id]
```
- Patch and Apply `provider.yaml`:
```bash
sed "s/BASE64ENCODED_CREDS/`cat crossplane-gcp-provider-key.json|base64 | tr -d '\n'`/g;s/DEMO_PROJECT_ID/$DEMO_PROJECT_ID/g" cluster/examples/workloads/wordpress-gcp/provider.yaml | kubectl create -f -
```
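The `;` inside the sed script chains two independent substitutions: one for the base64-encoded credentials and one for the project ID. A minimal local illustration, using placeholder values rather than the real key file and `provider.yaml`:

```shell
# Stand-in values; the real command reads your service account key file
# and the DEMO_PROJECT_ID you exported above
export DEMO_PROJECT_ID=my-demo-project
ENCODED=$(printf '{"type":"service_account"}' | base64 | tr -d '\n')

# A two-line stand-in for provider.yaml containing both placeholder tokens
printf 'credentials: BASE64ENCODED_CREDS\nprojectID: DEMO_PROJECT_ID\n' > /tmp/gcp-provider.yaml

# Both s///g expressions run against every line, in order
sed "s/BASE64ENCODED_CREDS/$ENCODED/g;s/DEMO_PROJECT_ID/$DEMO_PROJECT_ID/g" /tmp/gcp-provider.yaml
```

The result is the stand-in `provider.yaml` with both tokens filled in, which is what gets piped to `kubectl create -f -` in the real command.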
- Verify that GCP Provider is in `Ready` state
```bash
kubectl -n crossplane-system get providers.gcp.crossplane.io -o custom-columns=NAME:.metadata.name,STATUS:'.status.Conditions[?(@.Status=="True")].Type',PROJECT-ID:.spec.projectID
```
Your output should look similar to:
```bash
NAME STATUS PROJECT-ID
gcp-provider Ready [your-project-id]
```
- Verify that Resource Classes have been created
```bash
kubectl -n crossplane-system get resourceclass -o custom-columns=NAME:metadata.name,PROVISIONER:.provisioner,PROVIDER:.providerRef.name,RECLAIM-POLICY:.reclaimPolicy
```
Your output should be:
```bash
NAME PROVISIONER PROVIDER RECLAIM-POLICY
standard-cluster gkecluster.compute.gcp.crossplane.io/v1alpha1 gcp-provider Delete
standard-mysql cloudsqlinstance.database.gcp.crossplane.io/v1alpha1 gcp-provider Delete
```
- Create a target Kubernetes cluster where `Application Owner(s)` will deploy their `Workload(s)`
As the administrator, you will create a Kubernetes cluster by leveraging the Kubernetes cluster `ResourceClass` created earlier and Crossplane's dynamic provisioning.
```bash
kubectl apply -f cluster/examples/workloads/wordpress-gcp/cluster.yaml
```
- Verify that Kubernetes Cluster resource was created
```bash
kubectl -n crossplane-system get kubernetescluster -o custom-columns=NAME:.metadata.name,CLUSTERCLASS:.spec.classReference.name,CLUSTERREF:.spec.resourceName.name
```
Your output should look similar to:
```bash
NAME CLUSTERCLASS CLUSTERREF
demo-gke-cluster standard-cluster gke-67419e79-f5b3-11e8-9cec-9cb6d08bde99
```
- Verify that the target GKE cluster was successfully created
```bash
kubectl -n crossplane-system get gkecluster -o custom-columns=NAME:.metadata.name,STATE:.status.state,CLUSTERNAME:.status.clusterName,ENDPOINT:.status.endpoint,LOCATION:.spec.zone,CLUSTERCLASS:.spec.classRef.name,RECLAIMPOLICY:.spec.reclaimPolicy
```
Your output should look similar to:
```bash
NAME STATE CLUSTERNAME ENDPOINT LOCATION CLUSTERCLASS RECLAIMPOLICY
gke-67419e79-f5b3-11e8-9cec-9cb6d08bde99 RUNNING gke-6742fe8d-f5b3-11e8-9cec-9cb6d08bde99 146.148.93.40 us-central1-a standard-cluster Delete
```
To recap the operations that we just performed as the administrator:
- Defined a `Provider` with Google Service Account credentials
- Defined `ResourceClasses` for `KubernetesCluster` and `MySQLInstance`
- Provisioned (dynamically) a GKE Cluster using the `ResourceClass`
## Application Developer Tasks
This section covers the tasks performed by the application developer, which includes:
- Define the Workload in terms of Resources and Payload (Deployment/Service), which will be deployed into the target Kubernetes Cluster
- Define the dependency resource requirements, in this case a `MySQL` database
Let's begin deploying the workload as the application developer:
- Deploy workload
```bash
kubectl apply -f cluster/examples/workloads/wordpress-gcp/workload.yaml
```
- Wait for `MySQLInstance` to be in `Bound` State
You can check the status via:
```bash
kubectl get mysqlinstance -o custom-columns=NAME:.metadata.name,VERSION:.spec.engineVersion,STATE:.status.bindingPhase,CLASS:.spec.classReference.name
```
Your output should look like:
```bash
NAME VERSION STATE CLASS
demo 5.7 Bound standard-mysql
```
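The `MySQLInstance` claim that `workload.yaml` creates has roughly the following shape. The field names are inferred from the `custom-columns` query above; the `apiVersion` depends on your Crossplane version, so treat this as a sketch rather than the exact manifest:

```yaml
# Illustrative sketch only -- see cluster/examples/workloads/wordpress-gcp/workload.yaml
apiVersion: storage.crossplane.io/v1alpha1   # apiVersion varies by Crossplane version
kind: MySQLInstance
metadata:
  name: demo
spec:
  classReference:          # binds the claim to the standard-mysql ResourceClass
    name: standard-mysql
    namespace: crossplane-system
  engineVersion: "5.7"
```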
**Note**: to check the status of the concrete resource type as the `Administrator`, you can run:
```bash
kubectl -n crossplane-system get cloudsqlinstance -o custom-columns=NAME:.metadata.name,STATUS:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.databaseVersion
```
Your output should be similar to:
```bash
NAME STATUS CLASS VERSION
mysql-2fea0d8e-f5bb-11e8-9cec-9cb6d08bde99 RUNNABLE standard-mysql MYSQL_5_7
```
- Wait for `Workload` External IP Address
```bash
kubectl get workload -o custom-columns=NAME:.metadata.name,CLUSTER:.spec.targetCluster.name,NAMESPACE:.spec.targetNamespace,DEPLOYMENT:.spec.targetDeployment.metadata.name,SERVICE-EXTERNAL-IP:.status.service.loadBalancer.ingress[0].ip
```
**Note**: the `Workload` is defined in the Application Owner's (`default`) namespace
Your output should look similar to:
```bash
NAME CLUSTER NAMESPACE DEPLOYMENT SERVICE-EXTERNAL-IP
demo demo-gke-cluster demo wordpress 35.193.100.113
```
- Verify that the `WordPress` service is accessible by navigating to `SERVICE-EXTERNAL-IP` in your browser.
At this point, you should see the setup page for WordPress in your web browser.
## Clean Up
Once you are done with this example, you can clean up all its artifacts with the following commands:
- Remove `Workload`
```bash
kubectl delete -f cluster/examples/workloads/wordpress-gcp/workload.yaml
```
- Remove `KubernetesCluster`
```bash
kubectl delete -f cluster/examples/workloads/wordpress-gcp/cluster.yaml
```
- Remove GCP `Provider` and `ResourceClasses`
```bash
kubectl delete -f cluster/examples/workloads/wordpress-gcp/provider.yaml
```
- Delete Google Project
```bash
# list all your projects
gcloud projects list
# delete demo project
gcloud projects delete [demo-project-id]
```