Merge pull request #119 from hasheddan/prune-old

Remove all unsupported Crossplane versions
This commit is contained in:
Daniel Mangum 2021-08-22 20:21:14 -04:00 committed by GitHub
commit 3613220e0f
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
979 changed files with 1 addition and 161157 deletions


@ -1 +1 @@
[{"version":"v1.3","path":"/docs/v1.3"},{"version":"v1.2","path":"/docs/v1.2"},{"version":"v1.1","path":"/docs/v1.1"},{"version":"v1.0","path":"/docs/v1.0"},{"version":"v0.14","path":"/docs/v0.14"},{"version":"v0.13","path":"/docs/v0.13"},{"version":"v0.12","path":"/docs/v0.12"},{"version":"v0.11","path":"/docs/v0.11"},{"version":"v0.10","path":"/docs/v0.10"},{"version":"v0.9","path":"/docs/v0.9"},{"version":"v0.8","path":"/docs/v0.8"},{"version":"v0.7","path":"/docs/v0.7"},{"version":"v0.6","path":"/docs/v0.6"},{"version":"v0.5","path":"/docs/v0.5"},{"version":"v0.4","path":"/docs/v0.4"},{"version":"v0.3","path":"/docs/v0.3"},{"version":"v0.2","path":"/docs/v0.2"},{"version":"v0.1","path":"/docs/v0.1"},{"version":"master","path":"/docs/master"}]
[{"version":"v1.3","path":"/docs/v1.3"},{"version":"v1.2","path":"/docs/v1.2"},{"version":"v1.1","path":"/docs/v1.1"},{"version":"master","path":"/docs/master"}]


@ -1,26 +0,0 @@
# Crossplane
Crossplane is an open source multicloud control plane. It introduces workload and resource abstractions on top of existing managed services that enable a high degree of workload portability across cloud providers. A single Crossplane instance enables the provisioning and full-lifecycle management of services and infrastructure across a wide range of providers, offerings, vendors, regions, and clusters. Crossplane offers a universal API for cloud computing, a workload scheduler, and a set of smart controllers that can automate work across clouds.
<h4 align="center"><img src="media/arch.png" alt="Crossplane" height="400"></h4>
Crossplane presents a declarative management style API that covers a wide range of portable abstractions including databases, message queues, buckets, data pipelines, serverless, clusters, and many more to come. It is based on the declarative resource model of the popular [Kubernetes](https://github.com/kubernetes/kubernetes) project, and applies many of the lessons learned in container orchestration to multicloud workload and resource orchestration.
Crossplane supports a clean separation of concerns between developers and administrators. Developers define workloads without having to worry about implementation details, environment constraints, and policies. Administrators can define environment specifics and policies. This separation of concerns leads to a higher degree of reusability and reduces complexity.
Crossplane includes a workload scheduler that can factor a number of criteria including capabilities, availability, reliability, cost, regions, and performance while deploying workloads and their resources. The scheduler works alongside specialized resource controllers to ensure policies set by administrators are honored.
For a deeper dive into Crossplane, see the [architecture](https://docs.google.com/document/d/1whncqdUeU2cATGEJhHvzXWC9xdK29Er45NJeoemxebo/edit?usp=sharing) document.
## Table of Contents
* [Quick Start Guide](quick-start.md)
* [Getting Started](getting-started.md)
* [Installing Crossplane](install-crossplane.md)
* [Adding Your Cloud Providers](cloud-providers.md)
* [Deploying Workloads](deploy.md)
* [Running Resources](running-resources.md)
* [Troubleshooting](troubleshoot.md)
* [Concepts](concepts.md)
* [FAQs](faqs.md)
* [Contributing](contributing.md)


@ -1,14 +0,0 @@
---
title: Adding Your Cloud Providers
toc: true
weight: 330
indent: true
---
# Adding Your Cloud Providers
In order for Crossplane to be able to manage resources across all your clouds, you will need to add your cloud provider credentials to Crossplane.
Use the links below for specific instructions to add each of the following cloud providers:
* [Google Cloud Platform (GCP)](cloud-providers/gcp/gcp-provider.md)
* [Microsoft Azure](cloud-providers/azure/azure-provider.md)
* [Amazon Web Services (AWS)](cloud-providers/aws/aws-provider.md)


@ -1,33 +0,0 @@
# Adding Amazon Web Services (AWS) to Crossplane
In this guide, we will walk through the steps necessary to configure your AWS account to be ready for integration with Crossplane.
## AWS Credentials
### Option 1: aws Command Line Tool
If you have already installed and configured the [`aws` command line tool](https://aws.amazon.com/cli/), you can simply find your AWS credentials file in `~/.aws/credentials`.
### Option 2: AWS Console in Web Browser
If you do not have the `aws` tool installed, you can alternatively log in to the [AWS console](https://aws.amazon.com/console/) and export the credentials.
The steps below are from the [AWS SDK for Go](https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/setting-up.html):
1. Open the IAM console.
1. On the navigation menu, choose Users.
1. Choose your IAM user name (not the check box).
1. Open the Security credentials tab, and then choose Create access key.
1. To see the new access key, choose Show. Your credentials resemble the following:
- Access key ID: AKIAIOSFODNN7EXAMPLE
- Secret access key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
1. To download the key pair, choose Download .csv file.
Then convert the `*.csv` file to the below format and save it to `~/.aws/credentials`:
```
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```
After the steps above, you should have your AWS credentials stored in `~/.aws/credentials`.
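If you later need to read those values back out (for instance to wire them into another tool), a small awk sketch can do it; the helper name and the sample file path here are illustrative, not part of the AWS tooling:

```shell
# Sketch: read values back out of an INI-style credentials file.
# A sample file stands in here; point it at ~/.aws/credentials in practice.
cat > /tmp/sample-credentials <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
EOF

# Print one key's value. This ignores profile section headers, so it
# assumes a single [default] profile like the example above.
aws_ini_get() {
  awk -F ' *= *' -v key="$2" '$1 == key { print $2 }' "$1"
}

aws_ini_get /tmp/sample-credentials aws_access_key_id   # AKIAIOSFODNN7EXAMPLE
```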


@ -1,61 +0,0 @@
# Adding Microsoft Azure to Crossplane
In this guide, we will walk through the steps necessary to configure your Azure account to be ready for integration with Crossplane.
The general steps we will take are summarized below:
* Create a new service principal (account) that Crossplane will use to create and manage Azure resources
* Add the required permissions to the account
* Consent to the permissions using an administrator account
## Preparing your Microsoft Azure Account
In order to manage resources in Azure, you must provide credentials for an Azure service principal that Crossplane can use to authenticate.
This assumes that you have already [set up the Azure CLI client](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest) with your credentials.
Create a JSON file that contains all the information needed to connect and authenticate to Azure:
```console
# create service principal with Owner role
az ad sp create-for-rbac --sdk-auth --role Owner > crossplane-azure-provider-key.json
```
Take note of the `clientId` value from the JSON file that we just created, and save it to an environment variable:
```console
export AZURE_CLIENT_ID=<clientId value from json file>
```
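If you would rather script that step than copy the value by hand, the `clientId` can be extracted from the key file directly; this is a sketch that assumes the flat JSON layout emitted by `az ad sp create-for-rbac --sdk-auth`, with a sample file standing in:

```shell
# Sketch: pull clientId out of the service principal key file.
# Sample file below; in practice use crossplane-azure-provider-key.json.
cat > /tmp/crossplane-azure-provider-key.json <<'EOF'
{
  "clientId": "00000000-1111-2222-3333-444444444444",
  "clientSecret": "example-secret",
  "tenantId": "55555555-6666-7777-8888-999999999999"
}
EOF

# Grab the quoted value after "clientId" (assumes no nested objects).
AZURE_CLIENT_ID=$(grep -o '"clientId": *"[^"]*"' /tmp/crossplane-azure-provider-key.json | sed 's/.*"\([^"]*\)"$/\1/')
export AZURE_CLIENT_ID
echo "$AZURE_CLIENT_ID"   # 00000000-1111-2222-3333-444444444444
```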
Now add the required permissions to the service principal that will allow it to manage the necessary resources in Azure:
```console
# add required Azure Active Directory permissions
az ad app permission add --id ${AZURE_CLIENT_ID} --api 00000002-0000-0000-c000-000000000000 --api-permissions 1cda74f2-2616-4834-b122-5cb1b07f8a59=Role 78c8a3c8-a07e-4b9e-af1b-b5ccab50a175=Role
# grant (activate) the permissions
az ad app permission grant --id ${AZURE_CLIENT_ID} --api 00000002-0000-0000-c000-000000000000 --expires never
```
You might see an error similar to the following, but that is OK; the permissions should still have been granted:
```console
Operation failed with status: 'Conflict'. Details: 409 Client Error: Conflict for url: https://graph.windows.net/e7985bc4-a3b3-4f37-b9d2-fa256023b1ae/oauth2PermissionGrants?api-version=1.6
```
After these steps are completed, you should have the following file on your local filesystem:
* `crossplane-azure-provider-key.json`
## Grant Consent to Application Permissions
One more step is required to fully grant the permissions to the new service principal.
From the Azure Portal, you need to grant consent for the permissions using an admin account.
The steps to perform this action are listed below:
1. `echo ${AZURE_CLIENT_ID}` and note this ID value
1. Navigate to the Azure Portal: https://portal.azure.com
1. Click `Azure Active Directory`, or find it in the `All services` list
1. Click `App registrations (Preview)`
1. Click on the application from the list where the application (client) ID matches the value from step 1
1. Click `API permissions`
1. Click `Grant admin consent for Default Directory`
1. Click `Yes`


@ -1,99 +0,0 @@
# Adding Google Cloud Platform (GCP) to Crossplane
In this guide, we will walk through the steps necessary to configure your GCP account to be ready for integration with Crossplane.
The general steps we will take are summarized below:
* Create a new example project that all resources will be deployed to
* Enable required APIs such as Kubernetes and CloudSQL
* Create a service account that will be used to perform GCP operations from Crossplane
* Assign necessary roles to the service account
* Enable billing
For your convenience, the specific steps to accomplish those tasks are provided for you below using either the `gcloud` command line tool, or the GCP console in a web browser.
You can choose whichever you are more comfortable with.
## Option 1: gcloud Command Line Tool
If you have the `gcloud` tool installed, you can run the commands below from the example directory.
Instructions for installing `gcloud` can be found in the [Google docs](https://cloud.google.com/sdk/install).
```bash
# list your organizations (if applicable), take note of the specific organization ID you want to use
# if you have more than one organization (not common)
gcloud organizations list
# create a new project
export EXAMPLE_PROJECT_NAME=crossplane-example-123
gcloud projects create $EXAMPLE_PROJECT_NAME --enable-cloud-apis [--organization ORGANIZATION_ID]
# record the PROJECT_ID value of the newly created project
export EXAMPLE_PROJECT_ID=$(gcloud projects list --filter NAME=$EXAMPLE_PROJECT_NAME --format="value(PROJECT_ID)")
# enable Kubernetes API
gcloud --project $EXAMPLE_PROJECT_ID services enable container.googleapis.com
# enable CloudSQL API
gcloud --project $EXAMPLE_PROJECT_ID services enable sqladmin.googleapis.com
# create service account
gcloud --project $EXAMPLE_PROJECT_ID iam service-accounts create example-123 --display-name "Crossplane Example"
# export service account email
export EXAMPLE_SA="example-123@$EXAMPLE_PROJECT_ID.iam.gserviceaccount.com"
# create service account key (this will create a `key.json` file in your current working directory)
gcloud --project $EXAMPLE_PROJECT_ID iam service-accounts keys create --iam-account $EXAMPLE_SA key.json
# assign roles
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/iam.serviceAccountUser"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/container.admin"
```
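The three role bindings at the end follow the same pattern, so a small loop can generate them for review before you run them; a sketch using the example project and service account values from above:

```shell
# Sketch: print the role-binding commands for review instead of running
# them one by one. Pipe to `sh` (or drop the echo) once they look right.
EXAMPLE_PROJECT_ID=crossplane-example-123   # substitute your project ID
EXAMPLE_SA="example-123@${EXAMPLE_PROJECT_ID}.iam.gserviceaccount.com"

for role in roles/iam.serviceAccountUser roles/cloudsql.admin roles/container.admin; do
  echo gcloud projects add-iam-policy-binding "$EXAMPLE_PROJECT_ID" \
    --member "serviceAccount:${EXAMPLE_SA}" --role="$role"
done
```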
## Option 2: GCP Console in a Web Browser
If you chose to use the `gcloud` tool, you can skip this section entirely.
Create a GCP example project which we will use to host our example GKE cluster, as well as our example CloudSQL instance.
- Log in to the [GCP Console](https://console.cloud.google.com)
- Create a new project (either standalone or under an existing organization)
- Create Example Service Account
- Navigate to: [Create Service Account](https://console.cloud.google.com/iam-admin/serviceaccounts)
- `Service Account Name`: type "example"
- `Service Account ID`: leave auto assigned
- `Service Account Description`: type "Crossplane example"
- Click `Create` button
- This should advance to the next section `2 Grant this service account to project (optional)`
- We will assign this account 3 roles:
- `Service Account User`
- `Cloud SQL Admin`
- `Kubernetes Engine Admin`
- Click `Create` button
- This should advance to the next section `3 Grant users access to this service account (optional)`
- We don't need to assign any user or admin roles to this account for the example purposes, so you can leave the following two fields blank:
- `Service account users role`
- `Service account admins role`
- Next, we will create and export a service account key
- Click `+ Create Key` button.
- This should open a `Create Key` side panel
- Select `json` for the Key type (should be selected by default)
- Click `Create`
- This should show `Private key saved to your computer` confirmation dialog
- You also should see `crossplane-example-1234-[suffix].json` file in your browser's Download directory
- Save (copy or move) this file into the example (this) directory with the new name `key.json`
- Enable `Cloud SQL API`
- Navigate to [Cloud SQL Admin API](https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview)
- Click `Enable`
- Enable `Kubernetes Engine API`
- Navigate to [Kubernetes Engine API](https://console.developers.google.com/apis/api/container.googleapis.com/overview)
- Click `Enable`
## Enable Billing
No matter what option you chose to configure the previous steps, you will need to enable billing for your account in order to create and use Kubernetes clusters with GKE.
- Go to [GCP Console](https://console.cloud.google.com)
- Select example project
- Click `Enable Billing`
- Go to [Kubernetes Clusters](https://console.cloud.google.com/kubernetes/list)
- Click `Enable Billing`


@ -1,58 +0,0 @@
---
title: Concepts
toc: true
weight: 410
---
# Concepts
## Control Plane
Crossplane is an open source multicloud control plane that consists of smart controllers that can work across clouds to enable workload portability, provisioning and full-lifecycle management of infrastructure across a wide range of providers, vendors, regions, and offerings.
The control plane presents a declarative management style API that covers a wide range of portable abstractions that facilitate these goals across disparate environments, clusters, regions, and clouds.
Crossplane can be thought of as a higher-order orchestrator across cloud providers.
For convenience, Crossplane can run directly on top of an existing Kubernetes cluster without requiring any changes, even though Crossplane does not necessarily schedule or run any containers on the host cluster.
## Resources and Workloads
In Crossplane, a *resource* represents an external piece of infrastructure ranging from low level services like clusters and servers, to higher level infrastructure like databases, message queues, buckets, and more.
Resources are represented as persistent objects within Crossplane, and they typically manage one or more pieces of external infrastructure within a cloud provider or cloud offering.
Resources can also represent local or in-cluster services.
We model *workloads* as schedulable units of work that the user intends to run on a cloud provider.
Crossplane will support multiple types of workloads including container and serverless.
You can think of workloads as units that run **your** code and applications.
Every type of workload has a different kind of payload.
For example, a container workload can include a set of objects that will be deployed on a managed Kubernetes cluster, a reference to a Helm chart, etc.
A serverless workload could include a function that will run on a serverless managed service.
Workloads can contain requirements for where and how the workload can run, including regions, providers, affinity, cost, and others that the scheduler can use when assigning the workload.
## Resource Claims and Resource Classes
To support workload portability we expose the concept of a resource claim and a resource class.
A resource claim is a persistent object that captures the desired configuration of a resource from the perspective of a workload or application.
Its configuration is cloud-provider and cloud-offering independent, and it is free of implementation and environmental details.
A resource claim can be thought of as a request for an actual resource and is typically created by a developer or application owner.
A resource class is configuration that contains implementation details specific to a certain environment or deployment, and policies related to a kind of resource.
A ResourceClass acts as a template with implementation details and policy for resources that will be dynamically provisioned by the workload at deployment time.
A resource class is typically created by an admin or infrastructure owner.
## Dynamic and Static Provisioning
A resource can be statically or dynamically provisioned.
Static provisioning is when an administrator creates the resource manually.
They set the configuration required to provision and manage the corresponding external resource within a cloud provider or cloud offering.
Once provisioned, resources are available to be bound to resource claims.
Dynamic provisioning is when a resource claim does not find a matching resource and provisions a new one instead.
The newly provisioned resource is automatically bound to the resource claim.
To enable dynamic provisioning the administrator needs to create one or more resource class objects.
## Connection Secrets
Workloads reference all the resources they consume in their `resources` section.
This helps Crossplane set up connectivity between the workload and its resources, and create objects that hold connection information.
For example, for a database provisioned and managed by Crossplane, a secret will be created that contains a connection string, username, and password.
This secret will be propagated to the target cluster so that it can be used by the workload.
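Concretely, such a connection secret is an ordinary Kubernetes `Secret`; the sketch below is illustrative, and the key names `endpoint`, `username`, and `password` are assumptions matching what a workload would reference via `secretKeyRef`:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: demo            # matches the secretName the workload references
  namespace: demo       # propagated into the target cluster's namespace
type: Opaque
data:                   # values are base64-encoded, per Kubernetes convention
  endpoint: MTAuMC4wLjU=        # 10.0.0.5
  username: d29yZHByZXNz        # wordpress
  password: c3VwZXJzZWNyZXQ=    # supersecret
```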


@ -1,11 +0,0 @@
---
title: Contributing
toc: true
weight: 610
---
# Contributing
Crossplane is a community driven project and we welcome contributions.
That includes [opening issues](https://github.com/crossplaneio/crossplane/issues) for improvements you'd like to see as well as submitting changes to the code base.
For more information about the contribution process, please see the [contribution guide](https://github.com/crossplaneio/crossplane/blob/master/CONTRIBUTING.md).


@ -1,136 +0,0 @@
---
title: Deploying Workloads
toc: true
weight: 340
indent: true
---
# Deploying Workloads
## Guides
This section will walk you through how to deploy workloads to various cloud provider environments in a highly portable way.
For detailed instructions on how to deploy workloads to your cloud provider of choice, please visit the following guides:
* [Deploying a Workload on Google Cloud Platform (GCP)](workloads/gcp/wordpress-gcp.md)
* [Deploying a Workload on Microsoft Azure](workloads/azure/wordpress-azure.md)
* [Deploying a Workload on Amazon Web Services](workloads/aws/wordpress-aws.md)
## Workload Overview
A workload is a schedulable unit of work that contains a payload and defines requirements for how it should run and what resources it will consume.
This helps Crossplane set up connectivity between the workload and its resources, and make intelligent decisions about where and how to provision and manage those resources.
Crossplane's scheduler is responsible for deploying the workload to a target cluster; in this guide we will also use Crossplane to deploy that cluster within your chosen cloud provider.
This walkthrough also demonstrates Crossplane's concept of a clean "separation of concerns" between developers and administrators.
Developers define workloads without having to worry about implementation details, environment constraints, and policies.
Administrators can define environment specifics, and policies.
This separation of concerns leads to a higher degree of reusability and reduces complexity.
During this walkthrough, we will assume two separate identities:
1. Administrator (cluster or cloud) - responsible for setting up credentials and defining resource classes
2. Application Owner (developer) - responsible for defining and deploying the application and its dependencies
## Workload Example
### Dependency Resource
Let's take a closer look at a dependency resource that a workload will declare:
```yaml
## WordPress MySQL Database Instance
apiVersion: storage.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
  name: demo
  namespace: default
spec:
  classReference:
    name: standard-mysql
    namespace: crossplane-system
  engineVersion: "5.7"
```
This requests the creation of a `MySQLInstance` running MySQL version 5.7, which will be fulfilled by the `standard-mysql` `ResourceClass`.
Note that the application developer is not aware of any further specifics when it comes to the `MySQLInstance` beyond their requested engine version.
This enables highly portable workloads, since the environment specific details of the database are defined by the administrator in a `ResourceClass`.
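For comparison, the `standard-mysql` class referenced above would be defined by an administrator along these lines. This is only a sketch of the early API: the `provisioner`, `providerRef`, and `parameters` values shown are illustrative, provider-specific assumptions rather than a definitive schema.

```yaml
## Administrator-defined class fulfilling MySQLInstance claims (sketch)
apiVersion: core.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
  name: standard-mysql
  namespace: crossplane-system
parameters:                 # provider-specific settings (illustrative)
  tier: db-n1-standard-1
provisioner: cloudsqlinstance.database.gcp.crossplane.io/v1alpha1
providerRef:
  name: demo-gcp-provider   # hypothetical provider credentials object
reclaimPolicy: Delete
```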
### Workload
Now let's look at the workload itself, which will reference the dependency resource from above, as well as other information such as the target cluster to deploy to.
```yaml
## WordPress Workload
apiVersion: compute.crossplane.io/v1alpha1
kind: Workload
metadata:
  name: demo
  namespace: default
spec:
  resources:
  - name: demo
    secretName: demo
  targetCluster:
    name: demo-gke-cluster
    namespace: crossplane-system
  targetDeployment:
    apiVersion: extensions/v1beta1
    kind: Deployment
    metadata:
      name: wordpress
      labels:
        app: wordpress
    spec:
      selector:
        app: wordpress
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: wordpress
        spec:
          containers:
          - name: wordpress
            image: wordpress:4.6.1-apache
            env:
            - name: WORDPRESS_DB_HOST
              valueFrom:
                secretKeyRef:
                  name: demo
                  key: endpoint
            - name: WORDPRESS_DB_USER
              valueFrom:
                secretKeyRef:
                  name: demo
                  key: username
            - name: WORDPRESS_DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: demo
                  key: password
            ports:
            - containerPort: 80
              name: wordpress
  targetNamespace: demo
  targetService:
    apiVersion: v1
    kind: Service
    metadata:
      name: wordpress
    spec:
      ports:
      - port: 80
      selector:
        app: wordpress
      type: LoadBalancer
```
This `Workload` definition contains multiple components that inform Crossplane how to deploy the workload and its resources:
- Resources: list of the resources required by the payload application
- TargetCluster: the cluster where the payload application and all its requirements should be deployed
- TargetNamespace: the namespace on the target cluster
- Workload Payload:
- TargetDeployment
- TargetService


@ -1,48 +0,0 @@
---
title: FAQs
toc: true
weight: 510
---
# Frequently Asked Questions (FAQs)
### Where did the name Crossplane come from?
Crossplane is the fusing of “cross-cloud” and “control plane”. We wanted a noun that refers to the entity responsible for connecting different cloud providers and acting as a control plane across them. “Cross” implies “cross-cloud” and “plane” brings in “control plane”.
### What's up with popsicle?
We believe in a multi-flavor cloud.
### Why is Upbound open sourcing this project? What are Upbound's monetization plans?
Upbound's mission is to create a more open cloud-computing platform, with more choice and less lock-in. We believe Crossplane is an important step towards this vision and that it's going to take a village to solve this problem. We believe that the multicloud control plane is a new category of open source software, and it will ultimately disrupt closed source and proprietary models. Upbound aspires to be a commercial provider of a more open cloud-computing platform.
### What kind of governance model will be used for Crossplane?
Crossplane will be an independent project, and we plan on making it a community-driven project rather than a vendor-driven one. It will have an independent brand, GitHub organization, and an open governance model. It will not be tied to a single organization or individual.
### Will Crossplane be donated to an open source foundation?
We don't know yet. We are open to doing so, but we'd like to revisit this after the project has gotten some end-user community traction.
### Does using multicloud mean you will use the lowest common denominator across clouds?
Not necessarily. There are numerous best-of-breed cloud offerings that run on multiple clouds. For example, CockroachDB and Elasticsearch are world-class implementations of platform software and run well on multiple cloud providers. They compete with managed services offered by a cloud provider. We believe that by providing an open control plane for them to integrate with, and a common API, CLI, and UI for all of these services, more of these offerings will exist and get a first-class experience in the cloud.
### How are resources and claims related to PersistentVolumes in Kubernetes?
We modeled resource claims and classes after PersistentVolumes and PersistentVolumeClaims in Kubernetes. We believe many of the lessons learned from managing volumes in Kubernetes apply to managing resources within cloud providers. One notable exception is that we avoided creating a plugin model within Crossplane.
### How is workload scheduling related to pod scheduling in Kubernetes?
We modeled workload scheduling after the Pod scheduler in Kubernetes. We believe many of the lessons learned from Pod scheduling apply to scheduling workloads across cloud providers.
### Can I use Crossplane to consistently provision and manage multiple Kubernetes clusters?
Crossplane includes a portable API for Kubernetes clusters that will include common configuration such as node pools, auto-scalers, taints, admission controllers, etc. These will be applied to the specific implementations within cloud providers like EKS, GKE, and AKS. We see the Kubernetes Cluster API as something that will be used by administrators, not developers.
### Other attempts at building a higher level API on-top of a multitude of inconsistent lower level APIs have not been successful, will Crossplane not have the same issues?
We agree that building a consistent higher-level API on top of a multitude of inconsistent lower-level APIs is well known to be fraught with peril (e.g. dumbing down to the lowest common denominator, or resulting in an API so loosely defined that it is impossible to practically develop real portable applications on top of it).
Crossplane follows a different approach here. The portable API extracts the pieces that are common across all implementations, from the perspective of the workload. The rest of the implementation details are captured in full fidelity by the admin in resource classes. The combination of the two results in a full configuration that can be deployed. We believe this to be a reasonable tradeoff that avoids the lowest-common-denominator problem while still enabling portability.


@ -1,12 +0,0 @@
---
title: Getting Started
toc: true
weight: 310
---
# Getting Started
* [Installing Crossplane](install-crossplane.md)
* [Adding Your Cloud Providers](cloud-providers.md)
* [Deploying Workloads](deploy.md)
* [Running Resources](running-resources.md)
* [Troubleshooting](troubleshoot.md)


@ -1,107 +0,0 @@
---
title: Install
toc: true
weight: 320
indent: true
---
# Installing Crossplane
Crossplane can be easily installed into any existing Kubernetes cluster using the regularly published Helm chart.
The Helm chart contains all the custom resources and controllers needed to deploy and configure Crossplane.
## Pre-requisites
* [Kubernetes cluster](https://kubernetes.io/docs/setup/)
* For example [Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/), minimum version `v0.28+`
* [Helm](https://docs.helm.sh/using_helm/), minimum version `v2.9.1+`.
## Installation
Helm charts for Crossplane are currently published to the `alpha` and `master` channels.
In the future, `beta` and `stable` will also be available.
### Alpha
The alpha channel is the most recent release of Crossplane that is considered ready for testing by the community.
```console
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane
```
### Master
The `master` channel contains the latest commits, with all automated tests passing.
`master` is subject to instability, incompatibility, and features may be added or removed without much prior notice.
It is recommended to use one of the more stable channels, but if you want the absolute newest Crossplane installed, then you can use the `master` channel.
To install the Helm chart from master, you will need to pass the specific version returned by the `search` command:
```console
helm repo add crossplane-master https://charts.crossplane.io/master/
helm search crossplane
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version <version>
```
For example:
```console
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version 0.0.0-249.637ccf9
```
## Uninstalling the Chart
To uninstall/delete the `crossplane` deployment:
```console
helm delete --purge crossplane
```
That command removes all Kubernetes components associated with Crossplane, including all the custom resources and controllers.
## Configuration
The following table lists the configurable parameters of the Crossplane chart and their default values.
| Parameter | Description | Default |
| ------------------------- | --------------------------------------------------------------- | ------------------------------------------------------ |
| `image.repository` | Image | `crossplane/crossplane` |
| `image.tag` | Image tag | `master` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `imagePullSecrets` | Names of image pull secrets to use | `dockerhub` |
| `replicas` | The number of replicas to run for the Crossplane operator | `1` |
| `deploymentStrategy` | The deployment strategy for the Crossplane operator | `RollingUpdate` |
### Command Line
You can pass the settings with helm command line parameters.
Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`.
For example, the following command will install Crossplane with an image pull policy of `IfNotPresent`.
```console
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane --set image.pullPolicy=IfNotPresent
```
### Settings File
Alternatively, a YAML file that specifies the values for the above parameters (`values.yaml`) can be provided when installing the chart.
```console
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane -f values.yaml
```
Here are the sample settings to get you started.
```yaml
replicas: 1
deploymentStrategy: RollingUpdate
image:
  repository: crossplane/crossplane
  tag: master
  pullPolicy: Always
imagePullSecrets:
- dockerhub
```

Binary file not shown (before: 1.2 MiB).

Binary file not shown (before: 292 KiB).

@ -1,310 +0,0 @@
(Source of a deleted SVG image file omitted.)
<g class="st2">
<g>
<defs>
<rect id="SVGID_7_" x="16.08" y="24.9" width="197.24" height="197.24"/>
</defs>
<clipPath id="SVGID_8_">
<use xlink:href="#SVGID_7_" style="overflow:visible;"/>
</clipPath>
<g class="st3">
<defs>
<rect id="SVGID_9_" x="9.23" y="92.99" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -54.1638 118.2926)" width="212.95" height="63.07"/>
</defs>
<clipPath id="SVGID_10_">
<use xlink:href="#SVGID_9_" style="overflow:visible;"/>
</clipPath>
<g class="st4">
<defs>
<rect id="SVGID_11_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_12_">
<use xlink:href="#SVGID_11_" style="overflow:visible;"/>
</clipPath>
<rect x="7.4" y="16.22" class="st5" width="216.62" height="216.62"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_13_" d="M55.55,69.64c-0.04,0.93-0.06,1.87-0.06,2.82c0,1.04,0.02,2.07,0.08,3.09c-0.02,0.51-0.08,1-0.08,1.52
v99.64c0,19.05,15.59,34.64,34.64,34.64h52.14c19.06,0,34.65-15.59,34.65-34.64V77.07c0-0.58-0.06-1.14-0.09-1.71
c0.05-0.96,0.07-1.93,0.07-2.89c0-0.95-0.02-1.89-0.06-2.82c-1.47-32.22-28.06-57.88-60.64-57.88
C83.61,11.76,57.02,37.42,55.55,69.64z"/>
</defs>
<clipPath id="SVGID_14_">
<use xlink:href="#SVGID_13_" style="overflow:visible;"/>
</clipPath>
<g class="st6">
<g>
<defs>
<rect id="SVGID_15_" x="-37.52" y="-28.7" width="207.96" height="207.96"/>
</defs>
<clipPath id="SVGID_16_">
<use xlink:href="#SVGID_15_" style="overflow:visible;"/>
</clipPath>
<g class="st7">
<defs>
<rect id="SVGID_17_" x="-40.95" y="35.1" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -33.3744 68.1028)" width="212.95" height="78.48"/>
</defs>
<clipPath id="SVGID_18_">
<use xlink:href="#SVGID_17_" style="overflow:visible;"/>
</clipPath>
<g class="st8">
<defs>
<rect id="SVGID_19_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_20_">
<use xlink:href="#SVGID_19_" style="overflow:visible;"/>
</clipPath>
<rect x="-48.24" y="-39.42" class="st9" width="227.51" height="227.51"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_21_" d="M55.55,69.64c-0.04,0.93-0.06,1.87-0.06,2.82c0,1.04,0.02,2.07,0.08,3.09c-0.02,0.51-0.08,1-0.08,1.52
v99.64c0,19.05,15.59,34.64,34.64,34.64h52.14c19.06,0,34.65-15.59,34.65-34.64V77.07c0-0.58-0.06-1.14-0.09-1.71
c0.05-0.96,0.07-1.93,0.07-2.89c0-0.95-0.02-1.89-0.06-2.82c-1.47-32.22-28.06-57.88-60.64-57.88
C83.61,11.76,57.02,37.42,55.55,69.64z"/>
</defs>
<clipPath id="SVGID_22_">
<use xlink:href="#SVGID_21_" style="overflow:visible;"/>
</clipPath>
<g class="st10">
<g>
<defs>
<rect id="SVGID_23_" x="61.1" y="69.92" width="197.24" height="197.24"/>
</defs>
<clipPath id="SVGID_24_">
<use xlink:href="#SVGID_23_" style="overflow:visible;"/>
</clipPath>
<g class="st11">
<defs>
<rect id="SVGID_25_" x="53.98" y="137.74" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -72.6974 163.0359)" width="212.95" height="63.07"/>
</defs>
<clipPath id="SVGID_26_">
<use xlink:href="#SVGID_25_" style="overflow:visible;"/>
</clipPath>
<g class="st12">
<defs>
<rect id="SVGID_27_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_28_">
<use xlink:href="#SVGID_27_" style="overflow:visible;"/>
</clipPath>
<rect x="52.14" y="60.96" class="st13" width="216.62" height="216.62"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_29_" d="M104.38,211.52l26.4,26.39V211.3C130.78,211.3,103.72,211.52,104.38,211.52"/>
</defs>
<clipPath id="SVGID_30_">
<use xlink:href="#SVGID_29_" style="overflow:visible;"/>
</clipPath>
<rect x="93.65" y="200.58" class="st14" width="47.85" height="48.06"/>
</g>
<g>
<defs>
<path id="SVGID_31_" d="M307.52,195.1c-38.8,0-70.21-31.6-70.21-70.41c0-38.6,31.4-70.21,70.21-70.21c20.2,0,39.6,8.8,52.81,24
c4.2,5,3.8,12.2-1,16.4c-4.8,4.4-12.2,3.8-16.4-1c-9-10.2-21.8-16-35.4-16c-25.8,0-47.01,21-47.01,46.8
c0,26,21.2,47.01,47.01,47.01c13.6,0,26.4-5.8,35.4-16c4.2-4.8,11.6-5.4,16.4-1c4.8,4.2,5.2,11.4,1,16.4
C347.12,186.3,327.72,195.1,307.52,195.1"/>
</defs>
<use xlink:href="#SVGID_31_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_32_">
<use xlink:href="#SVGID_31_" style="overflow:visible;"/>
</clipPath>
<rect x="226.59" y="43.77" class="st15" width="147.35" height="162.05"/>
</g>
<g>
<defs>
<path id="SVGID_33_" d="M438.53,98.89c0,6.4-5.2,11.6-11.8,11.6c-12.8,0-22.4,10.4-22.4,24.6v48.41c0,6.4-5.2,11.6-11.6,11.6
c-6.4,0-11.6-5.2-11.6-11.6V96.49c0-6.4,5.2-11.6,11.6-11.6c5.4,0,9.8,3.6,11.2,8.6c6.8-4,14.6-6.2,22.8-6.2
C433.33,87.29,438.53,92.49,438.53,98.89"/>
</defs>
<use xlink:href="#SVGID_33_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_34_">
<use xlink:href="#SVGID_33_" style="overflow:visible;"/>
</clipPath>
<rect x="370.4" y="74.17" class="st16" width="78.84" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_35_" d="M497.53,195.7c-30.4,0-55-24.8-55-55c0-30.4,24.6-55.21,55-55.21c30.4,0,55.21,24.8,55.21,55.21
C552.74,170.9,527.94,195.7,497.53,195.7 M497.53,108.69c-17.6,0-31.8,14.4-31.8,32c0,17.4,14.2,31.8,31.8,31.8
c17.6,0,31.8-14.4,31.8-31.8C529.34,123.09,515.14,108.69,497.53,108.69"/>
</defs>
<use xlink:href="#SVGID_35_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_36_">
<use xlink:href="#SVGID_35_" style="overflow:visible;"/>
</clipPath>
<rect x="431.81" y="74.77" class="st17" width="131.65" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_37_" d="M571.94,174.9c-2.8-5.8-0.2-12.8,5.6-15.4c6-2.8,12.8-0.2,15.4,5.6c1.6,3.2,6,6.8,13.8,6.8
c10.8,0,14.6-6.6,14.6-11c0-6-1.6-7.8-17.2-11.8c-7-1.6-14.2-3.4-20.4-7.4c-8.4-5.6-13-14-13-24.4c0-8.2,3.6-16.4,9.8-22.4
c6.6-6.4,15.8-10,26.2-10c14.8,0,27.41,7.2,32.8,19c2.8,5.8,0.2,12.6-5.6,15.4c-5.8,2.8-12.8,0.2-15.4-5.6
c-1.2-2.6-5-5.6-11.8-5.6c-9.2,0-12.6,5.8-12.6,9.2c0,4,0.8,5.6,15.6,9.4c13,3.2,34.8,8.6,34.8,34.2c0,8.6-3.6,17.2-10,23.6
c-5,4.8-13.8,10.6-27.8,10.6C590.94,195.1,577.54,187.3,571.94,174.9"/>
</defs>
<use xlink:href="#SVGID_37_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_38_">
<use xlink:href="#SVGID_37_" style="overflow:visible;"/>
</clipPath>
<rect x="560.02" y="74.17" class="st18" width="95.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_39_" d="M663.75,174.9c-2.8-5.8-0.2-12.8,5.6-15.4c6-2.8,12.8-0.2,15.4,5.6c1.6,3.2,6,6.8,13.8,6.8
c10.8,0,14.6-6.6,14.6-11c0-6-1.6-7.8-17.2-11.8c-7-1.6-14.2-3.4-20.4-7.4c-8.4-5.6-13-14-13-24.4c0-8.2,3.6-16.4,9.8-22.4
c6.6-6.4,15.8-10,26.2-10c14.81,0,27.41,7.2,32.8,19c2.8,5.8,0.2,12.6-5.6,15.4c-5.8,2.8-12.8,0.2-15.4-5.6
c-1.2-2.6-5-5.6-11.8-5.6c-9.2,0-12.6,5.8-12.6,9.2c0,4,0.8,5.6,15.6,9.4c13,3.2,34.8,8.6,34.8,34.2c0,8.6-3.6,17.2-10,23.6
c-5,4.8-13.8,10.6-27.8,10.6C682.75,195.1,669.35,187.3,663.75,174.9"/>
</defs>
<use xlink:href="#SVGID_39_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_40_">
<use xlink:href="#SVGID_39_" style="overflow:visible;"/>
</clipPath>
<rect x="651.83" y="74.17" class="st19" width="95.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_41_" d="M859.17,139.9c0,14.8-5,28.4-14.4,38.61c-9.8,10.6-23.2,16.6-38,16.6c-10.6,0-20.6-3.2-29-8.8v47.2
c0,6.4-5.4,11.6-11.8,11.6c-6.4,0-11.6-5.2-11.6-11.6V96.49c0-6.4,5.2-11.6,11.6-11.6c5.4,0,10.2,3.8,11.4,9
c8.6-5.8,18.8-9,29.4-9c14.8,0,28.2,5.8,38,16.4C854.17,111.49,859.17,125.29,859.17,139.9 M835.96,139.9
c0-18.4-12.2-31.8-29.2-31.8c-16.8,0-29,13.4-29,31.8c0,18.4,12.2,31.8,29,31.8C823.77,171.7,835.96,158.3,835.96,139.9"/>
</defs>
<use xlink:href="#SVGID_41_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_42_">
<use xlink:href="#SVGID_41_" style="overflow:visible;"/>
</clipPath>
<rect x="743.64" y="74.17" class="st20" width="126.25" height="181.65"/>
</g>
<g>
<defs>
<path id="SVGID_43_" d="M889.77,195.1c-6.4,0-11.6-5.2-11.6-11.6V66.29c0-6.4,5.2-11.6,11.6-11.6c6.4,0,11.8,5.2,11.8,11.6V183.5
C901.57,189.9,896.17,195.1,889.77,195.1"/>
</defs>
<use xlink:href="#SVGID_43_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_44_">
<use xlink:href="#SVGID_43_" style="overflow:visible;"/>
</clipPath>
<rect x="867.45" y="43.97" class="st21" width="44.84" height="161.85"/>
</g>
<g>
<defs>
<path id="SVGID_45_" d="M1025.38,96.49v87.01c0,6.4-5.2,11.6-11.6,11.6c-5.6,0-10.2-3.8-11.4-9c-8.4,5.8-18.6,9-29.4,9
c-14.8,0-28.2-5.8-38.01-16.6c-9.2-10-14.4-23.8-14.4-38.4c0-14.8,5.2-28.6,14.4-38.61c9.8-10.8,23.21-16.6,38.01-16.6
c10.8,0,21,3.2,29.4,9c1.2-5.2,5.8-9,11.4-9C1020.18,84.89,1025.38,90.09,1025.38,96.49 M1002.18,140.1c0-18.6-12.4-32-29.2-32
c-17,0-29.2,13.4-29.2,32c0,18.4,12.2,31.8,29.2,31.8C989.78,171.9,1002.18,158.5,1002.18,140.1"/>
</defs>
<use xlink:href="#SVGID_45_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_46_">
<use xlink:href="#SVGID_45_" style="overflow:visible;"/>
</clipPath>
<rect x="909.85" y="74.17" class="st22" width="126.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_47_" d="M1136.79,132.7v50.8c0,6.4-5.2,11.6-11.8,11.6c-6.4,0-11.6-5.2-11.6-11.6v-50.8
c0-11.8-6.6-24.6-21.4-24.6c-13.4,0-23.4,10.6-23.4,24.6v0.8v0.8v49.2c0,6.4-5.2,11.6-11.6,11.6c-6.4,0-11.6-5.2-11.6-11.6v-49.4
v-1.4V96.49c0-6.4,5.2-11.6,11.6-11.6c4.8,0,8.8,2.8,10.6,6.8c7-4.4,15.4-6.8,24.4-6.8
C1117.39,84.89,1136.79,105.49,1136.79,132.7"/>
</defs>
<use xlink:href="#SVGID_47_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_48_">
<use xlink:href="#SVGID_47_" style="overflow:visible;"/>
</clipPath>
<rect x="1034.66" y="74.17" class="st23" width="112.85" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_49_" d="M1207.2,196.1c-14.8,0-28.2-6.4-38.01-17.2c-9.4-10-14.4-23.81-14.4-38.4c0-31.61,22.2-55.21,51.4-55.21
c29.41,0,50.8,23.2,50.8,55.21c0,6.4-5.2,11.6-11.8,11.6h-65.41c4,12.2,14.4,20.8,27.4,20.8c7.83,0,14.48-1.65,19.23-6.21
c1.44-1.38,2.7-3.03,3.77-4.99c3.4-5.6,10.6-7.2,16-4c5.6,3.4,7.2,10.6,4,16C1241.2,189.9,1225.6,196.1,1207.2,196.1
M1179.59,128.7h52.61c-4-13.8-15-20.2-26-20.2C1195.4,108.49,1183.79,114.89,1179.59,128.7"/>
</defs>
<use xlink:href="#SVGID_49_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_50_">
<use xlink:href="#SVGID_49_" style="overflow:visible;"/>
</clipPath>
<rect x="1144.07" y="74.57" class="st24" width="123.65" height="132.25"/>
</g>
</g>
</svg>

(SVG image, 14 KiB)

@@ -1,17 +0,0 @@
---
title: Quick Start Guide
toc: true
weight: 210
---
# Quick Start Guide
This quick start will demonstrate using Crossplane to deploy a portable stateful workload in the cloud provider of your choice.
It will first dynamically provision a Kubernetes cluster within the cloud provider environment, then deploy a stateful application and its database to that same environment.
The database will also be dynamically provisioned, using a managed service hosted by the cloud provider.
The workload will be deployed into the target Kubernetes cluster and configured to consume the database resource in a completely portable way.
The general steps for this example are as follows:
1. Install Crossplane so it is ready to manage resources on your behalf: [Install Crossplane](install-crossplane.md)
1. Set up a cloud provider and add it to Crossplane: [Adding a Cloud Provider](cloud-providers.md)
1. Deploy a portable workload to the cloud provider: [Deploying Workloads](deploy.md)


@@ -1,90 +0,0 @@
---
title: Running Resources
toc: true
weight: 350
indent: true
---
# Running Resources
Crossplane enables you to run a number of different resources in a portable, cloud-agnostic way, allowing you to author an application that runs without modification on multiple environments and cloud providers.
A single Crossplane enables the provisioning and full-lifecycle management of infrastructure across a wide range of providers, vendors, regions, and offerings.
## Running Databases
Database managed services can be statically or dynamically provisioned by Crossplane in AWS, GCP, and Azure.
An application developer simply has to specify their general need for a database such as MySQL, without any specific knowledge of what environment that database will run in or even what specific type of database it will be at runtime.
The following sample is all the application developer needs to specify in order to get the correct MySQL database (CloudSQL, RDS, Azure MySQL) provisioned and configured for their application:
```yaml
apiVersion: storage.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
name: demo-mysql
spec:
classReference:
name: standard-mysql
namespace: crossplane-system
engineVersion: "5.7"
```
The cluster administrator specifies a resource class that acts as a template with the implementation details and policy specific to the environment that the generic MySQL resource is being deployed to.
This enables the database to be dynamically provisioned at deployment time without the application developer needing to know any of the details, which promotes portability and reusability.
An example resource class that will provision a CloudSQL instance in GCP in order to fulfill the application's general MySQL requirement would look like this:
```yaml
apiVersion: core.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
name: standard-mysql
namespace: crossplane-system
parameters:
tier: db-n1-standard-1
region: us-west2
storageType: PD_SSD
provisioner: cloudsqlinstance.database.gcp.crossplane.io/v1alpha1
providerRef:
name: gcp-provider
reclaimPolicy: Delete
```
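For comparison, an administrator targeting AWS could satisfy the same `standard-mysql` claim with an RDS-backed class instead. The sketch below is illustrative only: the `parameters` keys and the exact `provisioner` string should be verified against the AWS provider documentation for your Crossplane version, and `aws-provider` is a placeholder for whatever provider object you created.

```yaml
apiVersion: core.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
  name: standard-mysql
  namespace: crossplane-system
parameters:
  # Illustrative RDS settings; confirm key names for your version.
  class: db.t2.small
  size: "20"
provisioner: rdsinstance.database.aws.crossplane.io/v1alpha1
providerRef:
  name: aws-provider
reclaimPolicy: Delete
```

Because the claim only names the class, switching providers is an administrator-side change; the application developer's `MySQLInstance` stays the same.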
## Running Kubernetes Clusters
Kubernetes clusters are another type of resource that can be dynamically provisioned using a generic resource claim by the application developer and an environment specific resource class by the cluster administrator.
Generic Kubernetes cluster resource claim created by the application developer:
```yaml
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
name: demo-cluster
namespace: crossplane-system
spec:
classReference:
name: standard-cluster
namespace: crossplane-system
```
Environment specific GKE cluster resource class created by the admin:
```yaml
apiVersion: core.crossplane.io/v1alpha1
kind: ResourceClass
metadata:
name: standard-cluster
namespace: crossplane-system
parameters:
machineType: n1-standard-1
numNodes: "1"
zone: us-central1-a
provisioner: gkecluster.compute.gcp.crossplane.io/v1alpha1
providerRef:
name: gcp-provider
reclaimPolicy: Delete
```
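Once both objects are created, you can watch the claim bind to a dynamically provisioned cluster. This is a sketch only; the exact fields in the status depend on your Crossplane version:

```console
kubectl -n crossplane-system get resourceclass standard-cluster
kubectl -n crossplane-system get kubernetescluster demo-cluster -o yaml
```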
## Future support
As the project continues to grow with support from the community, support for more resources will be added.
This includes all of the essential managed services from cloud providers as well as local or in-cluster services that deploy using the operator pattern.
Crossplane will provide support for serverless, databases, object storage (buckets), analytics, big data, AI, ML, message queues, key-value stores, and more.


@@ -1,98 +0,0 @@
---
title: Troubleshooting
toc: true
weight: 360
indent: true
---
# Troubleshooting
* [Crossplane Logs](#crossplane-logs)
* [Resource Status and Conditions](#resource-status-and-conditions)
* [Pausing Crossplane](#pausing-crossplane)
* [Deleting a Resource Hangs](#deleting-a-resource-hangs)
## Crossplane Logs
The first place to look to get more information or investigate a failure would be in the Crossplane pod logs, which should be running in the `crossplane-system` namespace.
To get the current Crossplane logs, run the following:
```console
kubectl -n crossplane-system logs $(kubectl -n crossplane-system get pod -l app=crossplane -o jsonpath='{.items[0].metadata.name}')
```
## Resource Status and Conditions
All of the objects that represent managed resources such as databases, clusters, etc. have a `status` section that can give good insight into the current state of that particular object.
In general, simply getting the `yaml` output of a Crossplane object will give insightful information about its condition:
```console
kubectl get <resource-type> -o yaml
```
For example, to get complete information about an Azure AKS cluster object, the following command will generate the below sample (truncated) output:
```console
> kubectl -n crossplane-system get akscluster -o yaml
...
status:
Conditions:
- LastTransitionTime: 2018-12-04T08:03:01Z
Message: 'failed to start create operation for AKS cluster aks-demo-cluster:
containerservice.ManagedClustersClient#CreateOrUpdate: Failure sending request:
StatusCode=400 -- Please see https://aka.ms/acs-sp-help for more details."'
Reason: failed to create cluster
Status: "False"
Type: Failed
- LastTransitionTime: 2018-12-04T08:03:14Z
Message: ""
Reason: ""
Status: "False"
Type: Creating
- LastTransitionTime: 2018-12-04T09:59:43Z
Message: ""
Reason: ""
Status: "True"
Type: Ready
bindingPhase: Bound
endpoint: crossplane-aks-14af6e93.hcp.centralus.azmk8s.io
state: Succeeded
```
We can see a few conditions in that AKS cluster's history.
It first encountered a failure, then it moved into the `Creating` state, then it finally became `Ready` later on.
Conditions with `Status: "True"` are currently active, while conditions with `Status: "False"` occurred in the past but are no longer active.
## Pausing Crossplane
Sometimes, it can be useful to pause Crossplane if you want to stop it from actively attempting to manage your resources, for instance if you have encountered a bug.
To pause Crossplane without deleting all of its resources, run the following command to simply scale down its deployment:
```console
kubectl -n crossplane-system scale --replicas=0 deployment/crossplane
```
Once you have been able to rectify the problem or smooth things out, you can unpause Crossplane simply by scaling its deployment back up:
```console
kubectl -n crossplane-system scale --replicas=1 deployment/crossplane
```
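As a quick sanity check, you can confirm the scale operation took effect by inspecting the deployment (same name as used in the commands above):

```console
kubectl -n crossplane-system get deployment crossplane
```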
## Deleting a Resource Hangs
The resources that Crossplane manages will automatically be cleaned up so as not to leave anything running behind.
This is accomplished by using finalizers, but in certain scenarios, the finalizer can prevent the Kubernetes object from getting deleted.
To deal with this, we essentially want to patch the object to remove its finalizer, which will then allow it to be deleted completely.
Note that this won't necessarily delete the external resource that Crossplane was managing, so you will want to go to your cloud provider's console and look there for any lingering resources to clean up.
In general, a finalizer can be removed from an object with this command:
```console
kubectl patch <resource-type> <resource-name> -p '{"metadata":{"finalizers": []}}' --type=merge
```
For example, for a Workload object (`workloads.compute.crossplane.io`) named `test-workload`, you can remove its finalizer with:
```console
kubectl patch workloads.compute.crossplane.io test-workload -p '{"metadata":{"finalizers": []}}' --type=merge
```
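Before patching, it can help to confirm that a finalizer is actually what is blocking deletion. For example, using the same hypothetical `test-workload` object:

```console
kubectl get workloads.compute.crossplane.io test-workload -o jsonpath='{.metadata.finalizers}'
```

If this prints a non-empty list while the object has a `deletionTimestamp`, the patch above should unblock the delete.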


@@ -1,229 +0,0 @@
# Deploying a WordPress Workload on AWS
This guide will walk you through how to use Crossplane to deploy a stateful workload in a portable way to AWS.
In this environment, the following components will be dynamically provisioned and configured during this guide:
* EKS Kubernetes cluster
* RDS MySQL database
* WordPress application
## Pre-requisites
Before starting this guide, you should have already [configured your AWS account](../../cloud-providers/aws/aws-provider.md) for usage by Crossplane.
You should have a `~/.aws/credentials` file on your local filesystem.
## Administrator Tasks
This section covers the tasks performed by the cluster or cloud administrator, which includes:
- Import AWS provider credentials
- Define Resource classes for cluster and database resources
- Create all EKS pre-requisite artifacts
- Create a target EKS Kubernetes cluster (using dynamic provisioning with the cluster resource class)
**Note**: all artifacts created by the administrator are stored/hosted in the `crossplane-system` namespace, which has
restricted access, i.e. `Application Owner(s)` should not have access to them.
For the next steps, make sure your `kubectl` context points to the cluster where `Crossplane` was deployed.
### Create credentials
1. Get base64 encoded credentials with `cat ~/.aws/credentials|base64|tr -d '\n'`
1. Replace `BASE64ENCODED_AWS_PROVIDER_CREDS` in `cluster/examples/workloads/wordpress-aws/provider.yaml` with value from previous step.
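The two steps above can be exercised end-to-end against throwaway files. The paths and credential contents below are illustrative stand-ins, not the real `~/.aws/credentials` or the chart's `provider.yaml`; note the use of `|` as the sed delimiter, since base64 output can contain `/`:

```shell
# Stand-ins for ~/.aws/credentials and provider.yaml (illustrative only).
printf '[default]\naws_access_key_id = AKIAEXAMPLE\n' > /tmp/aws-creds-demo
printf 'credentials: BASE64ENCODED_AWS_PROVIDER_CREDS\n' > /tmp/provider-demo.yaml

# Step 1: base64 encode with newlines stripped, as in the guide.
creds=$(base64 < /tmp/aws-creds-demo | tr -d '\n')

# Step 2: substitute the placeholder into the provider manifest.
sed "s|BASE64ENCODED_AWS_PROVIDER_CREDS|$creds|g" /tmp/provider-demo.yaml
```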
### Configure EKS Cluster Pre-requisites
EKS cluster deployment is somewhat of an arduous process right now.
A number of artifacts and configurations need to be set up within the AWS console first before proceeding with the provisioning of an EKS cluster using Crossplane.
We anticipate that AWS will make improvements on this user experience in the near future.
#### Create a named keypair
* You can either reuse an existing ec2 key pair or create a new key pair with [these steps](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
* Replace your key pair name in `cluster/examples/workloads/wordpress-aws/provider.yaml` in `EKS_WORKER_KEY_NAME`
#### Create your Amazon EKS Service Role
[Original Source Guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
1. Open the [IAM console](https://console.aws.amazon.com/iam/).
1. Choose Roles, then Create role.
1. Choose EKS from the list of services, then Allows Amazon EKS to manage your clusters on your behalf for your use case, then Next: Permissions.
1. Choose Next: Review.
1. For Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.
1. Replace `EKS_ROLE_ARN` in `cluster/examples/workloads/wordpress-aws/provider.yaml` with role arn from previous step.
#### Create your Amazon EKS Cluster VPC
[Original Source Guide](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html)
1. Open the [AWS CloudFormation console](https://console.aws.amazon.com/cloudformation).
1. From the navigation bar, select a Region that supports Amazon EKS.
> **Note**: Amazon EKS is available in the following Regions at this time:
> * US West (Oregon) (us-west-2)
> * US East (N. Virginia) (us-east-1)
> * EU (Ireland) (eu-west-1)
1. Choose Create stack.
1. For Choose a template, select Specify an Amazon S3 template URL.
1. Paste the following URL into the text area and choose Next:
```
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-11-07/amazon-eks-vpc-sample.yaml
```
1. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
```
* Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.
* VpcBlock: Choose a CIDR range for your VPC. You may leave the default value.
* Subnet01Block: Choose a CIDR range for subnet 1. You may leave the default value.
* Subnet02Block: Choose a CIDR range for subnet 2. You may leave the default value.
* Subnet03Block: Choose a CIDR range for subnet 3. You may leave the default value.
```
1. (Optional) On the Options page, tag your stack resources. Choose Next.
1. On the Review page, choose Create.
1. When your stack is created, select it in the console and choose Outputs.
1. Replace `EKS_VPC`, `EKS_ROLE_ARN`, `EKS_SUBNETS`, `EKS_SECURITY_GROUP` in `cluster/examples/workloads/wordpress-aws/provider.yaml` with values from previous step (vpcId, subnetIds, securityGroupIds). Note `EKS_SECURITY_GROUP` needs to be replaced twice in file.
1. Replace `REGION` in `cluster/examples/workloads/wordpress-aws/provider.yaml` with the region you selected in VPC creation.
#### Create an RDS subnet group
1. Navigate to the AWS console in the same region as the EKS cluster
1. Navigate to `RDS` service
1. Navigate to `Subnet groups` in left hand pane
1. Click `Create DB Subnet Group`
1. Name your subnet group, e.g. `eks-db-subnets`
1. Select the VPC created in the EKS VPC step
1. Click `Add all subnets related to this VPC`
1. Click Create
1. Replace `RDS_SUBNET_GROUP` in `cluster/examples/workloads/wordpress-aws/provider.yaml` with the `DBSubnetgroup` name you just created.
#### Create an RDS Security Group (example only)
**Note**: This will make your RDS instance visible from anywhere on the internet.
This is for **EXAMPLE PURPOSES ONLY**, and is **NOT RECOMMENDED** for production systems.
1. Navigate to ec2 in the region of the EKS cluster
1. Navigate to security groups
1. Select the same VPC from the EKS cluster.
1. On the Inbound Rules tab, choose Edit.
- For Type, choose `MYSQL/Aurora`
- For Port Range, type `3306`
- For Source, choose `Anywhere` from drop down or type: `0.0.0.0/0`
1. Choose Add another rule if you need to add more IP addresses or different port ranges.
1. Replace `RDS_SECURITY_GROUP` in `cluster/examples/workloads/wordpress-aws/provider.yaml` with the security group we just created.
### Deploy all Workload Resources
Now deploy all the workload resources, including the RDS database and EKS cluster, with the following commands:
Create provider:
```console
kubectl create -f cluster/examples/workloads/wordpress-aws/provider.yaml
```
Create cluster:
```console
kubectl create -f cluster/examples/workloads/wordpress-aws/cluster.yaml
```
It will take a while (~15 minutes) for the EKS cluster to be deployed and become ready.
You can keep an eye on its status with the following command:
```console
kubectl -n crossplane-system get ekscluster -o custom-columns=NAME:.metadata.name,STATE:.status.state,CLUSTERNAME:.status.clusterName,ENDPOINT:.status.endpoint,LOCATION:.spec.location,CLUSTERCLASS:.spec.classRef.name,RECLAIMPOLICY:.spec.reclaimPolicy
```
Once the cluster is done provisioning, you should see output similar to the following (note the `STATE` field is `ACTIVE` and the `ENDPOINT` field has a value):
```console
NAME STATE CLUSTERNAME ENDPOINT LOCATION CLUSTERCLASS RECLAIMPOLICY
eks-8f1f32c7-f6b4-11e8-844c-025000000001 ACTIVE <none> https://B922855C944FC0567E9050FCD75B6AE5.yl4.us-west-2.eks.amazonaws.com <none> standard-cluster Delete
```
## Application Developer Tasks
This section covers the tasks performed by the application developer, which includes:
- Define Workload in terms of Resources and Payload (Deployment/Service) which will be deployed into the target Kubernetes Cluster
- Define the dependency resource requirements, in this case a `MySQL` database
Now that the EKS cluster is ready, let's begin deploying the workload as the application developer:
```console
kubectl -n demo create -f cluster/examples/workloads/wordpress-aws/workload.yaml
```
This will also take a while to complete, since the MySQL database needs to be deployed before the WordPress pod can consume it.
You can follow along with the MySQL database deployment with the following:
```console
kubectl -n crossplane-system get rdsinstance -o custom-columns=NAME:.metadata.name,STATUS:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.version
```
Once the `STATUS` column is `available` like below, then the WordPress pod should be able to connect to it:
```console
NAME STATUS CLASS VERSION
mysql-2a0be04f-f748-11e8-844c-025000000001 available standard-mysql <none>
```
Now we can watch the WordPress pod come online and a public IP address will get assigned to it:
```console
kubectl -n demo get workload -o custom-columns=NAME:.metadata.name,CLUSTER:.spec.targetCluster.name,NAMESPACE:.spec.targetNamespace,DEPLOYMENT:.spec.targetDeployment.metadata.name,SERVICE-EXTERNAL-IP:.status.service.loadBalancer.ingress[0].ip
```
When a public IP address has been assigned, you'll see output similar to the following:
```console
NAME CLUSTER NAMESPACE DEPLOYMENT SERVICE-EXTERNAL-IP
demo demo-cluster demo wordpress 104.43.240.15
```
Once WordPress is running and has a public IP address through its service, we can get the URL with the following command:
```console
echo "http://$(kubectl -n demo get workload demo -o jsonpath='{.status.service.loadBalancer.ingress[0].ip}')"
```
Paste that URL into your browser and you should see WordPress running and ready for you to walk through the setup experience. You may need to wait a few minutes for this to become active in the AWS load balancer.
## Connect to your EKSCluster (optional)
Requires:
* awscli
* aws-iam-authenticator
Please see [Install instructions](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html) section: `Install and Configure kubectl for Amazon EKS`
When the EKSCluster is up and running, you can update your kubeconfig with:
```console
aws eks update-kubeconfig --name <replace-me-eks-cluster-name>
```
The node pool is created after the master is up, so expect to wait a few more minutes; eventually you can see that the nodes have joined with:
```console
kubectl config use-context <context-from-last-command>
kubectl get nodes
```
## Clean-up
First delete the workload, which will delete WordPress and the MySQL database:
```console
kubectl -n demo delete -f cluster/examples/workloads/wordpress-aws/workload.yaml
```
Then delete the EKS cluster:
```console
kubectl delete -f cluster/examples/workloads/wordpress-aws/cluster.yaml
```
Finally, delete the provider credentials:
```console
kubectl delete -f cluster/examples/workloads/wordpress-aws/provider.yaml
```
> Note: There may still be an ELB that was not properly cleaned up, and you will need
to go to EC2 > ELBs and delete it manually.


@@ -1,128 +0,0 @@
# Deploying a WordPress Workload on Microsoft Azure
This guide will walk you through how to use Crossplane to deploy a stateful workload in a portable way to Azure.
In this environment, the following components will be dynamically provisioned and configured during this guide:
* AKS Kubernetes cluster
* Azure MySQL database
* WordPress application
## Pre-requisites
Before starting this guide, you should have already [configured your Azure account](../../cloud-providers/azure/azure-provider.md) for usage by Crossplane.
You should have a `crossplane-azure-provider-key.json` file on your local filesystem, preferably at the root of where you cloned the [Crossplane repo](https://github.com/crossplaneio/crossplane).
## Administrator Tasks
This section covers the tasks performed by the cluster or cloud administrator, which includes:
- Import Azure provider credentials
- Define Resource classes for cluster and database resources
- Create a target Kubernetes cluster (using dynamic provisioning with the cluster resource class)
**Note**: all artifacts created by the administrator are stored/hosted in the `crossplane-system` namespace, which has
restricted access, i.e. `Application Owner(s)` should not have access to them.
For the next steps, make sure your `kubectl` context points to the cluster where `Crossplane` was deployed.
- Create the Azure provider object in your cluster:
```console
sed "s/BASE64ENCODED_AZURE_PROVIDER_CREDS/`cat crossplane-azure-provider-key.json|base64|tr -d '\n'`/g;" cluster/examples/workloads/wordpress-azure/provider.yaml | kubectl create -f -
```
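The `sed` pipeline above follows a common pattern: base64-encode a credentials file with line wraps stripped, then substitute the result into a placeholder in a YAML template. Here is the pattern in isolation, with hypothetical file and placeholder names:

```bash
# dummy credentials file and template (hypothetical, for illustration only)
printf '{"accessKey":"example"}' > creds.json
printf 'credentials: PLACEHOLDER\n' > template.yaml

# encode without line wraps, then substitute into the template
B64=$(base64 < creds.json | tr -d '\n')
sed "s|PLACEHOLDER|${B64}|g" template.yaml
```

Note that base64 output can contain `/` and `+`, so using a delimiter like `|` for `sed` is safer than `/` when substituting encoded credentials.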
- Next, create the AKS cluster that will eventually be the target cluster for your Workload deployment:
```console
kubectl create -f cluster/examples/workloads/wordpress-azure/cluster.yaml
```
It will take a while (~15 minutes) for the AKS cluster to be deployed and become ready. You can keep an eye on its status with the following command:
```console
kubectl -n crossplane-system get akscluster -o custom-columns=NAME:.metadata.name,STATE:.status.state,CLUSTERNAME:.status.clusterName,ENDPOINT:.status.endpoint,LOCATION:.spec.location,CLUSTERCLASS:.spec.classRef.name,RECLAIMPOLICY:.spec.reclaimPolicy
```
Once the cluster is done provisioning, you should see output similar to the following (note the `STATE` field is `Succeeded` and the `ENDPOINT` field has a value):
```console
NAME STATE CLUSTERNAME ENDPOINT LOCATION CLUSTERCLASS RECLAIMPOLICY
aks-587762b3-f72b-11e8-bcbe-0800278fedb1 Succeeded aks-587762b3-f72b-11e8-bcbe-080 crossplane-aks-653c32ef.hcp.centralus.azmk8s.io Central US standard-cluster Delete
```
To recap the operations that we just performed as the administrator:
- Defined a `Provider` with Microsoft Azure service principal credentials
- Defined `ResourceClasses` for `KubernetesCluster` and `MySQLInstance`
- Provisioned (dynamically) an AKS Cluster using the `ResourceClass`
## Application Developer Tasks
This section covers the tasks performed by the application developer, which includes:
- Define Workload in terms of Resources and Payload (Deployment/Service) which will be deployed into the target Kubernetes Cluster
- Define the dependency resource requirements, in this case a `MySQL` database
Let's begin deploying the workload as the application developer:
- Now that the target AKS cluster is ready, we can deploy the Workload that contains all the WordPress resources, including the SQL database, with the following single command:
```console
kubectl create -f cluster/examples/workloads/wordpress-azure/workload.yaml
```
This will also take a while to complete, since the MySQL database needs to be deployed before the WordPress pod can consume it.
You can follow along with the MySQL database deployment with the following:
```console
kubectl -n crossplane-system get mysqlserver -o custom-columns=NAME:.metadata.name,STATUS:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.version
```
Once the `STATUS` column shows `Ready`, as below, the WordPress pod should be able to connect to it:
```console
NAME STATUS CLASS VERSION
mysql-58425bda-f72d-11e8-bcbe-0800278fedb1 Ready standard-mysql 5.7
```
- Now we can watch the WordPress pod come online and see a public IP address get assigned to it:
```console
kubectl get workload -o custom-columns=NAME:.metadata.name,CLUSTER:.spec.targetCluster.name,NAMESPACE:.spec.targetNamespace,DEPLOYMENT:.spec.targetDeployment.metadata.name,SERVICE-EXTERNAL-IP:.status.service.loadBalancer.ingress[0].ip
```
When a public IP address has been assigned, you'll see output similar to the following:
```console
NAME CLUSTER NAMESPACE DEPLOYMENT SERVICE-EXTERNAL-IP
test-workload demo-cluster demo wordpress 104.43.240.15
```
- Once WordPress is running and has a public IP address through its service, we can get the URL with the following command:
```console
echo "http://$(kubectl get workload test-workload -o jsonpath='{.status.service.loadBalancer.ingress[0].ip}')"
```
- Paste that URL into your browser and you should see WordPress running and ready for you to walk through the setup experience.
## Clean-up
First delete the workload, which will delete WordPress and the MySQL database:
```console
kubectl delete -f cluster/examples/workloads/wordpress-azure/workload.yaml
```
Then delete the AKS cluster:
```console
kubectl delete -f cluster/examples/workloads/wordpress-azure/cluster.yaml
```
Finally, delete the provider credentials:
```console
kubectl delete -f cluster/examples/workloads/wordpress-azure/provider.yaml
```
# Deploying a WordPress Workload on GCP
This guide will walk you through how to use Crossplane to deploy a stateful workload in a portable way to GCP.
In this environment, the following components will be dynamically provisioned and configured during this guide:
* GKE Kubernetes cluster
* CloudSQL database
* WordPress application
## Pre-requisites
Before starting this guide, you should have already [configured your GCP account](../../cloud-providers/gcp/gcp-provider.md) for usage by Crossplane.
You should have a `key.json` file on your local filesystem, preferably at the root of where you cloned the [Crossplane repo](https://github.com/crossplaneio/crossplane).
## Administrator Tasks
This section covers the tasks performed by the cluster or cloud administrator, which includes:
- Import GCP provider credentials
- Define Resource classes for cluster and database resources
- Create a target Kubernetes cluster (using dynamic provisioning with the cluster resource class)
**Note**: all artifacts created by the administrator are stored/hosted in the `crossplane-system` namespace, which has
restricted access, i.e. `Application Owner(s)` should not have access to them.
For the next steps, make sure your `kubectl` context points to the cluster where `Crossplane` was deployed.
- Export Project ID
**NOTE**: you can skip this step if you generated the GCP Service Account using `gcloud`
```bash
export DEMO_PROJECT_ID=[your-demo-project-id]
```
- Patch and Apply `provider.yaml`:
```bash
sed "s/BASE64ENCODED_CREDS/`cat key.json|base64 | tr -d '\n'`/g;s/DEMO_PROJECT_ID/$DEMO_PROJECT_ID/g" cluster/examples/workloads/wordpress-gcp/provider.yaml | kubectl create -f -
```
- Verify that GCP Provider is in `Ready` state
```bash
kubectl -n crossplane-system get providers.gcp.crossplane.io -o custom-columns=NAME:.metadata.name,STATUS:.status.Conditions[0].Type,PROJECT-ID:.spec.projectID
```
Your output should look similar to:
```bash
NAME STATUS PROJECT-ID
gcp-provider Ready [your-project-id]
```
- Verify that Resource Classes have been created
```bash
kubectl -n crossplane-system get resourceclass -o custom-columns=NAME:metadata.name,PROVISIONER:.provisioner,PROVIDER:.providerRef.name,RECLAIM-POLICY:.reclaimPolicy
```
Your output should be:
```bash
NAME PROVISIONER PROVIDER RECLAIM-POLICY
standard-cluster gkecluster.compute.gcp.crossplane.io/v1alpha1 gcp-provider Delete
standard-mysql cloudsqlinstance.database.gcp.crossplane.io/v1alpha1 gcp-provider Delete
```
- Create a target Kubernetes cluster where `Application Owner(s)` will deploy their `WorkLoad(s)`
As administrator, you will create a Kubernetes cluster, leveraging the Kubernetes cluster `ResourceClass` that was created earlier and `Crossplane`'s dynamic provisioning of Kubernetes clusters.
```bash
kubectl apply -f cluster/examples/workloads/wordpress-gcp/kubernetes.yaml
```
- Verify that Kubernetes Cluster resource was created
```bash
kubectl -n crossplane-system get kubernetescluster -o custom-columns=NAME:.metadata.name,CLUSTERCLASS:.spec.classReference.name,CLUSTERREF:.spec.resourceName.name
```
Your output should look similar to:
```bash
NAME CLUSTERCLASS CLUSTERREF
demo-gke-cluster standard-cluster gke-67419e79-f5b3-11e8-9cec-9cb6d08bde99
```
- Verify that the target GKE cluster was successfully created
```bash
kubectl -n crossplane-system get gkecluster -o custom-columns=NAME:.metadata.name,STATE:.status.state,CLUSTERNAME:.status.clusterName,ENDPOINT:.status.endpoint,LOCATION:.spec.zone,CLUSTERCLASS:.spec.classRef.name,RECLAIMPOLICY:.spec.reclaimPolicy
```
Your output should look similar to:
```bash
NAME STATE CLUSTERNAME ENDPOINT LOCATION CLUSTERCLASS RECLAIMPOLICY
gke-67419e79-f5b3-11e8-9cec-9cb6d08bde99 RUNNING gke-6742fe8d-f5b3-11e8-9cec-9cb6d08bde99 146.148.93.40 us-central1-a standard-cluster Delete
```
To recap the operations that we just performed as the administrator:
- Defined a `Provider` with Google Service Account credentials
- Defined `ResourceClasses` for `KubernetesCluster` and `MySQLInstance`
- Provisioned (dynamically) a GKE Cluster using the `ResourceClass`
## Application Developer Tasks
This section covers the tasks performed by the application developer, which includes:
- Define Workload in terms of Resources and Payload (Deployment/Service) which will be deployed into the target Kubernetes Cluster
- Define the dependency resource requirements, in this case a `MySQL` database
Let's begin deploying the workload as the application developer:
- Deploy workload
```bash
kubectl apply -f cluster/examples/workloads/wordpress-gcp/workload.yaml
```
- Wait for `MySQLInstance` to be in `Bound` State
You can check the status via:
```bash
kubectl get mysqlinstance -o custom-columns=NAME:.metadata.name,VERSION:.spec.engineVersion,STATE:.status.bindingPhase,CLASS:.spec.classReference.name
```
Your output should look like:
```bash
NAME VERSION STATE CLASS
demo 5.7 Bound standard-mysql
```
**Note**: to check on the concrete resource type status as `Administrator` you can run:
```bash
kubectl -n crossplane-system get cloudsqlinstance -o custom-columns=NAME:.metadata.name,STATUS:.status.state,CLASS:.spec.classRef.name,VERSION:.spec.databaseVersion
```
Your output should be similar to:
```bash
NAME STATUS CLASS VERSION
mysql-2fea0d8e-f5bb-11e8-9cec-9cb6d08bde99 RUNNABLE standard-mysql MYSQL_5_7
```
- Wait for `Workload` External IP Address
```bash
kubectl get workload -o custom-columns=NAME:.metadata.name,CLUSTER:.spec.targetCluster.name,NAMESPACE:.spec.targetNamespace,DEPLOYMENT:.spec.targetDeployment.metadata.name,SERVICE-EXTERNAL-IP:.status.service.loadBalancer.ingress[0].ip
```
**Note**: the `Workload` is defined in the Application Owner's (`default`) namespace
Your output should look similar to:
```bash
NAME CLUSTER NAMESPACE DEPLOYMENT SERVICE-EXTERNAL-IP
demo demo-gke-cluster demo wordpress 35.193.100.113
```
- Verify that `WordPress` service is accessible via `SERVICE-EXTERNAL-IP` by:
- Navigate in your browser to `SERVICE-EXTERNAL-IP`
At this point, you should see the setup page for WordPress in your web browser.
## Clean Up
Once you are done with this example, you can clean up all its artifacts with the following commands:
- Remove `Workload`
```bash
kubectl delete -f cluster/examples/workloads/wordpress-gcp/workload.yaml
```
- Remove `KubernetesCluster`
```bash
kubectl delete -f cluster/examples/workloads/wordpress-gcp/kubernetes.yaml
```
- Remove GCP `Provider` and `ResourceClasses`
```bash
kubectl delete -f cluster/examples/workloads/wordpress-gcp/provider.yaml
```
- Delete Google Project
```bash
# list all your projects
gcloud projects list
# delete demo project
gcloud projects delete [demo-project-id]
```
# Overview
![Crossplane](media/banner.png)
Crossplane is an open source control plane that allows you to manage
applications and infrastructure the Kubernetes way. It provides the following
features:
- Deployment and management of cloud provider managed services using the
Kubernetes API.
- Management and scheduling of configuration data across multiple Kubernetes
clusters.
- Separation of concern between infrastructure owners, application owners, and
developers.
- Infrastructure agnostic packaging of applications and their dependencies.
- Scheduling applications into different clusters, zones, and regions.
Crossplane does not:
- Require that you run your workloads on Kubernetes.
- Manage the data plane across Kubernetes clusters.
- Manage or provision non-hosted Kubernetes clusters.
Crossplane can be [installed] into any Kubernetes cluster, and is compatible
with any Kubernetes-native project. It manages external services by installing
[Custom Resource Definitions] (CRDs) and [reconciling] instances of those Custom
Resources. Crossplane is built to be extensible, meaning that anyone can add
functionality for a new or existing cloud provider.
Crossplane is composed of four main components:
1. **Core Crossplane**: the set of Kubernetes CRDs and controllers that manage
installation of `providers`, `stacks`, and `applications`, as well as the
scheduling of configuration data to remote Kubernetes clusters.
2. **Providers**: the set of Kubernetes CRDs and controllers that provision and
manage services on cloud providers. A cloud provider is any service that
exposes infrastructure via an API.
- Examples: [Google Cloud Platform], [Amazon Web Services], [Azure],
[Alibaba], [Github]
3. **Stacks**: a bundled set of custom resources that together represent an
environment on a cloud provider. The bundle of instances can be created by a
single custom resource.
- Examples: [Sample GCP Stack], [Sample AWS Stack], [Sample Azure Stack]
4. **Applications**: a deployable unit of code and configuration, which, when
created, may involve provisioning new services which are managed by a
`provider`, or consuming services created by a `stack`.
- Examples: [Wordpress]
<!-- Named Links -->
[installed]: getting-started/install.md
[Custom Resource Definitions]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[reconciling]: https://kubernetes.io/docs/concepts/architecture/controller/
[Google Cloud Platform]: https://github.com/crossplane/provider-gcp
[Amazon Web Services]: https://github.com/crossplane/provider-aws
[Azure]: https://github.com/crossplane/provider-azure
[Alibaba]: https://github.com/crossplane/provider-alibaba
[Github]: https://github.com/crossplane/provider-github
[Sample GCP Stack]: https://github.com/crossplane/stack-gcp-sample
[Sample AWS Stack]: https://github.com/crossplane/stack-aws-sample
[Sample Azure Stack]: https://github.com/crossplane/stack-azure-sample
[Wordpress]: https://github.com/crossplane/app-wordpress
---
title: API Documentation
toc: true
weight: 400
---
# API Documentation
The Crossplane ecosystem contains many CRDs that map to API types represented by
external infrastructure providers. The documentation for these CRDs are
auto-generated on [doc.crds.dev]. To find the CRDs available for providers
maintained by the Crossplane organization, you can search for the Github URL, or
append it in the [doc.crds.dev] URL path.
For instance, to find the CRDs available for [provider-azure], you would go to:
[doc.crds.dev/github.com/crossplane/provider/azure]
By default, you will be served the latest CRDs on the `master` branch for the
repository. If you prefer to see the CRDs for a specific version, you can append
the git tag for the release:
[doc.crds.dev/github.com/crossplane/provider-azure@v0.8.0]
Crossplane repositories that are not providers but do publish CRDs are also
served on [doc.crds.dev]. For instance, the [crossplane/crossplane] repository.
Bugs and feature requests for API documentation should be [opened as issues] on
the open source [doc.crds.dev repo].
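The URL scheme described above is mechanical enough to compose in the shell; the repository and tag below are just examples:

```bash
REPO="github.com/crossplane/provider-azure"  # any repo indexed by doc.crds.dev
TAG="v0.8.0"                                 # optional release tag

echo "https://doc.crds.dev/${REPO}"          # latest CRDs on the default branch
echo "https://doc.crds.dev/${REPO}@${TAG}"   # CRDs at a specific release
```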
<!-- Named Links -->
[doc.crds.dev]: https://doc.crds.dev/
[provider-azure]: https://github.com/crossplane/provider-azure
[doc.crds.dev/github.com/crossplane/provider/azure]: https://doc.crds.dev/github.com/crossplane/provider-azure
[doc.crds.dev/github.com/crossplane/provider-azure@v0.8.0]: https://doc.crds.dev/github.com/crossplane/provider-azure@v0.8.0
[crossplane/crossplane]: https://doc.crds.dev/github.com/crossplane/crossplane
[opened as issues]: https://github.com/crdsdev/doc/issues/new
[doc.crds.dev repo]: https://github.com/crdsdev/doc
# Adding Amazon Web Services (AWS) to Crossplane
In this guide, we will walk through the steps necessary to configure your AWS
account to be ready for integration with Crossplane. This will be done by adding
an AWS `Provider` resource type, which enables Crossplane to communicate with an
AWS account.
## Requirements
Prior to adding AWS to Crossplane, the following steps need to be taken:
- Crossplane is installed in a k8s cluster
- AWS Stack is installed in the same cluster
- `kubectl` is configured to communicate with the same cluster
## Step 1: Configure `aws` CLI
Crossplane uses [AWS security credentials], and stores them as a [secret] which
is managed by an AWS `Provider` instance. In addition, the AWS default region is
also used for targeting a specific region. Crossplane requires the [`aws`
command line tool] to be [installed] and [configured]. Once installed, the credentials
and configuration will reside in `~/.aws/credentials` and `~/.aws/config`
respectively.
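For reference, after running `aws configure` the two files typically look like the following (the key values below are AWS's documentation placeholders, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

# ~/.aws/config
[default]
region = us-west-2
output = json
```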
## Step 2: Setup `aws` Provider
Run [setup.sh] script to read `aws` credentials and region, and create an `aws
provider` instance in Crossplane:
```bash
./cluster/examples/aws-setup-provider/setup.sh [--profile aws_profile]
```
The `--profile` switch is optional and specifies the [aws named profile] that
was set in Step 1. If not provided, the `default` profile will be selected.
Once the script is successfully executed, Crossplane will use the specified AWS
account and region in the given named profile to create subsequent AWS managed
resources.
You can confirm the existence of the AWS `Provider` by running:
```bash
kubectl -n crossplane-system get provider/aws-provider
```
## Optional: Setup AWS Provider Manually
An AWS [user][aws user] with `Administrative` privileges is needed to enable
Crossplane to create the required resources. Once the user is provisioned, an
[Access Key][] needs to be created so the user can have API access.
Using the set of [access key credentials][AWS security credentials] for the user
with the right access, we need to [install][install-aws] [`aws cli`][aws command
line tool], and then [configure][aws-cli-configure] it.
When the AWS cli is configured, the credentials and configuration will be in
`~/.aws/credentials` and `~/.aws/config` respectively. These will be consumed in
the next step.
When configuring the AWS cli, the user credentials could be configured under a
specific [AWS named profile][], or under `default`. Without loss of generality,
in this guide let's assume that the credentials are configured under the
`aws_profile` profile (which could also be `default`). We'll use this profile to
set up the cloud provider in the next section.
Crossplane uses the AWS user credentials that were configured in the previous
step to create resources in AWS. These credentials will be stored as a
[secret][kubernetes secret] in Kubernetes, and will be used by an AWS `Provider`
instance. The default AWS region is also pulled from the cli configuration, and
added to the AWS provider.
To store the credentials as a secret, run:
```bash
# retrieve profile's credentials, save it under 'default' profile, and base64 encode it
BASE64ENCODED_AWS_ACCOUNT_CREDS=$(echo -e "[default]\naws_access_key_id = $(aws configure get aws_access_key_id --profile $aws_profile)\naws_secret_access_key = $(aws configure get aws_secret_access_key --profile $aws_profile)" | base64 | tr -d "\n")
# retrieve the profile's region from config
AWS_REGION=$(aws configure get region --profile ${aws_profile})
```
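Before applying anything to the cluster, you can sanity-check the encoded payload by round-tripping it locally. The sketch below uses dummy values rather than a real profile:

```bash
# build the same INI-style payload the snippet above produces, from dummy values
CREDS=$(printf '[default]\naws_access_key_id = AKIAIOSFODNN7EXAMPLE\naws_secret_access_key = EXAMPLEKEY\n')
B64=$(printf '%s' "$CREDS" | base64 | tr -d '\n')

# round-trip: decoding should print the original INI text
printf '%s' "$B64" | base64 -d
```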
At this point, the region and the encoded credentials are stored in respective
variables. Next, we'll need to create an instance of AWS provider:
```bash
cat > provider.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
name: aws-account-creds
namespace: crossplane-system
type: Opaque
data:
credentials: ${BASE64ENCODED_AWS_ACCOUNT_CREDS}
---
apiVersion: aws.crossplane.io/v1alpha3
kind: Provider
metadata:
name: aws-provider
spec:
region: ${AWS_REGION}
credentialsSecretRef:
namespace: crossplane-system
name: aws-account-creds
key: credentials
EOF
# apply it to the cluster:
kubectl apply -f "provider.yaml"
# delete the credentials variable
unset BASE64ENCODED_AWS_ACCOUNT_CREDS
```
The output will look like the following:
```bash
secret/aws-user-creds created
provider.aws.crossplane.io/aws-provider created
```
The `aws-provider` resource will be used in other resources that we will create,
to provide access information to the configured AWS account.
<!-- Named Links -->
[`aws` command line tool]: https://aws.amazon.com/cli/
[AWS SDK for GO]: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/setting-up.html
[installed]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
[configured]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
[AWS security credentials]: https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
[secret]:https://kubernetes.io/docs/concepts/configuration/secret/
[setup.sh]: https://github.com/crossplane/crossplane/blob/master/cluster/examples/aws-setup-provider/setup.sh
[aws named profile]: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
[aws user]: https://docs.aws.amazon.com/mediapackage/latest/ug/setting-up-create-iam-user.html
[Access Key]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
[AWS security credentials]: https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
[aws command line tool]: https://aws.amazon.com/cli/
[install-aws]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
[aws-cli-configure]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
[kubernetes secret]: https://kubernetes.io/docs/concepts/configuration/secret/
[AWS named profile]: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
# Adding Microsoft Azure to Crossplane
In this guide, we will walk through the steps necessary to configure your Azure
account to be ready for integration with Crossplane. The general steps we will
take are summarized below:
* Create a new service principal (account) that Crossplane will use to create
and manage Azure resources
* Add the required permissions to the account
* Consent to the permissions using an administrator account
## Preparing your Microsoft Azure Account
In order to manage resources in Azure, you must provide credentials for an Azure
service principal that Crossplane can use to authenticate. This assumes that you
have already [set up the Azure CLI
client](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest)
with your credentials.
Create a JSON file that contains all the information needed to connect and
authenticate to Azure:
```bash
# create service principal with Owner role
az ad sp create-for-rbac --sdk-auth --role Owner > crossplane-azure-provider-key.json
```
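For reference, the file generated with `--sdk-auth` is a JSON document whose most important fields look roughly like this (the IDs below are placeholders; the real file also contains several Azure endpoint URLs):

```json
{
  "clientId": "00000000-1111-2222-3333-444444444444",
  "clientSecret": "example-secret",
  "subscriptionId": "11111111-2222-3333-4444-555555555555",
  "tenantId": "22222222-3333-4444-5555-666666666666"
}
```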
Take note of the `clientID` value from the JSON file that we just created, and
save it to an environment variable:
```bash
export AZURE_CLIENT_ID=<clientId value from json file>
```
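If you would rather not copy the value by hand, and `jq` is not available, a `sed` one-liner can extract it — a sketch, assuming the file keeps `"clientId": "..."` on a single line:

```bash
# pull the clientId value out of the JSON key file
export AZURE_CLIENT_ID=$(sed -n 's/.*"clientId": *"\([^"]*\)".*/\1/p' crossplane-azure-provider-key.json)
echo "$AZURE_CLIENT_ID"
```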
Now add the required permissions to the service principal that will allow it to
manage the necessary resources in Azure:
```bash
# add required Azure Active Directory permissions
az ad app permission add --id ${AZURE_CLIENT_ID} --api 00000002-0000-0000-c000-000000000000 --api-permissions 1cda74f2-2616-4834-b122-5cb1b07f8a59=Role 78c8a3c8-a07e-4b9e-af1b-b5ccab50a175=Role
# grant (activate) the permissions
az ad app permission grant --id ${AZURE_CLIENT_ID} --api 00000002-0000-0000-c000-000000000000 --expires never
```
You might see an error similar to the following, but that is OK; the permissions
should still have been granted:
```console
Operation failed with status: 'Conflict'. Details: 409 Client Error: Conflict for url: https://graph.windows.net/e7985bc4-a3b3-4f37-b9d2-fa256023b1ae/oauth2PermissionGrants?api-version=1.6
```
Finally, you need to grant admin permissions on the Azure Active Directory to
the service principal because it will need to create other service principals
for your `AKSCluster`:
```bash
# grant admin consent to the service principal you created
az ad app permission admin-consent --id "${AZURE_CLIENT_ID}"
```
Note: You might need the `Global Administrator` role to `Grant admin consent for
Default Directory`. Please contact the administrator of your Azure subscription.
To check your role, go to `Azure Active Directory` -> `Roles and
administrators`. You can find your role(s) by clicking on `Your Role (Preview)`.
After these steps are completed, you should have the following file on your
local filesystem:
* `crossplane-azure-provider-key.json`
## Setup Azure Provider
Before creating any resources, we need to create and configure an Azure cloud
provider resource in Crossplane, which stores the cloud account information in
it. All the requests from Crossplane to Azure Cloud will use the credentials
attached to this provider resource. The following command assumes that you have
a `crossplane-azure-provider-key.json` file that belongs to the account you'd
like Crossplane to use.
```bash
BASE64ENCODED_AZURE_ACCOUNT_CREDS=$(base64 crossplane-azure-provider-key.json | tr -d "\n")
```
Now we'll create our `Secret` that contains the credentials and the `Provider`
resource that refers to that secret:
```bash
cat > provider.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
name: azure-account-creds
namespace: crossplane-system
type: Opaque
data:
credentials: ${BASE64ENCODED_AZURE_ACCOUNT_CREDS}
---
apiVersion: azure.crossplane.io/v1alpha3
kind: Provider
metadata:
name: azure-provider
spec:
credentialsSecretRef:
namespace: crossplane-system
name: azure-account-creds
key: credentials
EOF
# apply it to the cluster:
kubectl apply -f "provider.yaml"
# delete the credentials variable
unset BASE64ENCODED_AZURE_ACCOUNT_CREDS
```
The output will look like the following:
```bash
secret/azure-user-creds created
provider.azure.crossplane.io/azure-provider created
```
The `azure-provider` resource will be used in other resources that we will
create, to provide access information to the configured Azure account.
# Adding Google Cloud Platform (GCP) to Crossplane
In this guide, we will walk through the steps necessary to configure your GCP
account to be ready for integration with Crossplane. The general steps we will
take are summarized below:
* Create a new example project that all resources will be deployed to
* Enable required APIs such as Kubernetes and CloudSQL
* Create a service account that will be used to perform GCP operations from
Crossplane
* Assign necessary roles to the service account
* Enable billing
For your convenience, the specific steps to accomplish those tasks are provided
for you below using either the `gcloud` command line tool, or the GCP console in
a web browser. You can choose whichever you are more comfortable with.
## Option 1: gcloud Command Line Tool
If you have the `gcloud` tool installed, you can run the commands below from the
crossplane directory.
Instructions for installing `gcloud` can be found in the [Google
docs](https://cloud.google.com/sdk/install).
### Using `gcp-credentials.sh`
In the `cluster/examples` directory you will find a helper script,
[`gcp-credentials.sh`](https://raw.githubusercontent.com/crossplane/crossplane/master/cluster/examples/gcp-credentials.sh).
This script will prompt you for the organization, project, and billing account
that will be used by `gcloud` when creating a project, service account, and
credentials file (`crossplane-gcp-provider-key.json`). The chosen project and
created service account will have access to the services and roles sufficient to
run the Crossplane GCP examples.
```console
$ cluster/examples/gcp-credentials.sh
... EXAMPLE OUTPUT ONLY
export ORGANIZATION_ID=987654321
export PROJECT_ID=crossplane-example-1234
export EXAMPLE_SA=example-1234@crossplane-example-1234.iam.gserviceaccount.com
export BASE64ENCODED_GCP_PROVIDER_CREDS=$(base64 crossplane-gcp-provider-key.json | tr -d "\n")
```
After running `gcp-credentials.sh`, a series of `export` commands will be shown.
Copy and paste the `export` commands that are provided. These variable names
will be referenced throughout the Crossplane examples, generally with a `sed`
command.
You will also find a `crossplane-gcp-provider-key.json` file in the current
working directory. Be sure to remove this file when you are done with the
example projects.
### Running `gcloud` by hand
```bash
# list your organizations (if applicable), take note of the specific organization ID you want to use
# if you have more than one organization (not common)
gcloud organizations list
# create a new project (project id must be <=30 characters)
export EXAMPLE_PROJECT_ID=crossplane-example-123
gcloud projects create $EXAMPLE_PROJECT_ID --enable-cloud-apis # [--organization $ORGANIZATION_ID]
# or, record the PROJECT_ID value of an existing project
# export EXAMPLE_PROJECT_ID=$(gcloud projects list --filter NAME=$EXAMPLE_PROJECT_NAME --format="value(PROJECT_ID)")
# link billing to the new project
gcloud beta billing accounts list
gcloud beta billing projects link $EXAMPLE_PROJECT_ID --billing-account=$ACCOUNT_ID
# enable Kubernetes API
gcloud --project $EXAMPLE_PROJECT_ID services enable container.googleapis.com
# enable CloudSQL API
gcloud --project $EXAMPLE_PROJECT_ID services enable sqladmin.googleapis.com
# enable Redis API
gcloud --project $EXAMPLE_PROJECT_ID services enable redis.googleapis.com
# enable Compute API
gcloud --project $EXAMPLE_PROJECT_ID services enable compute.googleapis.com
# enable Service Networking API
gcloud --project $EXAMPLE_PROJECT_ID services enable servicenetworking.googleapis.com
# enable Additional APIs needed for the example or project
# See `gcloud services list` for a complete list
# create service account
gcloud --project $EXAMPLE_PROJECT_ID iam service-accounts create example-123 --display-name "Crossplane Example"
# export service account email
export EXAMPLE_SA="example-123@$EXAMPLE_PROJECT_ID.iam.gserviceaccount.com"
# create service account key (this will create a `crossplane-gcp-provider-key.json` file in your current working directory)
gcloud --project $EXAMPLE_PROJECT_ID iam service-accounts keys create --iam-account $EXAMPLE_SA crossplane-gcp-provider-key.json
# assign roles
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/iam.serviceAccountUser"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/container.admin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/redis.admin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/compute.networkAdmin"
```
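The five role bindings above differ only in the role name, so they can be collapsed into a loop. The sketch below is a dry run that only prints each command for review; remove the leading `echo` to execute them:

```bash
EXAMPLE_PROJECT_ID=crossplane-example-123
EXAMPLE_SA="example-123@${EXAMPLE_PROJECT_ID}.iam.gserviceaccount.com"

# dry run: print each binding command instead of executing it
for role in roles/iam.serviceAccountUser roles/cloudsql.admin \
            roles/container.admin roles/redis.admin roles/compute.networkAdmin; do
  echo gcloud projects add-iam-policy-binding "$EXAMPLE_PROJECT_ID" \
    --member "serviceAccount:$EXAMPLE_SA" --role="$role"
done
```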
## Option 2: GCP Console in a Web Browser
If you chose to use the `gcloud` tool, you can skip this section entirely.
Create a GCP example project which we will use to host our example GKE cluster,
as well as our example CloudSQL instance.
- Login into [GCP Console](https://console.cloud.google.com)
- Create a [new
project](https://console.cloud.google.com/flows/enableapi?apiid=container.googleapis.com,sqladmin.googleapis.com,redis.googleapis.com)
(either standalone or under an existing organization)
- Create Example Service Account
- Navigate to: [Create Service
Account](https://console.cloud.google.com/iam-admin/serviceaccounts)
- `Service Account Name`: type "example"
- `Service Account ID`: leave auto assigned
- `Service Account Description`: type "Crossplane example"
- Click `Create` button
- This should advance to the next section `2 Grant this service account
access to project (optional)`
- We will assign this account 3 roles:
- `Service Account User`
- `Cloud SQL Admin`
- `Kubernetes Engine Admin`
- `Compute Network Admin`
- Click `Create` button
- This should advance to the next section `3 Grant users access to this
service account (optional)`
- We don't need to assign any user or admin roles to this account for the
example purposes, so you can leave the following two fields blank:
- `Service account users role`
- `Service account admins role`
- Next, we will create and export service account key
- Click `+ Create Key` button.
- This should open a `Create Key` side panel
- Select `json` for the Key type (should be selected by default)
- Click `Create`
- This should show `Private key saved to your computer` confirmation
dialog
- You also should see `crossplane-example-1234-[suffix].json` file in your
browser's Download directory
- Save (copy or move) this file into this example directory, renaming it
to `crossplane-gcp-provider-key.json`
- Enable `Cloud SQL API`
- Navigate to [Cloud SQL Admin
API](https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview)
- Click `Enable`
- Enable `Kubernetes Engine API`
- Navigate to [Kubernetes Engine
API](https://console.developers.google.com/apis/api/container.googleapis.com/overview)
- Click `Enable`
- Enable `Cloud Memorystore for Redis`
- Navigate to [Cloud Memorystore for
Redis](https://console.developers.google.com/apis/api/redis.googleapis.com/overview)
- Click `Enable`
- Enable `Compute Engine API`
- Navigate to [Compute Engine
API](https://console.developers.google.com/apis/api/compute.googleapis.com/overview)
- Click `Enable`
- Enable `Service Networking API`
- Navigate to [Service Networking
API](https://console.developers.google.com/apis/api/servicenetworking.googleapis.com/overview)
- Click `Enable`
### Enable Billing
You will need to enable billing for your account in order to create and use
Kubernetes clusters with GKE.
- Go to [GCP Console](https://console.cloud.google.com)
- Select example project
- Click `Enable Billing`
- Go to [Kubernetes Clusters](https://console.cloud.google.com/kubernetes/list)
- Click `Enable Billing`
## Setup GCP Provider
Before creating any resources, we need to create and configure a GCP cloud
provider resource in Crossplane, which stores the cloud account information.
All the requests from Crossplane to GCP will use the credentials attached to
this provider resource. The following commands assume that you have a
`crossplane-gcp-provider-key.json` file that belongs to the account Crossplane
will use, and that you know the GCP project ID. You can find the project ID in
the JSON credentials file or in the GCP console. Without loss of generality,
let's assume the project ID is `my-cool-gcp-project` in this guide.
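The project ID can also be read out of the key file itself, since the key file is plain JSON. A minimal sketch, assuming `python3` is available (the `sample-key.json` file written below is a stand-in for your real `crossplane-gcp-provider-key.json`, so the sketch does not touch your real key):

```shell
# Stand-in for a real service account key; a real one has many more fields.
cat > sample-key.json <<'EOF'
{"type": "service_account", "project_id": "my-cool-gcp-project"}
EOF
# The key file is plain JSON, so the project_id field can be read directly.
PROJECT_ID=$(python3 -c "import json; print(json.load(open('sample-key.json'))['project_id'])")
echo "$PROJECT_ID"
```

Point the one-liner at `crossplane-gcp-provider-key.json` to read your real project ID.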
First, let's encode the credential file contents and put it in a variable:
```bash
# base64 encode the GCP credentials
BASE64ENCODED_GCP_PROVIDER_CREDS=$(base64 crossplane-gcp-provider-key.json | tr -d "\n")
```
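If you want to verify the encoding, a quick round-trip check is a sketch like the following (it uses a stand-in file so it does not touch your real key; `base64 --decode` assumes GNU coreutils):

```shell
# Encode a stand-in file the same way as above, then decode and compare:
printf '{"project_id": "my-cool-gcp-project"}' > sample-key.json
ENCODED=$(base64 sample-key.json | tr -d "\n")
echo "$ENCODED" | base64 --decode | diff - sample-key.json && echo "round-trip OK"
```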
Next, store the project ID of the GCP project in which you would like to
provision infrastructure as a variable:
```bash
# replace this with your own gcp project id
PROJECT_ID=my-cool-gcp-project
```
Finally, store the namespace in which you want to save the provider's secret as
a variable:
```bash
# change this namespace value if you want to use a different namespace (e.g. gitlab-managed-apps)
PROVIDER_SECRET_NAMESPACE=crossplane-system
```
Now we'll create the `Secret` resource that contains the credential, and the
`Provider` resource which refers to that secret:
```bash
cat > provider.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
  name: gcp-account-creds
  namespace: ${PROVIDER_SECRET_NAMESPACE}
type: Opaque
data:
  credentials: ${BASE64ENCODED_GCP_PROVIDER_CREDS}
---
apiVersion: gcp.crossplane.io/v1alpha3
kind: Provider
metadata:
  name: gcp-provider
spec:
  # replace this with your own gcp project id
  projectID: ${PROJECT_ID}
  credentialsSecretRef:
    namespace: ${PROVIDER_SECRET_NAMESPACE}
    name: gcp-account-creds
    key: credentials
EOF
# apply it to the cluster:
kubectl apply -f "provider.yaml"
# remove the encoded credentials from the shell environment
unset BASE64ENCODED_GCP_PROVIDER_CREDS
```
The output will look like the following:
```bash
secret/gcp-account-creds created
provider.gcp.crossplane.io/gcp-provider created
```
The `gcp-provider` resource will be used in other resources that we will create,
to provide access information to the configured GCP account.
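Because the heredoc above substitutes shell variables, an unset variable silently produces an empty field in the generated YAML. A small sanity-check sketch (it runs against a sample file so it stands alone; point the `grep` at your real `provider.yaml` instead):

```shell
# An unset variable leaves a trailing-blank field like "projectID:" behind.
PROJECT_ID=""   # simulate a variable that was never set
cat > provider-sample.yaml <<EOF
projectID: ${PROJECT_ID}
EOF
if grep -qE ': *$' provider-sample.yaml; then
  echo "empty field found; re-check your variables"
else
  echo "provider-sample.yaml looks populated"
fi
```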
---
title: Observability Developer Guide
toc: true
weight: 102
indent: true
---
# Observability Developer Guide
## Introduction
Observability is crucial to Crossplane users; both those operating Crossplane
and those using Crossplane to operate their infrastructure. Crossplane currently
approaches observability via Kubernetes events and structured logs. Timeseries
metrics are desired but [not yet implemented].
## Goals
In short, a non-admin user and an admin user should both be able to debug any
issues only by inspecting logs and events. There should be no need to rebuild
the Crossplane binary or to reach out to a Crossplane developer.
A user should be able to:
* Debug an issue without rebuilding the Crossplane binary
* Understand an issue without contacting a cluster admin
* Ask a cluster admin to check the logs for more details about the reason the
issue happened, if the details are not part of the error message
A cluster admin should be able to:
* Debug an issue without rebuilding the Crossplane binary
* Debug an issue only by looking at the logs
* Debug an issue without needing to contact a Crossplane developer
## Error reporting in the logs
Error reporting in the logs is mostly intended for consumption by Crossplane
cluster admins. A cluster admin should be able to debug any issue by inspecting
the logs, without needing to add more logs themselves or contact a Crossplane
developer. This means that logs should contain:
* Error messages, at either the info or debug level as contextually appropriate
* Any context leading up to an error, typically at debug level, so that the
errors can be debugged
## Error reporting as events
Error reporting as Kubernetes events is primarily aimed toward end-users of
Crossplane who are not cluster admins. Crossplane typically runs as a Kubernetes
pod, and thus it is unlikely that most users of Crossplane will have access to
its logs. [Events], on the other hand, are available as top-level Kubernetes
objects, and show up the objects they relate to when running `kubectl describe`.
Events should be recorded in the following cases:
* A significant operation is taken on a resource
* The state of a resource is changed
* An error occurs
The events recorded in these cases can be thought of as forming an event log of
things that happen for the resources that Crossplane manages. Each event should
refer back to the relevant controller and resource, and use other fields of the
Event kind as appropriate.
More details about examples of how to interact with events can be found in the
guide to [debugging an application cluster].
## Choosing between methods of error reporting
There are many ways to report errors, such as:
* Metrics
* Events
* Logging
* Tracing
It can be confusing to figure out which one is appropriate in a given situation.
This section will try to offer advice and a mindset that can be used to help
make this decision.
Let's set the context by listing the different user scenarios where error
reporting may be consumed. Here are the typical scenarios as we imagine them:
1. A person **using** a system needs to figure out why things aren't working as
expected, and whether they made a mistake that they can correct.
2. A person **operating** a service needs to monitor the service's **health**,
both now and historically.
3. A person **debugging** a problem which happened in a **live environment**
(often an **operator** of the system) needs information to figure out what
happened.
4. A person **developing** the software wants to **observe** what is happening.
5. A person **debugging** the software in a **development environment**
(typically a **developer** of the system) wants to debug a problem (there is
a lot of overlap between this and the live environment debugging scenario).
The goal is to satisfy the users in all of the scenarios. We'll refer to the
scenarios by number.
The short version is: we should do whatever satisfies all of the scenarios.
Logging and events are the recommendations for satisfying the scenarios,
although they don't cover scenario 2.
The longer version is:
* Scenario 1 is best served by events in the context of Crossplane, since the
users may not have access to read logs or metrics, and even if they did, it
would be hard to relate them back to the event the user is trying to
understand.
* Scenario 2 is best served by metrics, because they can be aggregated and
understood as a whole. And because they can be used to track things over time.
* Scenario 3 is best served by either logging that contains all the information
about and leading up to the event. Request-tracing systems are also useful for
this scenario.
* Scenario 4 is usually logs, maybe at a more verbose level than normal. But it
could be an attached debugger or some other type of tool. It could also be a
test suite.
* Scenario 5 is usually either logs, up to the highest imaginable verbosity, or
an attached debugging session. If there's a gap in reporting, it could involve
adding some print statements to get more logging.
As for the question of how to decide whether to log or not, we believe it helps
to try to visualize which of the scenarios the error or information in question
will be used for. We recommend starting with reporting as much information as
possible, but with configurable runtime behavior so that, for example, debugging
logs don't show up in production normally.
For the question of what constitutes an error, errors should be actionable by a
human. See the [Dave Cheney article] on this topic for some more discussion.
## In Practice
Crossplane provides two observability libraries as part of crossplane-runtime:
* [`event`] emits Kubernetes events.
* [`logging`] produces structured logs. Refer to its package documentation for
additional context on its API choices.
Keep the following in mind when using the above libraries:
* [Do] [not] use package level loggers or event recorders. Instantiate them in
`main()` and plumb them down to where they're needed.
* Each [`Reconciler`] implementation should use its own `logging.Logger` and
`event.Recorder`. Implementations are strongly encouraged to default to
`logging.NewNopLogger()` and `event.NewNopRecorder()`, and to accept
functional loggers and recorders via variadic options. See for example the
[managed resource reconciler].
* Each controller should use its name as its event recorder's name, and include
its name under the `controller` structured logging key. The controller's name
should be of the form `controllertype/resourcekind`, for example
`managed/cloudsqlinstance` or `stacks/stackdefinition`. Controller names
should always be lowercase.
* Logs and events should typically be emitted by the `Reconcile` method of the
`Reconciler` implementation; not by functions called by `Reconcile`. Author
the methods orchestrated by `Reconcile` as if they were a library; prefer
surfacing useful information for the `Reconciler` to log (for example by
[wrapping errors]) over plumbing loggers and event recorders down to
increasingly deeper layers of code.
* Almost nothing is worth logging at info level. When deciding which logging
level to use, consider a production deployment of Crossplane reconciling tens
or hundreds of managed resources. If in doubt, pick debug. You can easily
increase the log level later if it proves warranted.
* The above is true even for errors; consider the audience. Is this an error
only the Crossplane cluster operator can fix? Does it indicate a significant
degradation of Crossplane's functionality? If so, log it at info. If the error
pertains to a single Crossplane resource, emit an event instead.
* Always log errors under the structured logging key `error` (e.g.
`log.Debug("boom!", "error", err)`). Many logging implementations (including
Crossplane's) add context like stack traces for this key.
* Emit events liberally; they're rate limited and deduplicated.
* Follow [API conventions] when emitting events; ensure event reasons are unique
and `CamelCase`.
* Consider emitting events and logs when a terminal condition is encountered
(e.g. `Reconcile` returns) rather than logging logic flow. That is, prefer one
log line that reads "encountered an error fooing the bar" over two log lines
that read "about to foo the bar" and "encountered an error". Recall that if
the audience is a developer debugging Crossplane they will be provided a stack
trace with file and line context when an error is logged.
* Consider including the `reconcile.Request`, and the resource's UID and
resource version (not API version) under the keys `request`, `uid`, and
`version`. Doing so allows log readers to determine what specific version of a
resource the log pertains to.
Finally, when in doubt, aim for consistency with existing Crossplane controller
implementations.
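As an illustration only (not part of crossplane-runtime), the controller naming convention above — lowercase `controllertype/resourcekind` — can be checked with a simple pattern:

```shell
# Names that follow the convention match; mixed case does not.
pattern='^[a-z]+/[a-z]+$'
for name in managed/cloudsqlinstance stacks/stackdefinition Managed/CloudSQL; do
  if echo "$name" | grep -Eq "$pattern"; then
    echo "$name: ok"
  else
    echo "$name: does not follow the convention"
  fi
done
```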
<!-- Named Links -->
[not yet implemented]: https://github.com/crossplane/crossplane/issues/314
[Events]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#event-v1-core
[debugging an application cluster]: https://kubernetes.io/docs/tasks/debug-application-cluster/
[Dave Cheney article]: https://dave.cheney.net/2015/11/05/lets-talk-about-logging
[`event`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/event
[`logging`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/logging
[Do]: https://peter.bourgon.org/go-best-practices-2016/#logging-and-instrumentation
[not]: https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern
[`Reconciler`]: https://godoc.org/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler
[managed resource reconciler]: https://github.com/crossplane/crossplane-runtime/blob/a6bb0/pkg/reconciler/managed/reconciler.go#L436
[wrapping errors]: https://godoc.org/github.com/pkg/errors#Wrap
[API conventions]: https://github.com/kubernetes/community/blob/09f55c6/contributors/devel/sig-architecture/api-conventions.md#events
---
title: Contributing
toc: true
weight: 100
---
# Contributing
The following documentation is for developers who wish to contribute to or
extend Crossplane. Please [open an
issue](https://github.com/crossplane/crossplane/issues/new) for any additional
documentation you would like to see in this section.
1. [Services Developer Guide]
2. [Observability Developer Guide]
<!-- Named Link -->
[Services Developer Guide]: services_developer_guide.md
[Observability Developer Guide]: observability_developer_guide.md
---
title: Applications
toc: true
weight: 7
indent: true
---
# From Workloads to Apps
Crossplane *Applications* allow you to define your application and its managed
service dependencies as a single installable unit. They serve as an abstraction
above the claims, classes, and managed resources we explored in previous
sections. They are portable in that they create claims for infrastructure that
are satisfied by different managed service implementations depending on how your
Crossplane control cluster is configured.
## Deploying the Wordpress Application on GCP
[Wordpress] is a relatively simple monolithic application that only requires
compute to run its containerized binary and a connection to a MySQL database.
Wordpress is typically installed in a Kubernetes cluster using its official
[Helm chart]. Crossplane applications let you define your application using
common configuration tools such as [Helm] and [Kustomize], but represent them as
a [CustomResourceDefinition] in your cluster.
The steps for using a Crossplane application involve defining your
infrastructure, installing the application, then creating an instance of that
application. In the [previous section], we completed the first step by creating
our `GCPSample` instance. In contrast to the GCP provider and GCP sample stack,
the Wordpress application will be installed with a `StackInstall` instead of a
`ClusterStackInstall`. This means that the installation will only be available
in the namespace that we specify.
Create a file named `wordpress-install.yaml` with the following content:
```yaml
apiVersion: stacks.crossplane.io/v1alpha1
kind: StackInstall
metadata:
  name: app-wordpress
  namespace: cp-quickstart
spec:
  package: crossplane/app-wordpress:master
```
Then create it in your cluster:
```
kubectl apply -f wordpress-install.yaml
```
We can now create Wordpress instances in the `cp-quickstart` namespace
using a single `CustomResourceDefinition`. When we do, a `KubernetesCluster`
claim and a `MySQLInstance` claim will be created in the namespace, as well as a
`KubernetesApplication` that contains the Wordpress application components. The
claims will be satisfied by the `GKEClusterClass` and `CloudSQLInstanceClass` we
created in the [previous section]. Let's create a `WordpressInstance` and see
what happens.
Create a file named `my-wordpress.yaml` with the following content:
```yaml
apiVersion: wordpress.apps.crossplane.io/v1alpha1
kind: WordpressInstance
metadata:
  name: my-wordpress
  namespace: cp-quickstart
spec:
  provisionPolicy: ProvisionNewCluster
```
Then create it in your cluster:
```
kubectl apply -f my-wordpress.yaml
```
You can use the following commands to look at the resources being provisioned:
```
kubectl -n cp-quickstart get kubernetesclusters
```
```
NAME STATUS CLASS-KIND CLASS-NAME RESOURCE-KIND RESOURCE-NAME AGE
my-wordpress-cluster GKEClusterClass my-gcp-gkeclusterclass GKECluster cp-quickstart-my-wordpress-cluster-jxftn 19s
```
```
kubectl -n cp-quickstart get mysqlinstances
```
```
NAME STATUS CLASS-KIND CLASS-NAME RESOURCE-KIND RESOURCE-NAME AGE
my-wordpress-sql CloudSQLInstanceClass my-gcp-cloudsqlinstanceclass-mysql CloudSQLInstance cp-quickstart-my-wordpress-sql-vz9r7 30s
```
```
kubectl -n cp-quickstart get kubernetesapplications
```
```
NAME CLUSTER STATUS DESIRED SUBMITTED
my-wordpress-app Pending
```
It will take some time for the `GKECluster` and `CloudSQLInstance` to be
provisioned and ready, but when they are, Crossplane will schedule the Wordpress
`KubernetesApplication` to the remote `GKECluster`, as well as send the
`CloudSQLInstance` connection information to the remote cluster in the form of a
`Secret`. Because Wordpress is running in the `GKECluster` that we created in
the same network as the `CloudSQLInstance`, it will be able to communicate with
it freely.
When the `KubernetesApplication` has submitted all of its resources to the
cluster, you should be able to view the IP address of the Wordpress `Service`:
```
kubectl -n cp-quickstart describe kubernetesapplicationresources my-wordpress-service
```
```
Name:         my-wordpress-service
Namespace:    cp-quickstart
Labels:       app=my-wordpress
Annotations:  <none>
API Version:  workload.crossplane.io/v1alpha1
Kind:         KubernetesApplicationResource
Metadata:
  Creation Timestamp:  2020-03-23T23:07:07Z
  Finalizers:
    finalizer.kubernetesapplicationresource.workload.crossplane.io
  Generation:  1
  Owner References:
    API Version:           workload.crossplane.io/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  KubernetesApplication
    Name:                  my-wordpress-app
    UID:                   c4baec14-c8ac-4c75-94f9-0a1cd3638ea6
  Resource Version:  3509
  Self Link:         /apis/workload.crossplane.io/v1alpha1/namespaces/cp-quickstart/kubernetesapplicationresources/my-wordpress-service
  UID:               80f5513a-704c-41f9-b5e9-d681dda85feb
Spec:
  Target Ref:
    Name:  c568d44c-c882-42ca-ab4b-217cd101b269
  Template:
    API Version:  v1
    Kind:         Service
    Metadata:
      Labels:
        App:      wordpress
      Name:       wordpress
      Namespace:  my-wordpress
    Spec:
      Ports:
        Port:  80
      Selector:
        App:  wordpress
      Type:  LoadBalancer
Status:
  Conditioned Status:
    Conditions:
      Last Transition Time:  2020-03-23T23:07:11Z
      Reason:                Successfully reconciled resource
      Status:                True
      Type:                  Synced
  Remote:
    Load Balancer:
      Ingress:
        Ip:  34.94.54.204  # the application is running at this IP address
  State:  Submitted
Events:   <none>
```
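Rather than scanning the full `describe` output, the address can be pulled out with a targeted query. A sketch, demonstrated offline against a sample of the status shown above (the jsonpath variant in the comments is an assumption about your live resource name):

```shell
# With a live cluster, a jsonpath query like the following would return the IP
# directly (resource name taken from the example above):
#   kubectl -n cp-quickstart get kubernetesapplicationresource my-wordpress-service \
#     -o jsonpath='{.status.remote.loadBalancer.ingress[0].ip}'
# Offline demonstration of the same field lookup against a status sample:
cat > status-sample.json <<'EOF'
{"status": {"remote": {"loadBalancer": {"ingress": [{"ip": "34.94.54.204"}]}}}}
EOF
python3 -c "import json; s = json.load(open('status-sample.json')); print(s['status']['remote']['loadBalancer']['ingress'][0]['ip'])"
```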
Navigating to the address in your browser should take you to the Wordpress
welcome page.
Now you are familiar with **Providers**, **Stacks**, and **Applications**. The
next step is to build and deploy your own. Take a look at some of our [guides]
to learn more.
## Clean Up
If you would like to clean up the resources created in this section, run the
following commands:
```
kubectl delete -f my-wordpress.yaml
kubectl delete -f my-gcp.yaml
```
<!-- Named Links -->
[Wordpress]: https://wordpress.org/
[Helm chart]: https://github.com/bitnami/charts/tree/master/bitnami/wordpress
[Helm]: https://helm.sh/
[Kustomize]: https://kustomize.io/
[CustomResourceDefinition]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[previous section]: stack.md
[guides]: ../guides/guides.md
---
title: Configure
toc: true
weight: 2
indent: true
---
# Configure Your Cloud Provider Account
In order for Crossplane to be able to manage resources in a specific cloud
provider, you will need to create an account for Crossplane to use. Use the
links below for cloud-specific instructions to create an account that can be
used throughout the guides:
* [Google Cloud Platform (GCP) Service Account]
* [Microsoft Azure Service Principal]
* [Amazon Web Services (AWS) IAM User]
Once you have configured your cloud provider account, you can get started
provisioning resources!
<!-- Named Links -->
[Google Cloud Platform (GCP) Service Account]: ../cloud-providers/gcp/gcp-provider.md
[Microsoft Azure Service Principal]: ../cloud-providers/azure/azure-provider.md
[Amazon Web Services (AWS) IAM User]: ../cloud-providers/aws/aws-provider.md
---
title: Dynamic Provisioning
toc: true
weight: 4
indent: true
---
# Dynamic Provisioning
While someone in your organization needs to be familiar with the lowest level of
infrastructure, many people just want a simple workflow for acquiring and using
infrastructure. Crossplane provides the ability for organizations to define a
catalog of infrastructure and applications, then for teams and individuals to
consume from the catalog with application-focused requests. This is made
possible by *dynamic provisioning*.
## Dynamically Provision a Redis Cluster on GCP
In the [previous example], we created a `CloudMemorystoreInstance` then claimed
it directly with a `RedisCluster` claim. With dynamic provisioning, we can
instead create a *resource class*, then request creation of a managed resource
(and subsequent creation of its external implementation) using a claim.
> **Resource Class**: an instance of a cluster-scoped CRD that represents
> configuration for an external unit of infrastructure. The fields of a resource
> class CRD map 1-to-1 with the fields exposed by the provider's API, but
> creating a resource class instance *does not* result in immediate creation of
> the external unit. The CRDs that represent resource classes on a provider are
> installed with it.
Create a file named `cloud-memorystore-class.yaml` with the following content:
```yaml
apiVersion: cache.gcp.crossplane.io/v1beta1
kind: CloudMemorystoreInstanceClass
metadata:
  name: cms-class
  labels:
    guide: quickstart
specTemplate:
  providerRef:
    name: gcp-provider
  writeConnectionSecretsToNamespace: crossplane-system
  reclaimPolicy: Delete
  forProvider:
    tier: STANDARD_HA
    region: us-west2
    memorySizeGb: 1
```
> *Note: similar to a managed resource, there is no namespace defined on our
> configuration for the `CloudMemorystoreInstanceClass` above because resource
> classes are
> [cluster-scoped].*
You will notice that this looks very similar to the `CloudMemorystoreInstance`
we created in the previous example. It has the same configuration for the
external Cloud Memorystore instance, but contains a `specTemplate` instead of
a `spec`. The `specTemplate` will be used later to create a
`CloudMemorystoreInstance` with the same configuration as the one in the
static provisioning example.
Create the `CloudMemorystoreInstanceClass`:
```
kubectl apply -f cloud-memorystore-class.yaml
```
There is nothing to observe yet; we have only published this configuration for
a Cloud Memorystore instance for later use. To actually create our
`CloudMemorystoreInstance`, we must create a `RedisCluster` claim that
references the `CloudMemorystoreInstanceClass`. A claim can reference a class
in two ways:
1. **Class Reference**: this is most similar to the way we referenced the
managed resource in the previous example. Instead of a `resourceRef` we can
provide a `classRef`:
```yaml
apiVersion: cache.crossplane.io/v1alpha1
kind: RedisCluster
metadata:
  name: redis-claim-dynamic
  namespace: cp-quickstart
spec:
  classRef:
    apiVersion: cache.gcp.crossplane.io/v1beta1
    kind: CloudMemorystoreInstanceClass
    name: cms-class
  writeConnectionSecretToRef:
    name: redis-connection-details-dynamic
```
2. **Class Selector**: this is a more general way to reference a class, and also
requires less knowledge of the underlying implementation by the claim
creator. You will notice that the `CloudMemorystoreInstanceClass` we created
above includes the `guide: quickstart` label. If we include that label in a
selector on the `RedisCluster` claim, the claim will be scheduled to a class
that has that label.
> *Note: if multiple classes have a label that is included in the claim's
> selector, one will be chosen at random.*
```yaml
apiVersion: cache.crossplane.io/v1alpha1
kind: RedisCluster
metadata:
  name: redis-claim-dynamic
  namespace: cp-quickstart
spec:
  classSelector:
    matchLabels:
      guide: quickstart
  writeConnectionSecretToRef:
    name: redis-connection-details-dynamic
```
Using a label selector means that the `RedisCluster` claim creator is not
concerned whether the claim is satisfied by a GCP
`CloudMemorystoreInstanceClass`, an AWS `ReplicationGroupClass`, an Azure
`RedisClass`, or other. It allows them to select an implementation based on its
traits, rather than its provider. Selecting by label is frequently used to
choose between a set of classes from the same provider, each with different
configuration. For instance, we could create three different
`CloudMemorystoreInstanceClass` with `memorySizeGb: 1`, `memorySizeGb: 5`,
`memorySizeGb: 10`, and label them `storage: small`, `storage: medium`, and
`storage: large`. Depending on our application requirements, we can then provide
a label selector that will find an appropriate implementation. Each of these
implementations will result in a Redis cluster being created and sufficient
connection details being propagated to the namespace of the claim.
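The small/medium/large pattern above can be sketched by generating the three classes from a template. The class names and `storage` labels here are illustrative, not from the original guide:

```shell
# Generate three hypothetical classes differing only in memorySizeGb and label.
for pair in small:1 medium:5 large:10; do
  label="${pair%%:*}"; gb="${pair##*:}"
  cat <<EOF
---
apiVersion: cache.gcp.crossplane.io/v1beta1
kind: CloudMemorystoreInstanceClass
metadata:
  name: cms-${label}
  labels:
    storage: ${label}
specTemplate:
  providerRef:
    name: gcp-provider
  writeConnectionSecretsToNamespace: crossplane-system
  reclaimPolicy: Delete
  forProvider:
    tier: STANDARD_HA
    region: us-west2
    memorySizeGb: ${gb}
EOF
done > cms-classes.yaml
grep -c 'kind: CloudMemorystoreInstanceClass' cms-classes.yaml   # one class per size
```

A claim with `classSelector.matchLabels.storage: large` would then land on the 10 GB class.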
Create a file named `redis-cluster-dynamic.yaml` with the content of the label
selector example above. Then, create the claim:
```
kubectl apply -f redis-cluster-dynamic.yaml
```
Because the `CloudMemorystoreInstanceClass` is the only Redis-compatible class
with the `guide: quickstart` label in our cluster, it is guaranteed to be used.
If you take a look at the `RedisCluster` claim, you should see that
`cms-class` is being used and a `CloudMemorystoreInstance` has been created
with the same configuration:
```
kubectl get -f redis-cluster-dynamic.yaml
```
```
NAME STATUS CLASS-KIND CLASS-NAME RESOURCE-KIND RESOURCE-NAME AGE
redis-claim-dynamic CloudMemorystoreInstanceClass cms-class CloudMemorystoreInstance cp-quickstart-redis-claim-dynamic-vp4dv 4s
```
You can also view the status of the `CloudMemorystoreInstance` as its external
resource is being created:
```
kubectl get cloudmemorystoreinstances
```
```
NAME STATUS STATE CLASS VERSION AGE
cp-quickstart-redis-claim-dynamic-vp4dv CREATING cms-class REDIS_4_0 25s
```
Once the `CloudMemorystoreInstance` becomes ready and bound, we find ourselves
in the same situation as at the conclusion of the static provisioning example.
However, we got here by separating the responsibilities of defining and
consuming infrastructure, and we can create as many `RedisCluster` claims as
we want with the reusable configuration defined in our
`CloudMemorystoreInstanceClass`.
Continue to the [next section] to learn how Crossplane can
schedule workloads to remote Kubernetes clusters!
## Clean Up
If you would like to clean up the resources created in this section, run the
following command:
```
kubectl delete -f redis-cluster-dynamic.yaml
```
<!-- Named Links -->
[previous example]: static.md
[cluster-scoped]: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#create-a-customresourcedefinition
[next section]: workload.md
---
title: Install
toc: true
weight: 1
indent: true
---
# Install Crossplane
Crossplane can be easily installed into any existing Kubernetes cluster using
the regularly published Helm chart. The Helm chart contains all the custom
resources and controllers needed to deploy and configure Crossplane.
## Pre-requisites
* [Kubernetes cluster]
* For example [Minikube], minimum version `v0.28+`
* [Helm], minimum version `v2.12.0+`.
* For Helm 2, make sure Tiller is initialized with sufficient permissions to
work in the `crossplane-system` namespace.
## Installation
Helm charts for Crossplane are currently published to the `alpha` and `master`
channels. In the future, `beta` and `stable` will also be available.
### Alpha
The alpha channel is the most recent release of Crossplane that is considered
ready for testing by the community.
Install with Helm 2:
```console
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane
```
Install with Helm 3:
If your Kubernetes version is lower than 1.15 and you'd like to install
Crossplane via Helm 3, you'll need Helm v3.1.0+ that has the flag
`--disable-openapi-validation`.
```console
kubectl create namespace crossplane-system
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
# Kubernetes 1.15 and newer versions
helm install crossplane --namespace crossplane-system crossplane-alpha/crossplane
# Kubernetes 1.14 and older versions
helm install crossplane --namespace crossplane-system crossplane-alpha/crossplane --disable-openapi-validation
```
### Master
The `master` channel contains the latest commits, with all automated tests
passing. `master` is subject to instability, incompatibility, and features may
be added or removed without much prior notice. It is recommended to use one of
the more stable channels, but if you want the absolute newest Crossplane
installed, then you can use the `master` channel.
To install the Helm chart from master, you will need to pass the specific
version returned by the `search` command:
Install with Helm 2:
```console
helm repo add crossplane-master https://charts.crossplane.io/master/
helm search crossplane --devel
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version <version>
```
For example:
```console
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version 0.0.0-249.637ccf9
```
Install with Helm 3:
If your Kubernetes version is lower than 1.15 and you'd like to install
Crossplane via Helm 3, you'll need Helm v3.1.0+.
```console
kubectl create namespace crossplane-system
helm repo add crossplane-master https://charts.crossplane.io/master/
helm search repo crossplane --devel
# Kubernetes 1.15 and newer versions
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --devel
# Kubernetes 1.14 and older versions
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --devel --disable-openapi-validation
```
## Installing Infrastructure Providers
You can add additional functionality to Crossplane's control plane by installing
`providers`. For example, each supported cloud provider has its own
corresponding Crossplane `provider` that contains all the functionality for that
particular cloud. After a cloud provider's infrastructure `provider` is
installed, you will be able to provision and manage resources within that cloud
from Crossplane.
### Installation with Helm
You can include the deployment of additional infrastructure providers in your
Helm installation by setting `clusterStacks.<provider-name>.deploy` to `true`.
For example, the following will install the `master` version of the GCP stack.
Using Helm 2:
```console
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --set clusterStacks.gcp.deploy=true --set clusterStacks.gcp.version=master
```
Using Helm 3:
```console
kubectl create namespace crossplane-system
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --set clusterStacks.gcp.deploy=true --set clusterStacks.gcp.version=master --devel
```
See [helm configuration parameters](#configuration) for supported stacks and
parameters.
### Manual Installation
After Crossplane has been installed, it is possible to extend Crossplane's
functionality by installing Crossplane providers.
#### GCP Provider
To get started with Google Cloud Platform (GCP), create a file named
`provider-gcp.yaml` with the following content:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: gcp
---
apiVersion: stacks.crossplane.io/v1alpha1
kind: ClusterStackInstall
metadata:
  name: provider-gcp
  namespace: gcp
spec:
  package: "crossplane/provider-gcp:v0.9.0"
```
Then you can install the GCP provider into Crossplane in the `gcp` namespace
with the following command:
```console
kubectl apply -f provider-gcp.yaml
```
#### AWS Provider
To get started with Amazon Web Services (AWS), create a file named
`provider-aws.yaml` with the following content:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: aws
---
apiVersion: stacks.crossplane.io/v1alpha1
kind: ClusterStackInstall
metadata:
  name: provider-aws
  namespace: aws
spec:
  package: "crossplane/provider-aws:v0.9.0"
```
Then you can install the AWS provider into Crossplane in the `aws` namespace
with the following command:
```console
kubectl apply -f provider-aws.yaml
```
#### Azure Provider
To get started with Microsoft Azure, create a file named `provider-azure.yaml`
with the following content:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: azure
---
apiVersion: stacks.crossplane.io/v1alpha1
kind: ClusterStackInstall
metadata:
  name: provider-azure
  namespace: azure
spec:
  package: "crossplane/provider-azure:v0.9.0"
```
Then you can install the Azure provider into Crossplane in the `azure` namespace
with the following command:
```console
kubectl apply -f provider-azure.yaml
```
#### Rook Provider
To get started with Rook, create a file named `provider-rook.yaml` with the
following content:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: rook
---
apiVersion: stacks.crossplane.io/v1alpha1
kind: ClusterStackInstall
metadata:
  name: provider-rook
  namespace: rook
spec:
  package: "crossplane/provider-rook:v0.6.0"
```
Then you can install the Rook provider into Crossplane in the `rook` namespace
with the following command:
```console
kubectl apply -f provider-rook.yaml
```
### Uninstalling Infrastructure Providers
An infrastructure provider can be uninstalled by deleting its resources from
the cluster with commands similar to those shown below. **Note** that
this will also **delete** any resources that Crossplane has provisioned in the
cloud provider if their `ReclaimPolicy` is set to `Delete`.
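If you want the external resources to survive deletion of their managed resources, you can set the reclaim policy to `Retain` before deleting the provider. A sketch, shown for the Cloud Memorystore example used elsewhere in these docs (adjust the kind and name for your own resources):

```yaml
# Hypothetical snippet: setting reclaimPolicy to Retain on a managed resource
# so the external Cloud Memorystore instance is left in place when the
# Kubernetes object is deleted.
apiVersion: cache.gcp.crossplane.io/v1beta1
kind: CloudMemorystoreInstance
metadata:
  name: example-cloudmemorystore-instance
spec:
  reclaimPolicy: Retain
  # ...remaining spec unchanged
```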
After you have ensured that you are completely done with all your cloud provider
resources, you can then run one of the commands below, depending on which cloud
provider you are removing, to remove its provider from Crossplane:
#### Uninstalling GCP
```console
kubectl delete -f provider-gcp.yaml
```
#### Uninstalling AWS
```console
kubectl delete -f provider-aws.yaml
```
#### Uninstalling Azure
```console
kubectl delete -f provider-azure.yaml
```
#### Uninstalling Rook
```console
kubectl delete -f provider-rook.yaml
```
## Uninstalling the Chart
To uninstall/delete the `crossplane` deployment:
```console
helm delete --purge crossplane
```
That command removes all Kubernetes components associated with Crossplane,
including all the custom resources and controllers.
## Configuration
The following table lists the configurable parameters of the Crossplane chart
and their default values.
| Parameter | Description | Default |
| -------------------------------- | --------------------------------------------------------------- | ------------------------------------------------------ |
| `image.repository` | Image | `crossplane/crossplane` |
| `image.tag` | Image tag | `master` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `imagePullSecrets` | Names of image pull secrets to use | `dockerhub` |
| `replicas` | The number of replicas to run for the Crossplane operator | `1` |
| `deploymentStrategy` | The deployment strategy for the Crossplane operator | `RollingUpdate` |
| `clusterStacks.aws.deploy` | Deploy AWS stack | `false`
| `clusterStacks.aws.version` | AWS provider version to deploy | `<latest released version>`
| `clusterStacks.gcp.deploy` | Deploy GCP stack | `false`
| `clusterStacks.gcp.version` | GCP provider version to deploy | `<latest released version>`
| `clusterStacks.azure.deploy` | Deploy Azure stack | `false`
| `clusterStacks.azure.version` | Azure provider version to deploy | `<latest released version>`
| `clusterStacks.rook.deploy` | Deploy Rook stack | `false`
| `clusterStacks.rook.version` | Rook provider version to deploy | `<latest released version>`
| `personas.deploy` | Install roles and bindings for Crossplane user personas | `true`
| `templateStacks.enabled` | Enable experimental template stacks support | `true`
| `templateStacks.controllerImage` | Template Stack controller image | `crossplane/templating-controller:v0.2.1`
| `resourcesCrossplane.limits.cpu` | CPU resource limits for Crossplane | `100m`
| `resourcesCrossplane.limits.memory` | Memory resource limits for Crossplane | `512Mi`
| `resourcesCrossplane.requests.cpu` | CPU resource requests for Crossplane | `100m`
| `resourcesCrossplane.requests.memory` | Memory resource requests for Crossplane | `256Mi`
| `resourcesStackManager.limits.cpu` | CPU resource limits for StackManager | `100m`
| `resourcesStackManager.limits.memory` | Memory resource limits for StackManager | `512Mi`
| `resourcesStackManager.requests.cpu` | CPU resource requests for StackManager | `100m`
| `resourcesStackManager.requests.memory` | Memory resource requests for StackManager | `256Mi`
| `forceImagePullPolicy` | Force the named ImagePullPolicy on Stack install and containers | ``
| `insecureAllowAllApigroups`      | Enable core Kubernetes API group permissions for Stacks. When enabled, Stacks may declare dependency on core Kubernetes API types. | `false` |
| `insecurePassFullDeployment` | Enable stacks to pass their full deployment, including security context. When omitted, Stacks deployments will have security context removed and all containers will have `allowPrivilegeEscalation` set to false. | `false` |
### Command Line
You can pass the settings with helm command line parameters. Specify each
parameter using the `--set key=value[,key=value]` argument to `helm install`.
For example, the following command will install Crossplane with an image pull
policy of `IfNotPresent`.
```console
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane --set image.pullPolicy=IfNotPresent
```
### Settings File
Alternatively, a yaml file that specifies the values for the above parameters
(`values.yaml`) can be provided while installing the chart.
```console
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane -f values.yaml
```
Here are the sample settings to get you started.
```yaml
replicas: 1
deploymentStrategy: RollingUpdate
image:
  repository: crossplane/crossplane
  tag: master
  pullPolicy: Always
imagePullSecrets:
- dockerhub
```
<!-- Named Links -->
[Kubernetes cluster]: https://kubernetes.io/docs/setup/
[Minikube]: https://kubernetes.io/docs/tasks/tools/install-minikube/
[Helm]: https://docs.helm.sh/using_helm/
---
title: Stacks
toc: true
weight: 6
indent: true
---
# Defining Infrastructure Environments with Stacks
Though defining your infrastructure as reusable classes and being able to
dynamically provision resources with the same configuration makes for a much
better workflow, it is still a fair amount of work to set up a service catalog,
especially when secure connectivity is required between numerous infrastructure
components. For example, a workload in a remote Kubernetes cluster may want to
communicate with a database that is not exposed over the public internet.
Depending on the provider being used, this can involve VPCs, subnetworks,
security groups and more. Frequently, setting up these networking components is
a repeated task that may take place in multiple regions and accounts.
Crossplane *Stacks* allow you to group a collection of managed resources and
classes into a single unit that can be installed into your cluster as a
[CustomResourceDefinition]. Let's take a look at installing a minimal stack for
commonly used GCP resources.
## Installing and Using the GCP Sample Stack
[GCP Sample Stack] is a Crossplane stack that includes the following managed
resources:
* `Network`
* `Subnetwork`
* `GlobalAddress`
* `Connection`
Because these are managed resources, they will be created immediately (i.e.
static provisioning). The following resource classes will also be created. They
are configured with references to the networking resources above, so when we
dynamically provision resources using these classes they will be created in the
provisioned `Network`, `Subnetwork`, etc.
* `GKEClusterClass`
* `CloudSQLInstanceClass`
* `CloudMemorystoreInstanceClass`
The GCP sample stack will also create a `Provider` resource for us, so we can go
ahead and delete the one we have been using:
```
kubectl delete provider.gcp.crossplane.io gcp-provider
```
Now, similar to how we installed the GCP provider at the beginning, we can
install a Crossplane stack with a `ClusterStackInstall`. Create a file named
`stack-gcp-sample.yaml` with the following content:
```yaml
apiVersion: stacks.crossplane.io/v1alpha1
kind: ClusterStackInstall
metadata:
  name: stack-gcp-sample
  namespace: crossplane-system
spec:
  package: crossplane/stack-gcp-sample:master
```
Then create it in your cluster:
```
kubectl apply -f stack-gcp-sample.yaml
```
Creating this resource does not actually cause any of the resources listed above
to be created. Instead, it creates a `CustomResourceDefinition` in your cluster
that allows you to repeatedly create instances of the environment defined by the
stack. To create an instance of the GCP sample stack, create a file named
`my-gcp.yaml` with the following content:
```yaml
apiVersion: gcp.stacks.crossplane.io/v1alpha1
kind: GCPSample
metadata:
  name: my-gcp
spec:
  region: us-west2
  projectID: <your-project-id> # replace with the project ID you created your Provider with earlier
  credentialsSecretRef:
    name: gcp-account-creds
    namespace: crossplane-system
    key: credentials
```
Then create the instance:
```
kubectl apply -f my-gcp.yaml
```
Crossplane will immediately create the managed resources and classes that are
part of the GCP sample stack.
Now that we have a general set of infrastructure and classes defined in our
cluster, it is time to deploy some applications. In the [previous section], we
bundled resources into a `KubernetesApplication` and created it in the control
cluster. This is useful for applications that are deployed infrequently and are
not widely distributed, but can be a burden for someone who is not familiar with
the architecture to manage. In the [next section] we will explore how Crossplane
*Applications* make it possible to package and distribute your configuration,
including managed services that an application consumes.
## Clean Up
The resources created in this section will be used in the next one as well, but
if you do not intend to go through the next section and would like to clean up
the resources created in this section, run the following command:
```
kubectl delete -f my-gcp.yaml
```
<!-- Named Links -->
[CustomResourceDefinition]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[GCP Sample Stack]: https://github.com/crossplane/stack-gcp-sample
[previous section]: workload.md
[next section]: app.md
---
title: Static Provisioning
toc: true
weight: 3
indent: true
---
# Static Provisioning
Crossplane supports provisioning resources *statically* and *dynamically*.
Static provisioning is the traditional manner of creating new infrastructure
using an infrastructure as code tool. To statically provision a resource, you
provide all necessary configuration and Crossplane simply takes your
configuration and submits it to the cloud provider.
Where Crossplane differs from an infrastructure as code tool is that it
continues to manage your resources after they are created. Let's take a look at
a simple example. We will use GCP for this quick start, but you can achieve the
same functionality with any of the providers mentioned in the [installation] and
[configuration] sections. You should have your provider of choice installed and
should have created a `Provider` resource with the necessary credentials. We
will use a [GCP `Provider`] resource with name `gcp-provider` below.
## Statically Provision a Redis Cluster on GCP
GCP provides Redis clusters using [Cloud Memorystore]. The GCP Crossplane
provider installs a `CloudMemorystoreInstance` [CustomResourceDefinition] (CRD)
which makes the API type available in your Kubernetes cluster. Creating an
instance of this CRD will result in the creation of a corresponding Cloud
Memorystore instance on GCP. CRDs like `CloudMemorystoreInstance` are referred
to as **Managed Resources** in Crossplane.
> **Managed Resource**: a cluster-scoped custom resource that represents an
> external unit of infrastructure. The fields of a managed resource CRD map
> 1-to-1 with the fields exposed by the provider's API, and creation of a
> managed resource results in immediate creation of the external unit. The CRDs
> that represent managed resources on a provider are installed with it.
The fields available on the `CloudMemorystoreInstance` CRD match the ones
exposed by GCP, so you can configure it however you see fit.
Create a file named `cloud-memorystore.yaml` with the following content:
```yaml
apiVersion: cache.gcp.crossplane.io/v1beta1
kind: CloudMemorystoreInstance
metadata:
  name: example-cloudmemorystore-instance
spec:
  providerRef:
    name: gcp-provider
  writeConnectionSecretToRef:
    name: example-cloudmemorystore-connection-details
    namespace: crossplane-system
  reclaimPolicy: Delete
  forProvider:
    tier: STANDARD_HA
    region: us-west2
    memorySizeGb: 1
```
> *Note: there is no namespace defined on our configuration for the
> `CloudMemorystoreInstance` above because it is [cluster-scoped].*
Now, create a `CloudMemorystoreInstance` in your cluster with the following
command:
```
kubectl apply -f cloud-memorystore.yaml
```
The GCP provider controllers will see the creation of this
`CloudMemorystoreInstance` and subsequently create it on GCP. You can log in to
the GCP console to view the status of the resource, but Crossplane will also
propagate the status back to the `CloudMemorystoreInstance` object itself. This allows
you to manage your infrastructure without ever leaving `kubectl`.
```
kubectl describe -f cloud-memorystore.yaml
```
```
Name:         example-cloudmemorystore-instance
Namespace:
Labels:       <none>
Annotations:  crossplane.io/external-name: example-cloudmemorystore-instance
              kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"cache.gcp.crossplane.io/v1beta1","kind":"CloudMemorystoreInstance","metadata":{"annotations":{},"name":"example-cloudmemory...
API Version:  cache.gcp.crossplane.io/v1beta1
Kind:         CloudMemorystoreInstance
Metadata:
  Creation Timestamp:  2020-03-23T19:28:14Z
  Finalizers:
    finalizer.managedresource.crossplane.io
  Generation:        2
  Resource Version:  1476
  Self Link:         /apis/cache.gcp.crossplane.io/v1beta1/cloudmemorystoreinstances/example-cloudmemorystore-instance
  UID:               68be2036-4716-4c82-be5c-7923f1f8d6b1
Spec:
  For Provider:
    Alternative Location Id:  us-west2-a
    Authorized Network:       projects/crossplane-playground/global/networks/default
    Location Id:              us-west2-b
    Memory Size Gb:           1
    Redis Version:            REDIS_4_0
    Region:                   us-west2
    Tier:                     STANDARD_HA
  Provider Ref:
    Name:          gcp-provider
  Reclaim Policy:  Delete
  Write Connection Secret To Ref:
    Name:       example-cloudmemorystore-connection-details
    Namespace:  crossplane-system
Status:
  At Provider:
    Create Time:               2020-03-23T19:28:16Z
    Name:                      projects/crossplane-playground/locations/us-west2/instances/example-cloudmemorystore-instance
    Persistence Iam Identity:  serviceAccount:651413264395-compute@developer.gserviceaccount.com
    Port:                      6379
    State:                     CREATING
  Conditions:
    Last Transition Time:  2020-03-23T19:28:14Z
    Reason:                Successfully resolved resource references to other resources
    Status:                True
    Type:                  ReferencesResolved
    Last Transition Time:  2020-03-23T19:28:14Z
    Reason:                Resource is being created
    Status:                False
    Type:                  Ready
    Last Transition Time:  2020-03-23T19:28:17Z
    Reason:                Successfully reconciled resource
    Status:                True
    Type:                  Synced
Events:
  Type    Reason                   Age  From                                                       Message
  ----    ------                   ---- ----                                                       -------
  Normal  CreatedExternalResource  14s  managed/cloudmemorystoreinstance.cache.gcp.crossplane.io  Successfully requested creation of external resource
```
When the resource is done provisioning on GCP, you should see the `State` turn
from `CREATING` to `READY`, and a corresponding event will be emitted. At this
point, Crossplane will create a `Secret` that contains any connection
information for the external resource. The `Secret` is created in the location
we specified in our configuration:
```yaml
writeConnectionSecretToRef:
  name: example-cloudmemorystore-connection-details
  namespace: crossplane-system
```
It will take some time to provision, but once the `CloudMemorystoreInstance` is
ready, take a look at the contents of the `Secret`:
```
kubectl -n crossplane-system describe secret example-cloudmemorystore-connection-details
```
```
Name: example-cloudmemorystore-connection-details
Namespace: crossplane-system
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
endpoint: 14 bytes
port: 4 bytes
```
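The byte counts shown above are the decoded lengths of the values; the data itself is base64 encoded, as in any Kubernetes `Secret`. A sketch of reading a single key (the `jsonpath` expression is standard kubectl; the endpoint value below is made up for illustration):

```shell
# With a live cluster, fetch one key of the Secret and decode it:
#   kubectl -n crossplane-system get secret example-cloudmemorystore-connection-details \
#     -o jsonpath='{.data.endpoint}' | base64 -d
# The decoding step works on any base64 value:
encoded="MTAuMTI4LjAuNQ=="            # hypothetical endpoint, base64 encoded
printf '%s' "$encoded" | base64 -d    # prints 10.128.0.5
echo
```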
You will also see that the `CloudMemorystoreInstance` resource is still
reporting `Status: Unbound`. This is because we have not *claimed* it for usage
yet.
Crossplane follows a similar pattern to [Kubernetes persistent volumes]. When
you statically provision a resource in Crossplane, the external resource is also
created. However, when you want to use a resource, you create an
application-focused **claim** for it. In this case, we will create a
`RedisCluster` claim for the `CloudMemorystoreInstance`. Because we know exactly
which `CloudMemorystoreInstance` we want to use, we reference it directly from
the claim.
> **Claim**: a namespace-scoped custom resource that represents a claim for
> usage of a managed resource. Claims are abstract types, like `RedisCluster`,
> that have support across multiple providers. A claim may be satisfied by many
> different managed resource types. For instance, a `RedisCluster` can be
> satisfied by an instance of a GCP `CloudMemorystoreInstance`, an AWS
> `ReplicationGroup`, or an Azure `Redis`. It could also be satisfied by
> different configurations of a single resource type. For instance, you may have
> a large, medium, and small storage `CloudMemorystoreInstance`. When a claim
> becomes *bound* to a managed resource, any connection information from the
> managed resource (i.e. usernames, passwords, IP addresses, etc.) is propagated
> to the namespace of the claim.
First, let's create a `Namespace` for our claim:
```
kubectl create namespace cp-quickstart
```
Next, create a file named `redis-cluster-static.yaml` with the following
content:
```yaml
apiVersion: cache.crossplane.io/v1alpha1
kind: RedisCluster
metadata:
  name: redis-claim-static
  namespace: cp-quickstart
spec:
  resourceRef:
    apiVersion: cache.gcp.crossplane.io/v1beta1
    kind: CloudMemorystoreInstance
    name: example-cloudmemorystore-instance
  writeConnectionSecretToRef:
    name: redis-connection-details-static
```
Now, create `RedisCluster` claim in your cluster with the following command:
```
kubectl apply -f redis-cluster-static.yaml
```
You should see that the claim was created and is now bound:
```
kubectl get -f redis-cluster-static.yaml
```
```
NAME STATUS CLASS-KIND CLASS-NAME RESOURCE-KIND RESOURCE-NAME AGE
redis-claim-static Bound CloudMemorystoreInstance example-cloudmemorystore-instance 12s
```
You should also notice that the connection `Secret` we looked at earlier has now
been propagated to the namespace of our claim:
```
kubectl -n cp-quickstart get secrets
```
```
NAME TYPE DATA AGE
default-token-cnhfn kubernetes.io/service-account-token 3 74s
redis-connection-details-static Opaque 2 36s
```
We have now created and prepared an external managed service for usage using
only `kubectl`, but it was a fairly manual process that required familiarity
with the underlying Redis implementation (Cloud Memorystore). This can be made
easier with *[dynamic provisioning]*.
## Clean Up
If you would like to clean up the resources created in this section, run the
following command:
```
kubectl delete -f redis-cluster-static.yaml
```
<!-- Named Links -->
[installation]: install.md
[configuration]: configure.md
[GCP `Provider`]: cloud-providers/gcp/gcp-provider.md
[Cloud Memorystore]: https://cloud.google.com/memorystore
[CustomResourceDefinition]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[cluster-scoped]: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#create-a-customresourcedefinition
[Kubernetes persistent volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[dynamic provisioning]: dynamic.md
---
title: Scheduling Workloads
toc: true
weight: 5
indent: true
---
# Scheduling Workloads to Remote Clusters
In the previous two examples, we provisioned infrastructure that is consumed by
some form of application. However, many providers expose services that you run
your application on. The most obvious example of this type of service would be a
managed Kubernetes service, such as [GKE], [EKS], or [AKS]. Crossplane not only
provisions and manages these types of infrastructure, but also allows you to
schedule workloads to them.
In the case of a Kubernetes cluster, Crossplane lets you schedule to remote
Kubernetes clusters from a single *control cluster*. The remote cluster may have
been, but does not have to have been, *provisioned* by Crossplane. Importantly, each
remote cluster maintains its own *control plane*. Crossplane is only responsible
for sending configuration data to the remote cluster.
> **Control Cluster**: the Kubernetes cluster where Crossplane is installed. It
> may also be used to run workloads besides Crossplane controllers, but it is
> not required to do so.
> **Remote Cluster**: a Kubernetes cluster that Crossplane has access to and may
> schedule workloads to. A remote cluster may have been created from the
> Crossplane control cluster using a provider's managed Kubernetes service, or
> it may be an existing cluster whose connection information was imported into
> the control cluster.
## Provisioning a GKE Cluster and Scheduling a Workload to it
By this point, you are familiar with both dynamic and static provisioning. In
this example, we will dynamically provision a `GKECluster`, but will focus on
what happens after it is ready and bound.
Create a file named `gke-cluster-class.yaml` with the following content:
```yaml
apiVersion: compute.gcp.crossplane.io/v1alpha3
kind: GKEClusterClass
metadata:
  name: gkecluster-standard
  labels:
    guide: quickstart
specTemplate:
  writeConnectionSecretsToNamespace: crossplane-system
  machineType: n1-standard-1
  numNodes: 1
  zone: us-central1-b
  providerRef:
    name: gcp-provider
  reclaimPolicy: Delete
```
Create the `GKEClusterClass` resource in your cluster:
```
kubectl apply -f gke-cluster-class.yaml
```
Now create a file named `k8s-cluster.yaml` with the following content:
```yaml
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: k8scluster
  namespace: cp-quickstart
  labels:
    cluster: hello-world
spec:
  classSelector:
    matchLabels:
      guide: quickstart
  writeConnectionSecretToRef:
    name: k8scluster
```
Then create the `KubernetesCluster` claim in your cluster:
```
kubectl apply -f k8s-cluster.yaml
```
As before, a `GKECluster` managed resource should be created and its connection
information will be propagated to the `cp-quickstart` namespace when it is ready
and bound:
```
kubectl get -f k8s-cluster.yaml
```
```
NAME STATUS CLASS-KIND CLASS-NAME RESOURCE-KIND RESOURCE-NAME AGE
k8scluster GKEClusterClass gkecluster-standard GKECluster cp-quickstart-k8scluster-88426 36s
```
As you may have guessed, the connection information for a `KubernetesCluster`
claim contains [kubeconfig] information. Once the `KubernetesCluster` claim is
bound, you can view the contents of the `Secret` in the `cp-quickstart`
namespace:
```
kubectl -n cp-quickstart describe secret k8scluster
```
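If you want to interact with the remote cluster directly, you can decode the kubeconfig out of that `Secret` into a file. A sketch, assuming the data key is named `kubeconfig` (inspect the Secret to confirm which key your provider writes):

```shell
# With a live control cluster:
#   kubectl -n cp-quickstart get secret k8scluster \
#     -o jsonpath='{.data.kubeconfig}' | base64 -d > remote.kubeconfig
#   kubectl --kubeconfig remote.kubeconfig get nodes
# The decode step itself is plain base64:
sample="YXBpVmVyc2lvbjogdjEKa2luZDogQ29uZmln"  # "apiVersion: v1\nkind: Config", encoded
printf '%s' "$sample" | base64 -d
echo
```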
The `KubernetesCluster` claim also differs from other claim types in that when
it becomes bound, Crossplane automatically creates a `KubernetesTarget` that
references the connection secret in the same namespace. You can see the
`KubernetesTarget` that Crossplane created for this `KubernetesCluster` claim:
```
kubectl -n cp-quickstart get kubernetestargets
```
> *Note: a `KubernetesTarget` that is automatically created by Crossplane for a
> bound `KubernetesCluster` claim will have the same labels as the
> `KubernetesCluster` claim.*
To schedule workloads to remote clusters, Crossplane requires those resources to
be wrapped in a `KubernetesApplication`.
> **Kubernetes Application**: a custom resource that bundles other resources
> that are intended to be run on a remote Kubernetes cluster. Creating a
> `KubernetesApplication` will cause Crossplane to find a suitable
> `KubernetesTarget` and attempt to create the bundled resources on the
> referenced `KubernetesCluster` using its connection `Secret`.
We can start by bundling a simple hello world app with a `Namespace`,
`Deployment`, and `Service` for scheduling to our GKE cluster.
Create a file named `helloworld.yaml` with the following content:
```yaml
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesApplication
metadata:
name: helloworld
namespace: cp-quickstart
labels:
app: helloworld
spec:
resourceSelector:
matchLabels:
app: helloworld
targetSelector:
matchLabels:
cluster: hello-world
resourceTemplates:
- metadata:
name: helloworld-namespace
labels:
app: helloworld
spec:
template:
apiVersion: v1
kind: Namespace
metadata:
name: helloworld
labels:
app: helloworld
- metadata:
name: helloworld-deployment
labels:
app: helloworld
spec:
template:
apiVersion: apps/v1
kind: Deployment
metadata:
name: helloworld-deployment
namespace: helloworld
spec:
selector:
matchLabels:
app: helloworld
replicas: 1
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: hello-world
image: gcr.io/google-samples/node-hello:1.0
ports:
- containerPort: 8080
protocol: TCP
- metadata:
name: helloworld-service
labels:
app: helloworld
spec:
template:
kind: Service
metadata:
name: helloworld-service
namespace: helloworld
spec:
selector:
app: helloworld
ports:
- port: 80
targetPort: 8080
type: LoadBalancer
```
Create the `KubernetesApplication`:
```
kubectl apply -f helloworld.yaml
```
Crossplane will immediately attempt to find a compatible `KubernetesTarget` with
labels matching the stanza we included on our `KubernetesApplication`:
```
targetSelector:
  matchLabels:
    cluster: hello-world
```
Because we only have one `KubernetesTarget` with these labels in the
`cp-quickstart` namespace, the `KubernetesApplication` will be scheduled to the
GKE cluster we created earlier. You can view the progress of creating the
resources on the remote cluster by looking at the `KubernetesApplication` and
the resulting `KubernetesApplicationResources`:
```
kubectl -n cp-quickstart get kubernetesapplications
```
```
NAME CLUSTER STATUS DESIRED SUBMITTED
helloworld 92184b85-4db3-48d2-99a2-36b3cf81226e Scheduled 3
```
```
kubectl -n cp-quickstart get kubernetesapplicationresources
```
```
NAME TEMPLATE-KIND TEMPLATE-NAME CLUSTER STATUS
helloworld-deployment Deployment helloworld-deployment c1c435a3-8673-46d5-95bb-55cc5040a6fd Submitted
helloworld-namespace Namespace helloworld c1c435a3-8673-46d5-95bb-55cc5040a6fd Submitted
helloworld-service Service helloworld-service c1c435a3-8673-46d5-95bb-55cc5040a6fd Submitted
```
> *Note: each in-line template in a `KubernetesApplication` results in the
> creation of a corresponding `KubernetesApplicationResource`. Crossplane keeps
> the resources on the remote cluster in sync with their
> `KubernetesApplicationResource`, and keeps each respective
> `KubernetesApplicationResource` in sync with its template on the
> `KubernetesApplication`.*
When all three resources have been provisioned, the `KubernetesApplication` will
show a `3` in the `SUBMITTED` column. If you inspect the
`KubernetesApplication`, you should see the IP address of the `LoadBalancer`
`Service` in the remote cluster. If you navigate your browser to that address,
you should be greeted by the hello world application.
```
kubectl -n cp-quickstart describe kubernetesapplicationresource helloworld-service
```
```
Name:         helloworld-service
Namespace:    cp-quickstart
Labels:       app=helloworld
Annotations:  <none>
API Version:  workload.crossplane.io/v1alpha1
Kind:         KubernetesApplicationResource
Metadata:
  Creation Timestamp:  2020-03-23T22:29:16Z
  Finalizers:
    finalizer.kubernetesapplicationresource.workload.crossplane.io
  Generation:  2
  Owner References:
    API Version:           workload.crossplane.io/v1alpha1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  KubernetesApplication
    Name:                  helloworld
    UID:                   1f1808ad-2b82-47df-8e8f-40255511c20a
  Resource Version:  31969
  Self Link:         /apis/workload.crossplane.io/v1alpha1/namespaces/cp-quickstart/kubernetesapplicationresources/helloworld-service
  UID:               508c07d5-e3c8-4df2-9927-c320448db437
Spec:
  Target Ref:
    Name:  c1c435a3-8673-46d5-95bb-55cc5040a6fd
  Template:
    Kind:  Service
    Metadata:
      Name:       helloworld-service
      Namespace:  helloworld
    Spec:
      Ports:
        Port:         80
        Target Port:  8080
      Selector:
        App:  helloworld
      Type:   LoadBalancer
Status:
  Conditioned Status:
    Conditions:
      Last Transition Time:  2020-03-23T22:29:54Z
      Reason:                Successfully reconciled resource
      Status:                True
      Type:                  Synced
  Remote:
    Load Balancer:
      Ingress:
        Ip:  34.67.121.186  # the application is running at this IP address
  State:  Submitted
Events:   <none>
```
> *Note: Creating a cluster and scheduling resources to it is a nice workflow,
> but it is likely that you may want to schedule resources to an existing
> cluster or one that is not a managed service that Crossplane supports. This is
> made possible by storing the base64 encoded `kubeconfig` data in a `Secret`,
> then manually creating a `KubernetesTarget` to point at it. This is an
> advanced workflow, and additional information can be found in the
> [guide on manually adding clusters].*
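A minimal sketch of that manual workflow, with field names that follow the `workload.crossplane.io/v1alpha1` types used above (treat the exact schema as an assumption and consult the linked guide):

```yaml
# Hypothetical example: a Secret holding base64-encoded kubeconfig data for an
# existing cluster, and a KubernetesTarget that points Crossplane at it.
apiVersion: v1
kind: Secret
metadata:
  name: existing-cluster-kubeconfig
  namespace: cp-quickstart
type: Opaque
data:
  kubeconfig: <base64-encoded kubeconfig>
---
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesTarget
metadata:
  name: existing-cluster
  namespace: cp-quickstart
  labels:
    cluster: existing   # KubernetesApplications select targets by label
spec:
  connectionSecretRef:
    name: existing-cluster-kubeconfig
```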
## Clean Up
If you would like to clean up the resources created in this section, run the
following commands:
```
kubectl delete -f helloworld.yaml
kubectl delete -f k8s-cluster.yaml
```
<!-- Named Links -->
[GKE]: https://cloud.google.com/kubernetes-engine
[EKS]: https://aws.amazon.com/eks/
[AKS]: https://azure.microsoft.com/en-us/services/kubernetes-service/
[kubeconfig]: https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/
[guide on manually adding clusters]: ../guides/manually_adding_clusters.md
@ -1,27 +0,0 @@
---
title: Guides
toc: true
weight: 200
---
# Guides
Because Crossplane standardizes on the Kubernetes API, it integrates well
with many other projects. Below is a collection of guides and tutorials that
demonstrate how to use Crossplane with a variety of tools in the GitOps, service
mesh, and infrastructure provisioning spaces.
- [Connecting AWS managed services to your Argo CD pipeline with open source
Crossplane]
- [Using Crossplane to schedule workloads to Kubernetes clusters provisioned by
Cluster API]
- [Using managed services in your development workflow with Crossplane and
Okteto]
- [Installing Linkerd into remote clusters using Crossplane]
<!-- Named Links -->
[Connecting AWS managed services to your Argo CD pipeline with open source Crossplane]: https://aws.amazon.com/blogs/opensource/connecting-aws-managed-services-to-your-argo-cd-pipeline-with-open-source-crossplane/
[Using Crossplane to schedule workloads to Kubernetes clusters provisioned by Cluster API]: https://github.com/crossplane/tbs/tree/master/episodes/11/assets
[Using managed services in your development workflow with Crossplane and Okteto]: https://github.com/crossplane/tbs/tree/master/episodes/10/assets
[Installing Linkerd into remote clusters using Crossplane]: https://github.com/crossplane/tbs/tree/master/episodes/12/assets
@ -1,53 +0,0 @@
---
title: Manually Adding Existing Kubernetes Clusters
toc: true
weight: 201
indent: true
---
# Manually Adding Existing Kubernetes Clusters
As discussed in the section on [scheduling applications to remote clusters],
Crossplane allows you to import existing Kubernetes clusters for scheduling.
This can be done for any cluster for which you have a `kubeconfig`. Crossplane
will be given the same permissions on the remote cluster that are granted to
the user in your `kubeconfig`.
The first step is creating a `Secret` with the base64 encoded data of your
`kubeconfig`. This can be done with the following command (assumes data is in
`kubeconfig.yaml` and desired namespace is `cp-infra-ops`):
```
kubectl -n cp-infra-ops create secret generic myk8scluster --from-literal=kubeconfig=$(base64 -w 0 kubeconfig.yaml)
```
In order to use this cluster as a scheduling target, you must also create a
`KubernetesTarget` resource that references the `Secret`. Make sure to include
any labels that your `KubernetesApplication` resources will use to select this
target:
`myk8starget.yaml`
```
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesTarget
metadata:
name: myk8starget
namespace: cp-infra-ops
labels:
guide: infra-ops
spec:
connectionSecretRef:
name: myk8scluster
```
```
kubectl apply -f myk8starget.yaml
```
*Note: the `Secret` and `KubernetesTarget` must be in the same namespace.*
You can now create a `KubernetesApplication` in the `cp-infra-ops` namespace and
your remote cluster will be a scheduling option.
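To tie this together, a `KubernetesApplication` can select the target by the label applied above. The sketch below is illustrative, not a definitive manifest: the selector and template fields follow the pattern used in the quick start, and names like `hello` are hypothetical.

```yaml
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesApplication
metadata:
  name: hello
  namespace: cp-infra-ops
spec:
  resourceSelector:
    matchLabels:
      app: hello
  targetSelector:
    matchLabels:
      guide: infra-ops   # matches the label on the KubernetesTarget above
  resourceTemplates:
  - metadata:
      name: hello-namespace
      labels:
        app: hello
    spec:
      template:
        apiVersion: v1
        kind: Namespace
        metadata:
          name: hello
```

Because the `targetSelector` matches the `guide: infra-ops` label, the manually added cluster is now a valid scheduling option for this application.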
<!-- Named Links -->
[scheduling applications to remote clusters]: ../workload.md
@ -1,198 +0,0 @@
---
title: Packaging an Application
toc: true
weight: 202
indent: true
---
# Packaging an Application
In the quick start guide, we demonstrated how Wordpress can be installed as a
Crossplane `Application`. Now we want to learn more about how to package any
application in a similar fashion. The good news is that we can use common
Kubernetes configuration tools, such as [Helm] and [Kustomize], which you may
already be familiar with.
## Setting up a Repository
The required components of an application repository are minimal. For example,
the required components of the [Wordpress application] we deployed in the quick
start are the following:
```
├── Dockerfile
├── .registry
│   ├── app.yaml
│   ├── behavior.yaml
│   ├── icon.svg
│   └── resources
│   ├── wordpress.apps.crossplane.io_wordpressinstances.crd.yaml
│   ├── wordpressinstance.icon.svg
│   ├── wordpressinstance.resource.yaml
│   └── wordpressinstance.ui-schema.yaml
├── helm-chart
│   ├── Chart.yaml
│   ├── templates
│   │   ├── app.yaml
│   │   ├── cluster.yaml
│   │   └── database.yaml
│   └── values.yaml
```
Let's take a look at each component in-depth.
### Dockerfile
The Dockerfile is only responsible for copying the configuration directory
(`helm-chart/` in this case) and the `.registry` directory. You can likely use a
very similar Dockerfile across all of your applications:
```Dockerfile
FROM alpine:3.7
WORKDIR /
COPY helm-chart /helm-chart
COPY .registry /.registry
```
### .registry
The `.registry` directory informs Crossplane how to install your application. It
consists of the following:
**app.yaml** `[required]`
The `app.yaml` file is responsible for defining the metadata for an application,
such as name, version, and required permissions. The Wordpress `app.yaml` is a
good reference for available fields:
```yaml
# Human readable title of application.
title: Wordpress
overviewShort: Cloud portable Wordpress deployments behind managed Kubernetes and SQL services are demonstrated in this Crossplane Stack.
overview: |-
This Wordpress application uses a simple controller that uses Crossplane to orchestrate managed SQL services and managed Kubernetes clusters which are then used to run a Wordpress deployment.
A simple Custom Resource Definition (CRD) is provided allowing for instances of this Crossplane managed Wordpress Application to be provisioned with a few lines of yaml.
The Sample Wordpress Application is intended for demonstration purposes and should not be used to deploy production instances of Wordpress.
# Markdown description of this entry
readme: |-
### Create wordpresses
Before wordpresses will provision, the Crossplane control cluster must
be configured to connect to a provider (e.g. GCP, Azure, AWS).
Once a provider is configured, starting the process of creating a
Wordpress Application instance is easy.
cat <<EOF | kubectl apply -f -
apiVersion: wordpress.samples.apps.crossplane.io/v1alpha1
kind: WordpressInstance
metadata:
name: wordpressinstance-sample
EOF
The stack (and Crossplane) will take care of the rest.
# Maintainer names and emails.
maintainers:
- name: Daniel Suskin
email: daniel@upbound.io
# Owner names and emails.
owners:
- name: Daniel Suskin
email: daniel@upbound.io
# Human readable company name.
company: Upbound
# Type of package: Provider, Stack, or Application
packageType: Application
# Keywords that describe this application and help search indexing
keywords:
- "samples"
- "examples"
- "tutorials"
- "wordpress"
# Links to more information about the application (about page, source code, etc.)
website: "https://upbound.io"
source: "https://github.com/crossplane/app-wordpress"
# RBAC Roles will be generated permitting this stack to use all verbs on all
# resources in the groups listed below.
permissionScope: Namespaced
dependsOn:
- crd: "kubernetesclusters.compute.crossplane.io/v1alpha1"
- crd: "mysqlinstances.database.crossplane.io/v1alpha1"
- crd: "kubernetesapplications.workload.crossplane.io/v1alpha1"
- crd: "kubernetesapplicationresources.workload.crossplane.io/v1alpha1"
# License SPDX name: https://spdx.org/licenses/
license: Apache-2.0
```
**behavior.yaml** `[required]`
While the `app.yaml` is responsible for metadata, the `behavior.yaml` is
responsible for operations. It is where you tell Crossplane how to create
resources in the cluster when an instance of the [CustomResourceDefinition] that
represents your application is created. Take a look at the Wordpress
`behavior.yaml` for reference:
```yaml
source:
path: "helm-chart" # where the configuration data exists in Docker container
crd:
kind: WordpressInstance # the kind of the CustomResourceDefinition
apiVersion: wordpress.apps.crossplane.io/v1alpha1 # the apiVersion of the CustomResourceDefinition
engine:
type: helm3 # the configuration engine to be used (helm3 and kustomize are valid options)
```
**icon.svg**
The `icon.svg` file is a logo for your application.
**resources/** `[required]`
The `resources/` directory contains the CustomResourceDefinition (CRD) that
Crossplane watches to apply the configuration data you supply. For the Wordpress
application, this is `wordpress.apps.crossplane.io_wordpressinstances.crd.yaml`.
CRDs can be generated from `go` code using projects like [controller-tools], or
can be written by hand.
You can also supply metadata files for your CRD, which can be parsed by a user
interface. The files must match the name of the CRD kind for your application:
- `<your-kind>.icon.svg`: an image to be displayed for your application CRD
- `<your-kind>.resource.yaml`: a description of your application CRD
- `<your-kind>.ui-schema.yaml`: the configurable fields on your CRD that you
wish to be displayed in a UI
Crossplane will take these files and apply them as [annotations] on the
installed application. They can then be parsed by a user interface.
### Configuration Directory
The configuration directory contains the actual manifests for deploying your
application. In the case of Wordpress, this includes a `KubernetesApplication`
(`helm-chart/templates/app.yaml`), a `KubernetesCluster` claim
(`helm-chart/templates/cluster.yaml`), and a `MySQLInstance` claim
(`helm-chart/templates/database.yaml`). The configuration tool for the manifests
in the directory should match the `engine` field in your
`.registry/behavior.yaml`. The options for engines at this time are `helm3` and
`kustomize`. Crossplane will pass values from the `spec` of the application's
CRD as variables in the manifests. For instance, the `provisionPolicy` field in
the `spec` of the `WordpressInstance` CRD will be passed to the Helm chart
defined in the `helm-chart/` directory.
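For example, a user only needs to create an instance of the application's CRD with the desired `spec` values; the configured engine renders the configuration directory with them. The `provisionPolicy` value below is illustrative, not an exhaustive schema:

```yaml
apiVersion: wordpress.apps.crossplane.io/v1alpha1
kind: WordpressInstance
metadata:
  name: wordpressinstance-sample
spec:
  # Passed to the Helm chart in helm-chart/ as a template value.
  provisionPolicy: ProvisionNewCluster
```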
<!-- Named Links -->
[Helm]: https://helm.sh/
[Kustomize]: https://kustomize.io/
[Wordpress application]: https://github.com/crossplane/app-wordpress
[CustomResourceDefinition]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[controller-tools]: https://github.com/kubernetes-sigs/controller-tools
[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
Binary file not shown (before: 1.2 MiB)
Binary file not shown (before: 292 KiB)
Binary file not shown (before: 375 KiB)
@ -1,310 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 23.0.1, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 1312.19 279.51" style="enable-background:new 0 0 1312.19 279.51;" xml:space="preserve">
<style type="text/css">
.st0{clip-path:url(#SVGID_2_);fill:#F7D186;}
.st1{clip-path:url(#SVGID_4_);fill:#FF9234;}
.st2{clip-path:url(#SVGID_6_);enable-background:new ;}
.st3{clip-path:url(#SVGID_8_);}
.st4{clip-path:url(#SVGID_10_);}
.st5{clip-path:url(#SVGID_12_);fill:#FFCD3C;}
.st6{clip-path:url(#SVGID_14_);enable-background:new ;}
.st7{clip-path:url(#SVGID_16_);}
.st8{clip-path:url(#SVGID_18_);}
.st9{clip-path:url(#SVGID_20_);fill:#F3807B;}
.st10{clip-path:url(#SVGID_22_);enable-background:new ;}
.st11{clip-path:url(#SVGID_24_);}
.st12{clip-path:url(#SVGID_26_);}
.st13{clip-path:url(#SVGID_28_);fill:#35D0BA;}
.st14{clip-path:url(#SVGID_30_);fill:#D8AE64;}
.st15{clip-path:url(#SVGID_32_);fill:#004680;}
.st16{clip-path:url(#SVGID_34_);fill:#004680;}
.st17{clip-path:url(#SVGID_36_);fill:#004680;}
.st18{clip-path:url(#SVGID_38_);fill:#004680;}
.st19{clip-path:url(#SVGID_40_);fill:#004680;}
.st20{clip-path:url(#SVGID_42_);fill:#004680;}
.st21{clip-path:url(#SVGID_44_);fill:#004680;}
.st22{clip-path:url(#SVGID_46_);fill:#004680;}
.st23{clip-path:url(#SVGID_48_);fill:#004680;}
.st24{clip-path:url(#SVGID_50_);fill:#004680;}
</style>
<g>
<g>
<defs>
<path id="SVGID_1_" d="M115.47,94.13c-8.4,0-15.22,6.81-15.22,15.22v143.2c0,8.4,6.81,15.22,15.22,15.22s15.22-6.81,15.22-15.22
v-143.2C130.68,100.94,123.87,94.13,115.47,94.13"/>
</defs>
<clipPath id="SVGID_2_">
<use xlink:href="#SVGID_1_" style="overflow:visible;"/>
</clipPath>
<rect x="89.53" y="83.41" class="st0" width="51.87" height="195.07"/>
</g>
<g>
<defs>
<path id="SVGID_3_" d="M176.53,75.36c0.05-0.96,0.07-1.93,0.07-2.9c0-0.95-0.02-1.89-0.07-2.82
c-1.47-32.22-28.06-57.88-60.64-57.88S56.72,37.42,55.25,69.64c-0.04,0.94-0.07,1.88-0.07,2.82c0,1.04,0.03,2.07,0.08,3.09
c-0.02,0.5-0.08,1-0.08,1.51v99.64c0,19.06,15.59,34.65,34.65,34.65h52.14c19.06,0,34.65-15.59,34.65-34.65V77.07
C176.62,76.49,176.56,75.93,176.53,75.36"/>
</defs>
<clipPath id="SVGID_4_">
<use xlink:href="#SVGID_3_" style="overflow:visible;"/>
</clipPath>
<rect x="44.47" y="1.04" class="st1" width="142.87" height="221.04"/>
</g>
<g>
<defs>
<path id="SVGID_5_" d="M55.55,69.64c-0.04,0.93-0.06,1.87-0.06,2.82c0,1.04,0.02,2.07,0.08,3.09c-0.02,0.51-0.08,1-0.08,1.52
v99.64c0,19.05,15.59,34.64,34.64,34.64h52.14c19.06,0,34.65-15.59,34.65-34.64V77.07c0-0.58-0.06-1.14-0.09-1.71
c0.05-0.96,0.07-1.93,0.07-2.89c0-0.95-0.02-1.89-0.06-2.82c-1.47-32.22-28.06-57.88-60.64-57.88
C83.61,11.76,57.02,37.42,55.55,69.64z"/>
</defs>
<clipPath id="SVGID_6_">
<use xlink:href="#SVGID_5_" style="overflow:visible;"/>
</clipPath>
<g class="st2">
<g>
<defs>
<rect id="SVGID_7_" x="16.08" y="24.9" width="197.24" height="197.24"/>
</defs>
<clipPath id="SVGID_8_">
<use xlink:href="#SVGID_7_" style="overflow:visible;"/>
</clipPath>
<g class="st3">
<defs>
<rect id="SVGID_9_" x="9.23" y="92.99" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -54.1638 118.2926)" width="212.95" height="63.07"/>
</defs>
<clipPath id="SVGID_10_">
<use xlink:href="#SVGID_9_" style="overflow:visible;"/>
</clipPath>
<g class="st4">
<defs>
<rect id="SVGID_11_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_12_">
<use xlink:href="#SVGID_11_" style="overflow:visible;"/>
</clipPath>
<rect x="7.4" y="16.22" class="st5" width="216.62" height="216.62"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_13_" d="M55.55,69.64c-0.04,0.93-0.06,1.87-0.06,2.82c0,1.04,0.02,2.07,0.08,3.09c-0.02,0.51-0.08,1-0.08,1.52
v99.64c0,19.05,15.59,34.64,34.64,34.64h52.14c19.06,0,34.65-15.59,34.65-34.64V77.07c0-0.58-0.06-1.14-0.09-1.71
c0.05-0.96,0.07-1.93,0.07-2.89c0-0.95-0.02-1.89-0.06-2.82c-1.47-32.22-28.06-57.88-60.64-57.88
C83.61,11.76,57.02,37.42,55.55,69.64z"/>
</defs>
<clipPath id="SVGID_14_">
<use xlink:href="#SVGID_13_" style="overflow:visible;"/>
</clipPath>
<g class="st6">
<g>
<defs>
<rect id="SVGID_15_" x="-37.52" y="-28.7" width="207.96" height="207.96"/>
</defs>
<clipPath id="SVGID_16_">
<use xlink:href="#SVGID_15_" style="overflow:visible;"/>
</clipPath>
<g class="st7">
<defs>
<rect id="SVGID_17_" x="-40.95" y="35.1" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -33.3744 68.1028)" width="212.95" height="78.48"/>
</defs>
<clipPath id="SVGID_18_">
<use xlink:href="#SVGID_17_" style="overflow:visible;"/>
</clipPath>
<g class="st8">
<defs>
<rect id="SVGID_19_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_20_">
<use xlink:href="#SVGID_19_" style="overflow:visible;"/>
</clipPath>
<rect x="-48.24" y="-39.42" class="st9" width="227.51" height="227.51"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_21_" d="M55.55,69.64c-0.04,0.93-0.06,1.87-0.06,2.82c0,1.04,0.02,2.07,0.08,3.09c-0.02,0.51-0.08,1-0.08,1.52
v99.64c0,19.05,15.59,34.64,34.64,34.64h52.14c19.06,0,34.65-15.59,34.65-34.64V77.07c0-0.58-0.06-1.14-0.09-1.71
c0.05-0.96,0.07-1.93,0.07-2.89c0-0.95-0.02-1.89-0.06-2.82c-1.47-32.22-28.06-57.88-60.64-57.88
C83.61,11.76,57.02,37.42,55.55,69.64z"/>
</defs>
<clipPath id="SVGID_22_">
<use xlink:href="#SVGID_21_" style="overflow:visible;"/>
</clipPath>
<g class="st10">
<g>
<defs>
<rect id="SVGID_23_" x="61.1" y="69.92" width="197.24" height="197.24"/>
</defs>
<clipPath id="SVGID_24_">
<use xlink:href="#SVGID_23_" style="overflow:visible;"/>
</clipPath>
<g class="st11">
<defs>
<rect id="SVGID_25_" x="53.98" y="137.74" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -72.6974 163.0359)" width="212.95" height="63.07"/>
</defs>
<clipPath id="SVGID_26_">
<use xlink:href="#SVGID_25_" style="overflow:visible;"/>
</clipPath>
<g class="st12">
<defs>
<rect id="SVGID_27_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_28_">
<use xlink:href="#SVGID_27_" style="overflow:visible;"/>
</clipPath>
<rect x="52.14" y="60.96" class="st13" width="216.62" height="216.62"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_29_" d="M104.38,211.52l26.4,26.39V211.3C130.78,211.3,103.72,211.52,104.38,211.52"/>
</defs>
<clipPath id="SVGID_30_">
<use xlink:href="#SVGID_29_" style="overflow:visible;"/>
</clipPath>
<rect x="93.65" y="200.58" class="st14" width="47.85" height="48.06"/>
</g>
<g>
<defs>
<path id="SVGID_31_" d="M307.52,195.1c-38.8,0-70.21-31.6-70.21-70.41c0-38.6,31.4-70.21,70.21-70.21c20.2,0,39.6,8.8,52.81,24
c4.2,5,3.8,12.2-1,16.4c-4.8,4.4-12.2,3.8-16.4-1c-9-10.2-21.8-16-35.4-16c-25.8,0-47.01,21-47.01,46.8
c0,26,21.2,47.01,47.01,47.01c13.6,0,26.4-5.8,35.4-16c4.2-4.8,11.6-5.4,16.4-1c4.8,4.2,5.2,11.4,1,16.4
C347.12,186.3,327.72,195.1,307.52,195.1"/>
</defs>
<use xlink:href="#SVGID_31_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_32_">
<use xlink:href="#SVGID_31_" style="overflow:visible;"/>
</clipPath>
<rect x="226.59" y="43.77" class="st15" width="147.35" height="162.05"/>
</g>
<g>
<defs>
<path id="SVGID_33_" d="M438.53,98.89c0,6.4-5.2,11.6-11.8,11.6c-12.8,0-22.4,10.4-22.4,24.6v48.41c0,6.4-5.2,11.6-11.6,11.6
c-6.4,0-11.6-5.2-11.6-11.6V96.49c0-6.4,5.2-11.6,11.6-11.6c5.4,0,9.8,3.6,11.2,8.6c6.8-4,14.6-6.2,22.8-6.2
C433.33,87.29,438.53,92.49,438.53,98.89"/>
</defs>
<use xlink:href="#SVGID_33_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_34_">
<use xlink:href="#SVGID_33_" style="overflow:visible;"/>
</clipPath>
<rect x="370.4" y="74.17" class="st16" width="78.84" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_35_" d="M497.53,195.7c-30.4,0-55-24.8-55-55c0-30.4,24.6-55.21,55-55.21c30.4,0,55.21,24.8,55.21,55.21
C552.74,170.9,527.94,195.7,497.53,195.7 M497.53,108.69c-17.6,0-31.8,14.4-31.8,32c0,17.4,14.2,31.8,31.8,31.8
c17.6,0,31.8-14.4,31.8-31.8C529.34,123.09,515.14,108.69,497.53,108.69"/>
</defs>
<use xlink:href="#SVGID_35_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_36_">
<use xlink:href="#SVGID_35_" style="overflow:visible;"/>
</clipPath>
<rect x="431.81" y="74.77" class="st17" width="131.65" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_37_" d="M571.94,174.9c-2.8-5.8-0.2-12.8,5.6-15.4c6-2.8,12.8-0.2,15.4,5.6c1.6,3.2,6,6.8,13.8,6.8
c10.8,0,14.6-6.6,14.6-11c0-6-1.6-7.8-17.2-11.8c-7-1.6-14.2-3.4-20.4-7.4c-8.4-5.6-13-14-13-24.4c0-8.2,3.6-16.4,9.8-22.4
c6.6-6.4,15.8-10,26.2-10c14.8,0,27.41,7.2,32.8,19c2.8,5.8,0.2,12.6-5.6,15.4c-5.8,2.8-12.8,0.2-15.4-5.6
c-1.2-2.6-5-5.6-11.8-5.6c-9.2,0-12.6,5.8-12.6,9.2c0,4,0.8,5.6,15.6,9.4c13,3.2,34.8,8.6,34.8,34.2c0,8.6-3.6,17.2-10,23.6
c-5,4.8-13.8,10.6-27.8,10.6C590.94,195.1,577.54,187.3,571.94,174.9"/>
</defs>
<use xlink:href="#SVGID_37_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_38_">
<use xlink:href="#SVGID_37_" style="overflow:visible;"/>
</clipPath>
<rect x="560.02" y="74.17" class="st18" width="95.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_39_" d="M663.75,174.9c-2.8-5.8-0.2-12.8,5.6-15.4c6-2.8,12.8-0.2,15.4,5.6c1.6,3.2,6,6.8,13.8,6.8
c10.8,0,14.6-6.6,14.6-11c0-6-1.6-7.8-17.2-11.8c-7-1.6-14.2-3.4-20.4-7.4c-8.4-5.6-13-14-13-24.4c0-8.2,3.6-16.4,9.8-22.4
c6.6-6.4,15.8-10,26.2-10c14.81,0,27.41,7.2,32.8,19c2.8,5.8,0.2,12.6-5.6,15.4c-5.8,2.8-12.8,0.2-15.4-5.6
c-1.2-2.6-5-5.6-11.8-5.6c-9.2,0-12.6,5.8-12.6,9.2c0,4,0.8,5.6,15.6,9.4c13,3.2,34.8,8.6,34.8,34.2c0,8.6-3.6,17.2-10,23.6
c-5,4.8-13.8,10.6-27.8,10.6C682.75,195.1,669.35,187.3,663.75,174.9"/>
</defs>
<use xlink:href="#SVGID_39_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_40_">
<use xlink:href="#SVGID_39_" style="overflow:visible;"/>
</clipPath>
<rect x="651.83" y="74.17" class="st19" width="95.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_41_" d="M859.17,139.9c0,14.8-5,28.4-14.4,38.61c-9.8,10.6-23.2,16.6-38,16.6c-10.6,0-20.6-3.2-29-8.8v47.2
c0,6.4-5.4,11.6-11.8,11.6c-6.4,0-11.6-5.2-11.6-11.6V96.49c0-6.4,5.2-11.6,11.6-11.6c5.4,0,10.2,3.8,11.4,9
c8.6-5.8,18.8-9,29.4-9c14.8,0,28.2,5.8,38,16.4C854.17,111.49,859.17,125.29,859.17,139.9 M835.96,139.9
c0-18.4-12.2-31.8-29.2-31.8c-16.8,0-29,13.4-29,31.8c0,18.4,12.2,31.8,29,31.8C823.77,171.7,835.96,158.3,835.96,139.9"/>
</defs>
<use xlink:href="#SVGID_41_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_42_">
<use xlink:href="#SVGID_41_" style="overflow:visible;"/>
</clipPath>
<rect x="743.64" y="74.17" class="st20" width="126.25" height="181.65"/>
</g>
<g>
<defs>
<path id="SVGID_43_" d="M889.77,195.1c-6.4,0-11.6-5.2-11.6-11.6V66.29c0-6.4,5.2-11.6,11.6-11.6c6.4,0,11.8,5.2,11.8,11.6V183.5
C901.57,189.9,896.17,195.1,889.77,195.1"/>
</defs>
<use xlink:href="#SVGID_43_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_44_">
<use xlink:href="#SVGID_43_" style="overflow:visible;"/>
</clipPath>
<rect x="867.45" y="43.97" class="st21" width="44.84" height="161.85"/>
</g>
<g>
<defs>
<path id="SVGID_45_" d="M1025.38,96.49v87.01c0,6.4-5.2,11.6-11.6,11.6c-5.6,0-10.2-3.8-11.4-9c-8.4,5.8-18.6,9-29.4,9
c-14.8,0-28.2-5.8-38.01-16.6c-9.2-10-14.4-23.8-14.4-38.4c0-14.8,5.2-28.6,14.4-38.61c9.8-10.8,23.21-16.6,38.01-16.6
c10.8,0,21,3.2,29.4,9c1.2-5.2,5.8-9,11.4-9C1020.18,84.89,1025.38,90.09,1025.38,96.49 M1002.18,140.1c0-18.6-12.4-32-29.2-32
c-17,0-29.2,13.4-29.2,32c0,18.4,12.2,31.8,29.2,31.8C989.78,171.9,1002.18,158.5,1002.18,140.1"/>
</defs>
<use xlink:href="#SVGID_45_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_46_">
<use xlink:href="#SVGID_45_" style="overflow:visible;"/>
</clipPath>
<rect x="909.85" y="74.17" class="st22" width="126.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_47_" d="M1136.79,132.7v50.8c0,6.4-5.2,11.6-11.8,11.6c-6.4,0-11.6-5.2-11.6-11.6v-50.8
c0-11.8-6.6-24.6-21.4-24.6c-13.4,0-23.4,10.6-23.4,24.6v0.8v0.8v49.2c0,6.4-5.2,11.6-11.6,11.6c-6.4,0-11.6-5.2-11.6-11.6v-49.4
v-1.4V96.49c0-6.4,5.2-11.6,11.6-11.6c4.8,0,8.8,2.8,10.6,6.8c7-4.4,15.4-6.8,24.4-6.8
C1117.39,84.89,1136.79,105.49,1136.79,132.7"/>
</defs>
<use xlink:href="#SVGID_47_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_48_">
<use xlink:href="#SVGID_47_" style="overflow:visible;"/>
</clipPath>
<rect x="1034.66" y="74.17" class="st23" width="112.85" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_49_" d="M1207.2,196.1c-14.8,0-28.2-6.4-38.01-17.2c-9.4-10-14.4-23.81-14.4-38.4c0-31.61,22.2-55.21,51.4-55.21
c29.41,0,50.8,23.2,50.8,55.21c0,6.4-5.2,11.6-11.8,11.6h-65.41c4,12.2,14.4,20.8,27.4,20.8c7.83,0,14.48-1.65,19.23-6.21
c1.44-1.38,2.7-3.03,3.77-4.99c3.4-5.6,10.6-7.2,16-4c5.6,3.4,7.2,10.6,4,16C1241.2,189.9,1225.6,196.1,1207.2,196.1
M1179.59,128.7h52.61c-4-13.8-15-20.2-26-20.2C1195.4,108.49,1183.79,114.89,1179.59,128.7"/>
</defs>
<use xlink:href="#SVGID_49_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_50_">
<use xlink:href="#SVGID_49_" style="overflow:visible;"/>
</clipPath>
<rect x="1144.07" y="74.57" class="st24" width="123.65" height="132.25"/>
</g>
</g>
</svg>
Before: 14 KiB
@ -1,88 +0,0 @@
---
title: FAQs
toc: true
weight: 301
indent: true
---
# Frequently Asked Questions (FAQs)
### Where did the name Crossplane come from?
Crossplane is the fusing of cross-cloud control plane. We wanted to use a noun
that refers to the entity responsible for connecting different cloud providers
and acts as control plane across them. Cross implies “cross-cloud” and “plane”
brings in “control plane”.
### What's up with popsicle?
We believe in a multi-flavor cloud.
### Why is Upbound open sourcing this project? What are Upbounds monetization plans?
Upbound's mission is to create a more open cloud-computing platform, with more
choice and less lock-in. We believe Crossplane is an important step towards
this vision and that it's going to take a village to solve this problem. We
believe that the multicloud control plane is a new category of open source
software, and it will ultimately disrupt closed source and proprietary models.
Upbound aspires to be a commercial provider of a more open cloud-computing
platform.
### What kind of governance model will be used for Crossplane?
Crossplane will be an independent project, and we plan on making it a
community-driven project rather than a vendor-driven one. It will have an
independent brand, GitHub organization, and an open governance model. It will
not be tied to a single organization or individual.
### Will Crossplane be donated to an open source foundation?
We dont know yet. We are open to doing so but wed like to revisit this after
the project has gotten some end-user community traction.
### Does using multicloud mean you will use the lowest common denominator across clouds?
Not necessarily. There are numerous best-of-breed cloud offerings that run on
multiple clouds. For example, CockroachDB and ElasticSearch are world-class
implementations of platform software and run well on cloud providers. They
compete with managed services offered by a cloud provider. We believe that by
having an open control plane for them to integrate with, and providing a common
API, CLI and UI for all of these services, that more of these offerings will
exist and get first-class experience in the cloud.
### How are resources and claims related to PersistentVolumes in Kubernetes?
We modeled resource claims and classes after PersistentVolumes and
PersistentVolumeClaims in Kubernetes. We believe many of the lessons learned
from managing volumes in Kubernetes apply to managing resources within cloud
providers. One notable exception is that we avoided creating a plugin model
within Crossplane.
### How is workload scheduling related to pod scheduling in Kubernetes?
We modeled workload scheduling after the Pod scheduler in Kubernetes. We believe
many of the lessons learned from Pod scheduling apply to scheduling workloads
across cloud providers.
### Can I use Crossplane to consistently provision and manage multiple Kubernetes clusters?
Crossplane includes a portable API for Kubernetes clusters that will cover
common configuration such as node pools, auto-scalers, taints, admission
controllers, etc. These will be applied to the specific implementations within
the cloud providers like EKS, GKE, and AKS. We see the Kubernetes Cluster API as
something that will be used by administrators, not developers.
### Other attempts at building a higher level API on-top of a multitude of inconsistent lower level APIs have not been successful, will Crossplane not have the same issues?
We agree that building a consistent higher-level API on top of a multitude of
inconsistent lower-level APIs is well known to be fraught with peril (e.g.
dumbing down to the lowest common denominator, or resulting in an API so
loosely defined as to be impossible to practically develop real portable
applications on top of it).
Crossplane follows a different approach here. The portable API extracts the
pieces that are common across all implementations, and from the perspective of
the workload. The rest of the implementation details are captured in full
fidelity by the admin in resource classes. The combination of the two is what
results in a full configuration that can be deployed. We believe this to be a
reasonable tradeoff that avoids the lowest-common-denominator problem, while
still enabling portability.
@ -1,39 +0,0 @@
---
title: Learn More
toc: true
weight: 302
indent: true
---
# Learn More
If you have any questions, please drop us a note on [Crossplane Slack][join-crossplane-slack] or [contact us][contact-us]!
***Learn more about using Crossplane***
- [GitLab deploys into multiple clouds from kubectl using Crossplane](https://about.gitlab.com/2019/05/20/gitlab-first-deployed-kubernetes-api-to-multiple-clouds/)
- [CNCF Talks & Community Presentations](https://www.youtube.com/playlist?list=PL510POnNVaaZJj9OG6PbgsZvgYbhwJRyE)
- [Software Engineering Daily - Intro Podcast](https://softwareengineeringdaily.com/2019/01/02/crossplane-multicloud-control-plane-with-bassam-tabbara/)
- [Crossplane Architecture](https://docs.google.com/document/d/1whncqdUeU2cATGEJhHvzXWC9xdK29Er45NJeoemxebo/edit?usp=sharing)
- [Latest Design Docs](https://github.com/crossplane/crossplane/tree/master/design)
- [Roadmap](https://github.com/crossplane/crossplane/blob/master/ROADMAP.md)
***Writing Kubernetes controllers to extend Crossplane***
- [Keep the Space Shuttle Flying: Writing Robust Operators](https://www.youtube.com/watch?v=uf97lOApOv8)
- [Best practices for building Kubernetes Operators](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps)
- [Programming Kubernetes Book](https://www.oreilly.com/library/view/programming-kubernetes/9781492047094/)
- [Crossplane Reconciler Patterns](https://github.com/crossplane/crossplane/blob/master/design/design-doc-reconciler-patterns.md)
- [Contributor Guide](https://github.com/crossplane/crossplane/blob/master/CONTRIBUTING.md)
***Join the growing Crossplane community and get involved!***
- Join our [Community Slack](https://slack.crossplane.io/)!
- Submit an issue on [GitHub](https://github.com/crossplane/crossplane)
- Attend our bi-weekly [Community Meeting](https://github.com/crossplane/crossplane#community-meeting)
- Join our bi-weekly live stream: [The Binding Status](https://github.com/crossplane/tbs)
- Subscribe to our [YouTube Channel](https://www.youtube.com/channel/UC19FgzMBMqBro361HbE46Fw)
- Drop us a note on Twitter: [@crossplane_io](https://twitter.com/crossplane_io)
- Email us: [info@crossplane.io](mailto:info@crossplane.io)
<!-- Named links -->
[join-crossplane-slack]: https://slack.crossplane.io
[contact-us]: https://github.com/crossplane/crossplane#contact
@ -1,21 +0,0 @@
---
title: Reference
toc: true
weight: 300
---
# Overview
The reference documentation includes answers to frequently asked questions, information about similar projects, and links to resources that can help you learn more about Crossplane and Kubernetes. If you have additional information that you think would be valuable for the community, please feel free to [open a pull request]() and add it.
1. [Troubleshoot]
2. [Frequently Asked Questions]
3. [Learn More]
4. [Related Projects]
<!-- Named Links -->
[Troubleshoot]: troubleshoot.md
[Frequently Asked Questions]: faqs.md
[Learn More]: learn_more.md
[Related Projects]: related_projects.md
@ -1,96 +0,0 @@
---
title: Related Projects
toc: true
weight: 304
indent: true
---
# Related Projects
While there are many projects that address similar issues, none of them
encapsulate the full use case that Crossplane addresses. This list is not
exhaustive and is not meant to provide a deep analysis of the following
projects, but instead to motivate why Crossplane was created.
## Open Service Broker and Service Catalog
The [Open Service Broker] and the [Kubernetes Service Catalog] are able to
dynamically provision managed services in multiple cloud providers from
Kubernetes. As a result, they share similar goals with Crossplane. However, the
service broker model is not designed for workload portability, does not have a
good separation of concerns, and does not offer any integration with workload
and resource scheduling. Service brokers cannot span multiple cloud providers at
once.
## Kubernetes Federation
The [federation-v2] project offers a single control plane that can span multiple
Kubernetes clusters. It's being incubated in SIG Multicluster. Crossplane shares
some of the goals of managing multiple Kubernetes clusters and also the core
principles of creating a higher level control plane, scheduler and controllers
that span clusters. While the federation-v2 project is scoped to just Kubernetes
clusters, Crossplane supports non-container workloads, and orchestrating
resources that run as managed services including databases, message queues,
buckets, and others. The federation effort focuses on defining Kubernetes
objects that can be templatized, and propagated to other Kubernetes clusters.
Crossplane focuses on defining portable workload abstractions across cloud
providers and offerings. We have considered taking a dependency on the
federation-v2 work within Crossplane, although it's not clear at this point if
this would accelerate the Crossplane effort.
## AWS Service Operator
The [AWS Service Operator] is a recent project that implements a set of
Kubernetes controllers that are able to provision managed services in AWS. It
defines a set of CRDs for managed services like DynamoDB, and controllers that
can provision them via AWS CloudFormation. It is similar to Crossplane in that
it can provision managed services in AWS. Crossplane goes a lot further by
offering workload portability across multiple cloud providers, separation of
concerns, and a scheduler for workloads and resources.
## AWS CloudFormation, GCP Deployment Manager, and Others
These products offer a declarative model for deploying and provisioning
infrastructure in each of the respective cloud providers. They only work for one
cloud provider and do not solve the problem of workload portability. These
products are generally closed source, and offer little or no extensibility
points. We have considered using some of these products as a way to implement
resource controllers in Crossplane.
## Terraform
[Terraform] is a popular tool for provisioning infrastructure across cloud
providers. It offers a declarative configuration language with support for
templating, composability, referential integrity and dependency management.
Terraform can dynamically provision infrastructure and perform changes when the
tool is run by a human. Unlike Crossplane, Terraform does not support workload
portability across cloud providers, and does not have any active controllers
that can react to failures, or make changes to running infrastructure without
human intervention. Terraform attempts to solve multicloud at the tool level,
while Crossplane is at the API and control plane level. Terraform is open source
under an MPL license, and follows an open-core business model, with a number of
its features closed source. We are evaluating whether we can use Terraform to
accelerate the development of resource controllers in Crossplane.
## Pulumi
[Pulumi] is a product that is based on Terraform and uses most of its providers.
Instead of using a configuration language, Pulumi uses popular programming
languages like Typescript to capture the configuration. At runtime, Pulumi
generates a DAG of resources just like Terraform and applies it to cloud
providers. Pulumi has an early model for workload portability that is
implemented using language abstractions. Unlike Crossplane, it does not have any
active controllers that can react to failures, or make changes to running
infrastructure without human intervention, nor does it support workload
scheduling. Pulumi attempts to solve multicloud scenarios at the language level,
while Crossplane is at the API and control plane level. Pulumi is open source
under an Apache 2.0 license, but a number of features require using their SaaS offering.
<!-- Named Links -->
[Open Service Broker]: https://www.openservicebrokerapi.org/
[Kubernetes Service Catalog]: https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog/
[federation-v2]: https://github.com/kubernetes-sigs/federation-v2
[AWS Service Operator]: https://github.com/awslabs/aws-service-operator
[Terraform]: https://www.terraform.io/
[Pulumi]: https://www.pulumi.com/


@ -1,262 +0,0 @@
---
title: Troubleshoot
toc: true
weight: 303
indent: true
---
# Troubleshooting
* [Using the trace command]
* [Resource Status and Conditions]
* [Crossplane Logs]
* [Pausing Crossplane]
* [Deleting a Resource Hangs]
* [Host-Aware Resource Debugging]
## Using the trace command
The [Crossplane CLI] trace command provides a holistic view of a particular
object and its related objects to ease the debugging and troubleshooting
process. It finds the relevant Crossplane resources for a given one and provides
detailed information as well as an overview indicating what could be wrong.
Usage:
```
kubectl crossplane trace TYPE[.GROUP] NAME [-n| --namespace NAMESPACE] [--kubeconfig KUBECONFIG] [-o| --outputFormat dot]
```
Examples:
```
# Trace a KubernetesApplication
kubectl crossplane trace KubernetesApplication wordpress-app-83f04457-0b1b-4532-9691-f55cf6c0da6e -n app-project1-dev
# Trace a MySQLInstance
kubectl crossplane trace MySQLInstance wordpress-mysql-83f04457-0b1b-4532-9691-f55cf6c0da6e -n app-project1-dev
```
For more information, see [the trace command documentation].
## Resource Status and Conditions
Most Crossplane resources have a `status` section that can represent the current
state of that particular resource. Running `kubectl describe` against a
Crossplane resource will frequently give insightful information about its
condition. For example, to determine the status of a MySQLInstance resource
claim, run:
```shell
kubectl -n app-project1-dev describe mysqlinstance mysql-claim
```
This should produce output that includes:
```console
Status:
Conditions:
Last Transition Time: 2019-09-16T13:46:42Z
Reason: Managed claim is waiting for managed resource to become bindable
Status: False
Type: Ready
Last Transition Time: 2019-09-16T13:46:42Z
Reason: Successfully reconciled managed resource
Status: True
Type: Synced
```
Most Crossplane resources set exactly two condition types: `Ready` and `Synced`.
`Ready` represents the availability of the resource itself - whether it is
creating, deleting, available, unavailable, binding, etc. `Synced` represents
the success of the most recent attempt to 'reconcile' the _desired_ state of the
resource with its _actual_ state. The `Synced` condition is the first place you
should look when a Crossplane resource is not behaving as expected.
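The way these two conditions combine can be sketched with a small helper. This is a hypothetical illustration, not part of Crossplane or its CLI; it simply inspects a `status` stanza shaped like the `kubectl describe` output above:

```python
# Hypothetical helper (not part of Crossplane): inspect the status stanza
# of a Crossplane resource and report on its Ready/Synced conditions.
def get_condition(status, cond_type):
    """Return the condition entry of the given type, or None."""
    for cond in status.get("conditions", []):
        if cond.get("type") == cond_type:
            return cond
    return None

def is_healthy(status):
    """A resource is healthy when both Ready and Synced report "True"."""
    return all(
        get_condition(status, t) is not None
        and get_condition(status, t).get("status") == "True"
        for t in ("Ready", "Synced")
    )

# The claim from the describe output above: Synced, but not yet Ready.
claim_status = {
    "conditions": [
        {"type": "Ready", "status": "False",
         "reason": "Managed claim is waiting for managed resource to become bindable"},
        {"type": "Synced", "status": "True",
         "reason": "Successfully reconciled managed resource"},
    ]
}
print(is_healthy(claim_status))  # False
```

A claim in this state is reconciling correctly and only waiting to become bindable, which is why `Synced` is the first condition to check.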
## Crossplane Logs
The next place to look to get more information or investigate a failure would be
in the Crossplane pod logs, which should be running in the `crossplane-system`
namespace. To get the current Crossplane logs, run the following:
```shell
kubectl -n crossplane-system logs -lapp=crossplane
```
Remember that much of Crossplane's functionality is provided by Stacks. You can
use `kubectl logs` to view Stack logs too, though Stacks may not run in the
`crossplane-system` namespace.
## Pausing Crossplane
Sometimes, for example when you encounter a bug, it can be useful to pause
Crossplane if you want to stop it from actively attempting to manage your
resources. To pause Crossplane without deleting all of its resources, run the
following command to simply scale down its deployment:
```bash
kubectl -n crossplane-system scale --replicas=0 deployment/crossplane
```
Once you have been able to rectify the problem or smooth things out, you can
unpause Crossplane simply by scaling its deployment back up:
```bash
kubectl -n crossplane-system scale --replicas=1 deployment/crossplane
```
Remember that much of Crossplane's functionality is provided by Stacks. You can
use `kubectl scale` to pause Stack pods too, though Stacks may not run in the
`crossplane-system` namespace.
## Deleting a Resource Hangs
The resources that Crossplane manages will automatically be cleaned up so as not
to leave anything running behind. This is accomplished by using finalizers, but
in certain scenarios the finalizer can prevent the Kubernetes object from
getting deleted.
To deal with this, we essentially want to patch the object to remove its
finalizer, which will then allow it to be deleted completely. Note that this
won't necessarily delete the external resource that Crossplane was managing, so
you will want to go to your cloud provider's console and look there for any
lingering resources to clean up.
In general, a finalizer can be removed from an object with this command:
```console
kubectl patch <resource-type> <resource-name> -p '{"metadata":{"finalizers": []}}' --type=merge
```
For example, for a Workload object (`workloads.compute.crossplane.io`) named
`test-workload`, you can remove its finalizer with:
```console
kubectl patch workloads.compute.crossplane.io test-workload -p '{"metadata":{"finalizers": []}}' --type=merge
```
## Host-Aware Resource Debugging
Stack resources (including the Stack, service accounts, deployments, and jobs)
are usually easy to identify by name. These resource names are based on the name
used in the StackInstall or Stack resource.
### Resource Location
In a host-aware configuration, these resources may be divided between the host
and the tenant.
The host, which runs the Stack controller, does not need (or get) the CRDs used
by the Stack controller. The Stack controller connects to the tenant Kubernetes
API to watch the owned types of the Stack (which is why the CRDs are only
installed on the Tenant).
Kind | Name | Place
---- | ----- | ------
pod | crossplane | Host (ns: tenantFoo-system)
pod | stack-manager | Host (ns: tenantFoo-system)
job | (stack install job) | Host (ns: tenantFoo-controllers)
pod | (stack controller) | Host (ns: tenantFoo-controllers)

Kind | Name | Place
---- | ----- | ------
crd | Stack, SI, CSI | Tenant
Stack | wordpress | Tenant
StackInstall | wordpress | Tenant
crd | KubernetesEngine, MysqlInstance, ... | Tenant
crd | GKEInstance, CloudSQLInstance, ... | Tenant
(rbac) | (stack controller) | Tenant
(rbac) | (workspace owner, crossplane-admin) | Tenant
(rbac) | (stack:namespace:1.2.3:admin) | Tenant
crd | WordpressInstance | Tenant
WordpressInstance | wp-instance | Tenant
KubernetesApplication | wp-instance | Tenant

Kind | Name | Place
---- | ----- | ------
pod | wp-instance (from KubernetesApplication) | New Cluster
### Name Truncation
In some cases, the full name of a Stack resource, which could be up to 253
characters long, cannot be represented in the created resources. For example,
job and deployment names may not exceed 63 characters because these names are
turned into resource label values, which impose a 63-character limit.
Stack-created resources whose names would otherwise not be permitted in the
Kubernetes API will be truncated with a unique suffix.
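The truncation rule can be sketched as follows. This is a hypothetical reimplementation for illustration: the real unique suffix is generated internally by the Stack Manager, so a short hash stands in for it here.

```python
import hashlib

LABEL_MAX = 63  # Kubernetes label values may not exceed 63 characters

def truncate_with_suffix(name, limit=LABEL_MAX, suffix_len=5):
    """Truncate a name to fit `limit`, appending a unique suffix.

    Names that already fit are returned unchanged. (Sketch only; the
    Stack Manager's actual suffix scheme is internal.)
    """
    if len(name) <= limit:
        return name
    suffix = hashlib.sha1(name.encode()).hexdigest()[:suffix_len]
    return name[: limit - suffix_len - 1] + "-" + suffix

ns = "this-is-just-a-really-long-namespace-name-at-the-character-max"
name = "stack-with-a-really-long-resource-name-so-long-that-it-will-be-truncated"
host_name = truncate_with_suffix(ns + "." + name)
print(len(host_name))  # 63
```

The result keeps a recognizable prefix of the original name while guaranteeing it fits within the label-value limit, which matches the truncated names seen in the example below.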
When running the Stack Manager in host-aware mode, tenant stack resources
created in the host controller namespace generally reuse the Stack names:
"{tenant namespace}.{tenant name}". In order to avoid the name length
restrictions, these resources may be truncated at either or both of the
namespace segment or over the entire name. In these cases resource labels,
owner references, and annotations should be consulted to identify the
responsible Stack.
* [Relationship Labels]
* [Owner References]
* Annotations: `tenant.crossplane.io/{singular}-name` and
`tenant.crossplane.io/{singular}-namespace` (_singular_ may be `stackinstall`,
`clusterstackinstall` or `stack`)
#### Example
Long resource names may be present on the tenant.
```console
$ name=stack-with-a-really-long-resource-name-so-long-that-it-will-be-truncated
$ ns=this-is-just-a-really-long-namespace-name-at-the-character-max
$ kubectl create ns $ns
$ kubectl crossplane stack install --namespace $ns crossplane/sample-stack-wordpress:0.1.1 $name
```
When used as host resource names, the stack namespace and stack name are
concatenated to form host names, as well as labels. These resource names and label values
must be truncated to fit the 63 character limit on label values.
```console
$ kubectl --context=crossplane-host -n tenant-controllers get job -o yaml
apiVersion: v1
items:
- apiVersion: batch/v1
kind: Job
metadata:
annotations:
tenant.crossplane.io/stackinstall-name: stack-with-a-really-long-resource-name-so-long-that-it-will-be-truncated
tenant.crossplane.io/stackinstall-namespace: this-is-just-a-really-long-namespace-name-at-the-character-max
creationTimestamp: "2020-03-20T17:06:25Z"
labels:
core.crossplane.io/parent-group: stacks.crossplane.io
core.crossplane.io/parent-kind: StackInstall
core.crossplane.io/parent-name: stack-with-a-really-long-resource-name-so-long-that-it-wi-alqdw
core.crossplane.io/parent-namespace: this-is-just-a-really-long-namespace-name-at-the-character-max
core.crossplane.io/parent-uid: 596705e4-a28e-47c9-a907-d2732f07a85e
core.crossplane.io/parent-version: v1alpha1
name: this-is-just-a-really-long-namespace-name-at-the-characte-egoav
namespace: tenant-controllers
spec:
backoffLimit: 0
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: 8f290bf2-8c91-494a-a76b-27c2ccb9e0a8
template:
metadata:
creationTimestamp: null
labels:
controller-uid: 8f290bf2-8c91-494a-a76b-27c2ccb9e0a8
job-name: this-is-just-a-really-long-namespace-name-at-the-characte-egoav
...
```
<!-- Named Links -->
[Using the trace command]: #using-the-trace-command
[Resource Status and Conditions]: #resource-status-and-conditions
[Crossplane Logs]: #crossplane-logs
[Pausing Crossplane]: #pausing-crossplane
[Deleting a Resource Hangs]: #deleting-a-resource-hangs
[Host-Aware Resource Debugging]: #host-aware-resource-debugging
[Crossplane CLI]: https://github.com/crossplane/crossplane-cli
[the trace command documentation]: https://github.com/crossplane/crossplane-cli/tree/master/docs/trace-command.md
[Relationship Labels]: https://github.com/crossplane/crossplane/blob/master/design/one-pager-stack-relationship-labels.md
[Owner References]: https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents


@ -1,485 +0,0 @@
# Release Process
This document is meant to be a complete end-to-end guide for how to release new
versions of software for Crossplane and its related projects.
## tl;dr Process Overview
All the details are available in the sections below, but we'll start this guide
with a very high level sequential overview for how to run the release process.
1. **feature freeze**: Merge all completed features into master branches of all
repos to begin the "feature freeze" period
1. **API docs/user guides**: Regenerate API docs and update all user guides with
current content for scenarios included in the release
1. **release crossplane-runtime**: Tag and release a new version of
crossplane-runtime using the GitHub UI.
1. **pin crossplane dependencies**: Update the go modules of core crossplane in
master to depend on the newly released version of crossplane-runtime.
1. **pre-release tag crossplane**: Run tag pipeline to tag the start of
pre-releases in master in the crossplane repo
1. **branch crossplane**: Create a new release branch using the GitHub UI for
the crossplane repo
1. **crossplane release branch prep**: In Crossplane's release branch, update
all examples, docs, and integration tests to update references and versions,
including the yet to be released versions of providers and stacks.
1. **tag**: Run the tag pipeline to tag Crossplane's release branch with an
official semver
1. **release providers**: Run the release process for each **provider** that we
maintain
1. **pin dependencies**: Update the go modules of each provider repo to
depend on the new version of crossplane and crossplane-runtime.
1. **pre-release tag**: Run tag pipeline to tag the start of pre-releases in
**master** of each provider repo
1. **branch**: Create a new release branch using the GitHub UI for the
provider repo
1. **release branch prep**: In the release branch, update all examples,
docs, and integration tests to update references and versions
1. **test**: Test builds from the release branch, fix any critical bugs that
are found
1. **tag**: Run the tag pipeline to tag the release branch with an official
semver
1. **build/publish**: Run build pipeline to publish build with official
semver
1. **release template stacks**: Run the release process for each **template
stack** that we maintain. Note that the process for template stacks is
slightly different from the stack release process.
1. **test**: Test builds from the release branch (typically `master`), fix
any critical bugs that are found
1. **version**: Update all version information in the docs, as appropriate
1. **tag**: Run the tag pipeline to tag the release branch with an official
semver
1. **build/publish**: Run the publish pipeline to publish build with
official semver
1. **build/publish**: Run build pipeline to publish Crossplane build from
release branch with official semver
1. **verify**: Verify all artifacts have been published successfully, perform
sanity testing
1. **promote**: Run promote pipelines on all repos to promote releases to
desired channel(s)
1. **release notes**: Publish well authored and complete release notes on GitHub
1. **announce**: Announce the release on Twitter, Slack, etc.
## Detailed Process
This section will walk through the release process in more fine grained and
prescriptive detail.
### Scope
This document will cover the release process for all of the repositories that
the Crossplane team maintains and publishes regular versioned artifacts from.
This set of repositories covers both core Crossplane and the set of Providers,
Stacks, and Apps that Crossplane currently maintains:
* [`crossplane`](https://github.com/crossplane/crossplane)
* [`provider-gcp`](https://github.com/crossplane/provider-gcp)
* [`provider-aws`](https://github.com/crossplane/provider-aws)
* [`provider-azure`](https://github.com/crossplane/provider-azure)
* [`provider-rook`](https://github.com/crossplane/provider-rook)
* [`stack-minimal-gcp`](https://github.com/crossplane/stack-minimal-gcp)
* [`stack-minimal-aws`](https://github.com/crossplane/stack-minimal-aws)
* [`stack-minimal-azure`](https://github.com/crossplane/stack-minimal-azure)
* [`app-wordpress`](https://github.com/crossplane/app-wordpress)
The release process for Providers is almost identical to that of core Crossplane
because they use the same [shared build
logic](https://github.com/upbound/build/). The steps in this guide will apply
to all repositories listed above unless otherwise mentioned.
### Feature Freeze
Feature freeze should be performed on all repos. In order to start the feature
freeze period, the following conditions should be met:
* All expected features should be
["complete"](https://github.com/crossplane/crossplane/blob/master/design/one-pager-definition-of-done.md)
and merged into master. This includes user guides, examples, API documentation
via [crossdocs](https://github.com/negz/crossdocs/), and test updates.
* All issues in the
[milestone](https://github.com/crossplane/crossplane/milestones) should be
closed
* Sanity testing has been performed on `master`
After these conditions are met, the feature freeze begins by creating the RC tag
and the release branch.
### Pin Dependencies
It is a best practice to release Crossplane projects with "pinned" dependencies
to specific versions of other upstream Crossplane projects. For example, after
crossplane-runtime has been released, we want to update the main Crossplane repo
to use that specific released version.
To update a dependency to a specific version, simply edit the `go.mod` file to
point to the desired version, then run `go mod tidy`.
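For example, pinning core Crossplane to a newly released crossplane-runtime might look like this in `go.mod` (the version shown is a placeholder, not a real release number):

```
module github.com/crossplane/crossplane

require (
    github.com/crossplane/crossplane-runtime v0.x.0 // pin to the just-released version
)
```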
### Pre-release Tag
The next step is to create the pre-release tag for the `HEAD` commit in
`master`. This tag serves as an indication of when the release was branched
from master and is also important for generating future versions of `master`
builds since that [versioning
process](https://github.com/upbound/build/blob/master/makelib/common.mk#L182-L196)
is based on `git describe --tags`.
> **NOTE:** The `tag` pipeline does not yet support additional (pre-release)
tags in the version number, such as `v0.5.0-rc`.
[#330](https://github.com/crossplane/crossplane/issues/330) will be resolved
when this functionality is available. In the meantime, **manually tagging and
pushing to the repo is required**. Ignore the steps below about running the
pipeline because the pipeline won't work.
To accomplish this, run the `tag` pipeline for each repo on the `master` branch.
You will be prompted to enter the `version` for the tag and the `commit` hash to
tag. It's possible to leave the `commit` field blank to default to tagging
`HEAD`.
Since this tag will essentially be the start of pre-releases working towards the
**next** version, the `version` should be the **next** release number, plus a
trailing tag to indicate it is a pre-release. The current convention is to use
`*-rc`. For example, when we are releasing the `v0.9.0` release and we are
ready for master to start working towards the **next** release of `v0.10.0`, we
would make the tag `v0.10.0-rc`.
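The naming convention above can be sketched as a tiny helper (hypothetical, for illustration only; the actual tag is entered manually or via the pipeline):

```python
def next_prerelease_tag(released):
    """Compute the pre-release tag for master after a release.

    Follows the convention described above: bump the minor version of the
    just-released tag and append `-rc`.
    """
    major, minor, _patch = released.lstrip("v").split(".")
    return f"v{major}.{int(minor) + 1}.0-rc"

print(next_prerelease_tag("v0.9.0"))  # v0.10.0-rc
```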
After the tag pipeline has succeeded, verify in the [GitHub
UI](https://github.com/crossplane/crossplane/tags) that the tag was successfully
applied to the correct commit.
### Create Release Branch
Creating the release branch can be done within the [GitHub
UI](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository).
Basically, you just use the branch selector drop down and type in the name of
the new release branch, e.g. `release-0.5`. Release branch names always follow
the convention of `release-[minor-semver]`.
If this is the first ever release branch being created in a repo (uncommon), you
should also set up branch protection rules for the `release-*` pattern. You can
find existing examples in the [Crossplane repo
settings](https://github.com/crossplane/crossplane/settings/branches).
At this point, the `HEAD` commit in the release branch will be our release
candidate. The build pipeline will automatically be started due to the create
branch event, so we can start to perform testing on this build. Note that it
should be the exact same as what is currently in `master` since they are using
the same commit and have the same tag. Also note that this is not the official
release build since we have not made the official release tag yet (e.g.
`v0.5.0`).
The `master` branch can now be opened for new features since we have a safe
release branch to continue bug fixes and improvements for the release itself.
Essentially, `master` is free to now diverge from the release branch.
### Release Branch Prep
In the core Crossplane repository, we need to update the release branch docs and
examples to point to the new versions that we will be releasing soon.
* Documentation, such as [Installation
instructions](https://github.com/crossplane/crossplane/blob/release-0.9/docs/install.md#installing-infrastructure-providers),
and
[Stack](https://github.com/crossplane/crossplane/blob/release-0.9/docs/stack.md)
and
[App](https://github.com/crossplane/crossplane/blob/release-0.9/docs/app.md)
guides.
* searching for `:master` will help a lot here
* Examples, such as [`StackInstall` yaml
files](https://github.com/crossplane/crossplane/tree/release-0.9/cluster/examples/provider)
* [Helm chart
defaults](https://github.com/crossplane/crossplane/blob/release-0.9/cluster/charts/crossplane/values.yaml.tmpl),
ensure all `values.yaml.tmpl` files are updated.
* provider versions
* `templating-controller` version (if a new version is available and ready)
#### Bug Fixes in Release Branch
During our testing of the release candidate, we may find issues or bugs that we
triage and decide we want to fix before the release goes out. In order to fix a
bug in the release branch, the following process is recommended:
1. Make the bug fix into `master` first through the normal PR process
1. If the applicable code has already been removed from `master` then simply
fix the bug directly in the release branch by opening a PR directly
against the release branch
1. Backport the fix by performing a cherry-pick of the fix's commit hash
(**not** the merge commit) from `master` into the release branch. For
example, to backport a fix from master to `v0.5.0`, something like the
following should be used:
```console
git fetch --all
git checkout -b release-0.5 upstream/release-0.5
git cherry-pick -x <fix commit hash>
```
1. Open a PR with the cherry-pick commit targeting the release-branch
After all bugs have been fixed and backported to the release branch, we can move
on to tagging the final release commit.
### Tag Core Crossplane
Similar to running the `tag` pipelines for each stack, now it's time to run the
[`tag`
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fcrossplane%2Fcrossplane-tag/branches)
for core Crossplane. In fact, the [instructions](#stack-tag-pipeline) are
exactly the same:
Run the tag pipeline by clicking the Run button in the Jenkins UI in the correct
release branch's row. You will be prompted for the version you are tagging,
e.g., `v0.5.0` as well as the commit hash. The hash is optional and if you leave
it blank it will default to `HEAD` of the branch, which is what you want.
> **Note:** The first time you run a pipeline on a new branch, you won't get
> prompted for the values to input. The build will quickly fail and then you can
> run (not replay) it a second time to be prompted. This is a Jenkins bug that
> is tracked by [#41929](https://issues.jenkins-ci.org/browse/JENKINS-41929) and
> has been open for almost 3 years, so don't hold your breath.
### Draft Release Notes
We're getting close to starting the official release, so you should take this
opportunity to draft up the release notes. You can create a [new release draft
here](https://github.com/crossplane/crossplane/releases/new). Make sure you
select "This is a pre-release" and hit "Save draft" when you are ready to share
and collect feedback. Do **not** hit "Publish release" yet.
You can see and follow the template and structure from [previous
releases](https://github.com/crossplane/crossplane/releases).
### Provider Release Process
This section will walk through how to release the Providers and does not
directly apply to core Crossplane.
#### Pin Provider Dependencies
Similar to core crossplane, each provider should have its Crossplane-related
dependencies pinned to the versions that we are releasing. In the **master**
branch of each provider repo, update the `crossplane` and `crossplane-runtime`
dependencies to the versions we are releasing.
Simply edit `go.mod` with the new versions, then run `go mod tidy`.
The providers also depend on `crossplane-tools`, but that currently does not
have official releases, so in practice should be using the latest from master.
#### Provider Pre-release tag
Follow the same steps that we did for core crossplane to tag the **master**
branch of each provider repo with a pre-release tag for the **next** version.
These steps can be found in the [pre-release tag section](#pre-release-tag).
#### Create Provider Release Branches
Now create a release branch for each of the provider repos using the GitHub UI.
The steps are the same as what we did to [create the release
branch](#create-release-branch) for core crossplane.
#### Provider Release Branch Prep
In the **release branch** for each provider, you should update the version tags
and metadata in:
* `integration_tests.sh` - `STACK_IMAGE`
* `ClusterStackInstall` sample and example yaml files
* `*.resource.yaml` - docs links in markdown
* Not all of these `*.resource.yaml` files have links that need to be updated;
such links are infrequent and inconsistent
Searching for `:master` will be a big help here.
#### Provider Tag, Build, and Publish
Now that the Providers are all tested and their version metadata has been
updated, it's time to tag the release branch with the official version tag. You
can do this by running the `tag` pipeline on the release branch of each
Provider:
* [`provider-gcp` tag
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fprovider-gcp-pipelines%2Fprovider-gcp-tag/branches)
* [`provider-aws` tag
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fprovider-aws-pipelines%2Fprovider-aws-tag/branches/)
* [`provider-azure` tag
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fprovider-azure-pipelines%2Fprovider-azure-tag/branches/)
* [`provider-rook` tag
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fprovider-rook-pipelines%2Fprovider-rook-tag/branches/)
* Run the `tag` pipeline on the release branch
* Enter the version and commit hash (leave blank for `HEAD`)
* The first time you run on a new release branch, you won't be prompted and the
build will fail, just run (not replay) a second time
After the tag pipeline has been run and the release branch has been tagged, you
can run the normal build pipeline on the release branch. This will kick off the
official release build and upon success, all release artifacts will be
officially published.
After the release build succeeds, verify that the correctly versioned Provider
images have been pushed to Docker Hub.
### Template Stack Release Process
The Template Stacks we maintain are slightly different from our controller-based
stacks. Their processes are similar but a little simpler. This
section will walk through how to release the Template Stacks themselves, and
does not directly apply to core Crossplane.
For Template Stacks, we do not use release branches unless we need to make a
patch release. In the future we may need a more robust branching strategy, but
for now we are not using branches because it is simpler.
Note that Template Stacks **do not** require any code changes to update their
version. A slight exception to this is for their `behavior.yaml` files, which
should have the `controllerImage` field updated if a new version of the
`templating-controller` is available and ready.
### Template Stack Tag And Publish Pipeline
Here is the list of all template stacks:
* [`stack-minimal-gcp`](https://github.com/crossplane/stack-minimal-gcp)
* [`stack-minimal-aws`](https://github.com/crossplane/stack-minimal-aws)
* [`stack-minimal-azure`](https://github.com/crossplane/stack-minimal-azure)
* [`app-wordpress`](https://github.com/crossplane/app-wordpress)
Each one should be released as part of a complete release, using the
instructions below. To read even more about the template stack release process,
see [the release section of this
document](https://github.com/crossplane/cicd/blob/master/docs/pipelines.md#how-do-i-cut-a-release)
Note that there's also the
[`templating-controller`](https://github.com/crossplane/templating-controller),
which supports template stacks. It is possible that it **may** need to be
released as well, but typically is released independently from Crossplane.
#### Tag the Template Stack
Once a template stack is tested and ready for cutting a semver release, we will
want to tag the repository with the new release version. In most cases, to get
the version, take a look at the most recent tag in the repo, and increment the
minor version. For example, if the most recent tag was `v0.2.0`, the new tag
should be `v0.3.0`.
Run the template stack's tag job on Jenkins against the `master` branch. Enter
the new tag to use. If the current release candidate is not the head of
`master`, enter the commit to tag.
You can find the tag pipeline for an individual stack by going to the
[crossplane org in Jenkins](https://jenkinsci.upbound.io/job/crossplane/),
finding the folder with the same name as the template stack, opening the `tag`
job group, and then going to the `master` branch job under that group. For
example, here is [a link to the stack-minimal-gcp tag job for
master](https://jenkinsci.upbound.io/job/crossplane/job/stack-minimal-gcp/job/tag/job/master/).
> **Note:** The first time you run a pipeline on a new branch, you won't get
> prompted for the values to input and the build will fail. See details in the
> [tagging core crossplane section](#tag-core-crossplane).
#### Publish the Template Stack
After the tag pipeline has been run and the repository has been tagged, you can
run the `publish` job for the template stack. For example, here's a [link to the
stack-minimal-gcp publish
job](https://jenkinsci.upbound.io/job/crossplane/job/stack-minimal-gcp/job/publish/job/master/).
This will kick off the official release build and, upon success, all release
artifacts will be officially published. In most cases this should also be run
from the `master` branch, or from the release branch if one was used. Provide
the tag for the current release as the tag parameter. For example, if the new
tag we created was `v0.3.0`, we would provide `v0.3.0` to the `publish` job.
#### Verify the Template Stack was Published
After the publish build succeeds, verify that the correctly versioned template
stack images have been pushed to Docker Hub.
### Template Stack Patch Releases
To do a patch release with a template stack, create a release branch from the
minor version tag on the `master` branch, if a release branch doesn't already
exist. Then, the regular tagging and publishing process for template stacks can
be followed, incrementing the patch version to get the new release tag.
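As a rough sketch, creating such a branch from a hypothetical `v0.3.0` tag
could look like the following; the `release-0.3` branch name is an assumption,
and the throwaway repository exists only to make the example self-contained:

```shell
# Self-contained sketch in a throwaway repository; in a real template
# stack repo you would run only the last two commands.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=doc -c user.email=doc@example.com \
    commit -q --allow-empty -m "initial"
git tag v0.3.0

# Create the patch release branch from the minor version tag.
git checkout -q -b release-0.3 v0.3.0
git branch --show-current            # release-0.3
# then publish it: git push origin release-0.3
```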
### Build and Release Core Crossplane
After the providers, stacks, and apps have all been released, ensure the [normal
build
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fcrossplane%2Fbuild/branches)
is run on the release branch for core crossplane. This will be the official
release build with an official version number and all of its release artifacts
will be published.
After the pipeline runs successfully, you should verify that all artifacts have
been published to:
* [Docker Hub](https://hub.docker.com/repository/docker/crossplane/crossplane)
* [S3 releases bucket](https://releases.crossplane.io/)
* [Helm chart repository](https://charts.crossplane.io/)
* [Docs website](https://crossplane.io/docs/latest)
### Promotion
If everything looks good with the official versioned release that we just
published, we can go ahead and run the `promote` pipeline for the core
crossplane and provider repos. This is a very quick pipeline that doesn't
rebuild anything; it simply makes metadata changes to the published release to
include it in the channel of your choice.
Currently, we only support the `master` and `alpha` channels.
For the core crossplane and each provider repo, run the `promote` pipeline on
the release branch and input the version you would like to promote (e.g.
`v0.5.0`) and the channel you'd like to promote it to. The first time you run
this pipeline on a new release branch, you will not be prompted for values, so
the pipeline will fail. Just run (not replay) it a second time to be prompted.
* Run `promote` pipeline for `master` channel
* Run `promote` pipeline for `alpha` channel
After the `promote` pipelines have succeeded, verify on DockerHub and the Helm
chart repository that the release has been promoted to the right channels.
### Publish Release Notes
Now that the release has been published and verified, you can publish the
[release notes](https://github.com/crossplane/crossplane/releases) that you
drafted earlier. After incorporating all feedback, you can now click on the
"Publish release" button.
This will send an email notification with the release notes to all watchers of
the repo.
### Announce Release
We have completed the entire release, so it's now time to announce it to the
world. Using the [@crossplane_io](https://twitter.com/crossplane_io) Twitter
account, tweet about the new release and blog. You'll see examples from the
previous releases, such as this tweet for
[v0.4](https://twitter.com/crossplane_io/status/1189307636350705664).
Post a link to this tweet on the Slack #announcements channel, then copy a link
to that and post it in the #general channel.
### Patch Releases
We also have the ability to run patch releases to update previous releases that
have already been published. These patch releases are always run from the last
release branch; we do **not** create a new release branch for a patch release.
The basic flow is **very** similar to a normal release, but with a few fewer
steps. Please refer to the details for each step in the sections above.
* Fix any bugs in `master` first and then `cherry-pick -x` to the release branch
* If `master` has already removed the relevant code then make your fix
directly in the release branch
* After all testing on the release branch looks good and any docs/examples/tests
  have been updated with the new version number, run the `tag` pipeline on the
  release branch with the new patch version (e.g. `v0.5.1`)
* Run the normal build pipeline on the release branch to build and publish the
release
* Publish release notes
* Run promote pipeline to promote the patch release to the `master` and `alpha`
channels
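The cherry-pick step above can be sketched as follows; branch and file names
are hypothetical, and the throwaway repository only makes the example
self-contained:

```shell
# Self-contained sketch: the fix lands on the main development branch
# first, then is cherry-picked with -x onto the release branch.
repo=$(mktemp -d) && cd "$repo"
git init -q
git -c user.name=doc -c user.email=doc@example.com \
    commit -q --allow-empty -m "initial"
git branch release-0.5               # release branch predates the fix

# Bug fix merged to the main branch...
echo "fix" > fix.txt && git add fix.txt
git -c user.name=doc -c user.email=doc@example.com \
    commit -q -m "fix: correct a bug"
fix_sha=$(git rev-parse HEAD)

# ...then cherry-picked (-x records the source commit in the message)
# onto the release branch, from which v0.5.1 would be tagged.
git checkout -q release-0.5
git -c user.name=doc -c user.email=doc@example.com \
    cherry-pick -x "$fix_sha"
git log -1 --format=%s               # fix: correct a bug
```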

# Overview
![Crossplane](media/banner.png)
Crossplane is an open source Kubernetes add-on that extends any cluster with
the ability to provision and manage cloud infrastructure, services, and
applications using kubectl, GitOps, or any tool that works with the Kubernetes
API.
With Crossplane you can:
* **Provision & manage cloud infrastructure with kubectl**
* [Install Crossplane] to provision and manage cloud infrastructure and
services from any Kubernetes cluster.
* Provision infrastructure primitives from any provider ([GCP], [AWS],
[Azure], [Alibaba], on-prem) and use them alongside existing application
configurations.
* Version, manage, and deploy with your favorite tools and workflows that
youre using with your clusters today.
* **Publish custom infrastructure resources for your applications to use**
* Define, compose, and publish your own [infrastructure resources] with
declarative YAML, resulting in your own infrastructure CRDs being added to
the Kubernetes API for applications to use.
* Hide infrastructure complexity and include policy guardrails, so
applications can easily and safely consume the infrastructure they need,
using any tool that works with the Kubernetes API.
* Consume infrastructure resources alongside any Kubernetes application to
provision and manage the cloud services they need with Crossplane as an
add-on to any Kubernetes cluster.
* **Deploy applications using a team-centric approach with OAM**
* Define cloud native applications and the infrastructure they require with
the Open Application Model ([OAM]).
* Collaborate with a team-centric approach with a strong separation of
concerns.
* Deploy application configurations from app delivery pipelines or GitOps
workflows, using the proven Kubernetes declarative model.
Separation of concerns is core to Crossplanes approach to infrastructure and
application management, so team members can deliver value by focusing on what
they know best. Crossplane's team-centric approach reflects individuals often
specializing in the following roles:
* **Infrastructure Operators** - provide infrastructure and services for apps
to consume
* **Application Developers** - build application components independent of
infrastructure
* **Application Operators** - compose, deploy, and run application
configurations
## Getting Started
[Install Crossplane] into any Kubernetes cluster to get started.
<!-- Named Links -->
[Install Crossplane]: getting-started/install-configure.md
[Custom Resource Definitions]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[reconciling]: https://kubernetes.io/docs/concepts/architecture/controller/
[GCP]: https://github.com/crossplane/provider-gcp
[AWS]: https://github.com/crossplane/provider-aws
[Azure]: https://github.com/crossplane/provider-azure
[Alibaba]: https://github.com/crossplane/provider-alibaba
[infrastructure resources]: https://blog.crossplane.io/crossplane-v0-10-compose-and-publish-your-own-infrastructure-crds-velero-backup-restore-compatibility-and-more/
[OAM]: https://github.com/oam-dev/spec/releases/tag/v1.0.0-alpha.2

---
title: crossplane
toc: true
weight: 401
indent: true
redirect_to: https://doc.crds.dev/github.com/crossplane/crossplane
---

---
title: API Documentation
toc: true
weight: 400
---
# API Documentation
The Crossplane ecosystem contains many CRDs that map to API types represented by
external infrastructure providers. The documentation for these CRDs is
auto-generated on [doc.crds.dev]. To find the CRDs available for providers
maintained by the Crossplane organization, you can search for the GitHub URL, or
append it to the [doc.crds.dev] URL path.
For instance, to find the CRDs available for [provider-azure], you would go to:
[doc.crds.dev/github.com/crossplane/provider-azure](https://doc.crds.dev/github.com/crossplane/provider-azure)
By default, you will be served the latest CRDs on the `master` branch for the
repository. If you prefer to see the CRDs for a specific version, you can append
the git tag for the release:
[doc.crds.dev/github.com/crossplane/provider-azure@v0.8.0]
Crossplane repositories that are not providers but do publish CRDs are also
served on [doc.crds.dev]. For instance, the [crossplane/crossplane] repository.
Bugs and feature requests for API documentation should be [opened as issues] on
the open source [doc.crds.dev repo].
<!-- Named Links -->
[doc.crds.dev]: https://doc.crds.dev/
[provider-azure]: https://github.com/crossplane/provider-azure
[doc.crds.dev/github.com/crossplane/provider/azure]: https://doc.crds.dev/github.com/crossplane/provider-azure
[doc.crds.dev/github.com/crossplane/provider-azure@v0.8.0]: https://doc.crds.dev/github.com/crossplane/provider-azure@v0.8.0
[crossplane/crossplane]: https://doc.crds.dev/github.com/crossplane/crossplane
[opened as issues]: https://github.com/crdsdev/doc/issues/new
[doc.crds.dev repo]: https://github.com/crdsdev/doc

---
title: provider-alibaba
toc: true
weight: 402
indent: true
redirect_to: https://doc.crds.dev/github.com/crossplane/provider-alibaba
---

---
title: provider-aws
toc: true
weight: 403
indent: true
redirect_to: https://doc.crds.dev/github.com/crossplane/provider-aws
---

---
title: provider-azure
toc: true
weight: 404
indent: true
redirect_to: https://doc.crds.dev/github.com/crossplane/provider-azure
---

---
title: provider-gcp
toc: true
weight: 405
indent: true
redirect_to: https://doc.crds.dev/github.com/crossplane/provider-gcp
---

---
title: provider-rook
toc: true
weight: 406
indent: true
redirect_to: https://doc.crds.dev/github.com/crossplane/provider-rook
---

# Adding Amazon Web Services (AWS) to Crossplane
In this guide, we will walk through the steps necessary to configure your AWS
account to be ready for integration with Crossplane. This will be done by adding
an AWS `Provider` resource type, which enables Crossplane to communicate with an
AWS account.
## Requirements
Prior to adding AWS to Crossplane, the following steps need to be taken:
- Crossplane is installed in a k8s cluster
- AWS Stack is installed in the same cluster
- `kubectl` is configured to communicate with the same cluster
## Step 1: Configure `aws` CLI
Crossplane uses [AWS security credentials], and stores them as a [secret] which
is managed by an AWS `Provider` instance. In addition, the default AWS region is
used to target a specific region. Crossplane requires the [`aws` command line
tool] to be [installed] and [configured]. Once installed, the credentials and
configuration will reside in `~/.aws/credentials` and `~/.aws/config`
respectively.
## Step 2: Setup `aws` Provider
Run the [setup.sh] script to read the `aws` credentials and region, and create
an AWS `Provider` instance in Crossplane:
```bash
./cluster/examples/aws-setup-provider/setup.sh [--profile aws_profile]
```
The `--profile` switch is optional and specifies the [aws named profile] that
was set in Step 1. If not provided, the `default` profile will be selected.
Once the script is successfully executed, Crossplane will use the specified AWS
account and region in the given named profile to create subsequent AWS managed
resources.
You can confirm the existence of the AWS `Provider` by running:
```bash
kubectl -n crossplane-system get provider/aws-provider
```
## Optional: Setup AWS Provider Manually
An AWS [user][aws user] with `Administrative` privileges is needed to enable
Crossplane to create the required resources. Once the user is provisioned, an
[Access Key][] needs to be created so the user can have API access.
Using the set of [access key credentials][AWS security credentials] for the user
with the right access, we need to [install][install-aws] [`aws cli`][aws command
line tool], and then [configure][aws-cli-configure] it.
When the AWS CLI is configured, the credentials and configuration will be in
`~/.aws/credentials` and `~/.aws/config` respectively. These will be consumed in
the next step.
When configuring the AWS CLI, the user credentials can be placed under a
specific [AWS named profile][], or under `default`. Without loss of generality,
in this guide let's assume that the credentials are configured under the
`aws_profile` profile (which could also be `default`). We'll use this profile to
set up the cloud provider in the next section.
Crossplane uses the AWS user credentials that were configured in the previous
step to create resources in AWS. These credentials will be stored as a
[secret][kubernetes secret] in Kubernetes, and will be used by an AWS `Provider`
instance. The default AWS region is also pulled from the cli configuration, and
added to the AWS provider.
To store the credentials as a secret, run:
```bash
# retrieve profile's credentials, save it under 'default' profile, and base64 encode it
BASE64ENCODED_AWS_ACCOUNT_CREDS=$(echo -e "[default]\naws_access_key_id = $(aws configure get aws_access_key_id --profile $aws_profile)\naws_secret_access_key = $(aws configure get aws_secret_access_key --profile $aws_profile)" | base64 | tr -d "\n")
# retrieve the profile's region from config
AWS_REGION=$(aws configure get region --profile ${aws_profile})
```
At this point, the region and the encoded credentials are stored in respective
variables. Next, we'll need to create an instance of AWS provider:
```bash
cat > provider.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
name: aws-account-creds
namespace: crossplane-system
type: Opaque
data:
credentials: ${BASE64ENCODED_AWS_ACCOUNT_CREDS}
---
apiVersion: aws.crossplane.io/v1alpha3
kind: Provider
metadata:
name: aws-provider
spec:
region: ${AWS_REGION}
credentialsSecretRef:
namespace: crossplane-system
name: aws-account-creds
key: credentials
EOF
# apply it to the cluster:
kubectl apply -f "provider.yaml"
# delete the credentials variable
unset BASE64ENCODED_AWS_ACCOUNT_CREDS
```
The output will look like the following:
```bash
secret/aws-account-creds created
provider.aws.crossplane.io/aws-provider created
```
The `aws-provider` resource will be used in other resources that we will create,
to provide access information to the configured AWS account.
<!-- Named Links -->
[`aws` command line tool]: https://aws.amazon.com/cli/
[AWS SDK for GO]: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/setting-up.html
[installed]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
[configured]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
[AWS security credentials]: https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
[secret]:https://kubernetes.io/docs/concepts/configuration/secret/
[setup.sh]: https://github.com/crossplane/crossplane/blob/master/cluster/examples/aws-setup-provider/setup.sh
[aws named profile]: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
[aws user]: https://docs.aws.amazon.com/mediapackage/latest/ug/setting-up-create-iam-user.html
[Access Key]: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html
[AWS security credentials]: https://docs.aws.amazon.com/general/latest/gr/aws-security-credentials.html
[aws command line tool]: https://aws.amazon.com/cli/
[install-aws]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
[aws-cli-configure]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
[kubernetes secret]: https://kubernetes.io/docs/concepts/configuration/secret/
[AWS named profile]: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html

# Adding Microsoft Azure to Crossplane
In this guide, we will walk through the steps necessary to configure your Azure
account to be ready for integration with Crossplane. The general steps we will
take are summarized below:
* Create a new service principal (account) that Crossplane will use to create
and manage Azure resources
* Add the required permissions to the account
* Consent to the permissions using an administrator account
## Preparing your Microsoft Azure Account
In order to manage resources in Azure, you must provide credentials for an
Azure service principal that Crossplane can use to authenticate. This assumes
that you have already [set up the Azure CLI
client](https://docs.microsoft.com/en-us/cli/azure/authenticate-azure-cli?view=azure-cli-latest)
with your credentials.
Create a JSON file that contains all the information needed to connect and
authenticate to Azure:
```bash
# create service principal with Owner role
az ad sp create-for-rbac --sdk-auth --role Owner > crossplane-azure-provider-key.json
```
Take note of the `clientID` value from the JSON file that we just created, and
save it to an environment variable:
```bash
export AZURE_CLIENT_ID=<clientId value from json file>
```
Now add the required permissions to the service principal that will allow it to
manage the necessary resources in Azure:
```bash
# add required Azure Active Directory permissions
az ad app permission add --id ${AZURE_CLIENT_ID} --api 00000002-0000-0000-c000-000000000000 --api-permissions 1cda74f2-2616-4834-b122-5cb1b07f8a59=Role 78c8a3c8-a07e-4b9e-af1b-b5ccab50a175=Role
# grant (activate) the permissions
az ad app permission grant --id ${AZURE_CLIENT_ID} --api 00000002-0000-0000-c000-000000000000 --expires never
```
You might see an error similar to the following, but that is OK; the permissions
should still have been granted:
```console
Operation failed with status: 'Conflict'. Details: 409 Client Error: Conflict for url: https://graph.windows.net/e7985bc4-a3b3-4f37-b9d2-fa256023b1ae/oauth2PermissionGrants?api-version=1.6
```
Finally, you need to grant admin permissions on the Azure Active Directory to
the service principal because it will need to create other service principals
for your `AKSCluster`:
```bash
# grant admin consent to the service principal you created
az ad app permission admin-consent --id "${AZURE_CLIENT_ID}"
```
Note: You might need the `Global Administrator` role to `Grant admin consent for
Default Directory`. Please contact the administrator of your Azure subscription.
To check your role, go to `Azure Active Directory` -> `Roles and
administrators`. You can find your role(s) by clicking on `Your Role (Preview)`.
After these steps are completed, you should have the following file on your
local filesystem:
* `crossplane-azure-provider-key.json`
## Setup Azure Provider
Before creating any resources, we need to create and configure an Azure cloud
provider resource in Crossplane, which stores the cloud account information in
it. All the requests from Crossplane to Azure Cloud will use the credentials
attached to this provider resource. The following command assumes that you have
a `crossplane-azure-provider-key.json` file that belongs to the account youd
like Crossplane to use.
```bash
BASE64ENCODED_AZURE_ACCOUNT_CREDS=$(base64 crossplane-azure-provider-key.json | tr -d "\n")
```
Now well create our `Secret` that contains the credential and `Provider`
resource that refers to that secret:
```bash
cat > provider.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
name: azure-account-creds
namespace: crossplane-system
type: Opaque
data:
credentials: ${BASE64ENCODED_AZURE_ACCOUNT_CREDS}
---
apiVersion: azure.crossplane.io/v1alpha3
kind: Provider
metadata:
name: azure-provider
spec:
credentialsSecretRef:
namespace: crossplane-system
name: azure-account-creds
key: credentials
EOF
# apply it to the cluster:
kubectl apply -f "provider.yaml"
# delete the credentials variable
unset BASE64ENCODED_AZURE_ACCOUNT_CREDS
```
The output will look like the following:
```bash
secret/azure-account-creds created
provider.azure.crossplane.io/azure-provider created
```
The `azure-provider` resource will be used in other resources that we will
create, to provide access information to the configured Azure account.

# Adding Google Cloud Platform (GCP) to Crossplane
In this guide, we will walk through the steps necessary to configure your GCP
account to be ready for integration with Crossplane. The general steps we will
take are summarized below:
* Create a new example project that all resources will be deployed to
* Enable required APIs such as Kubernetes and CloudSQL
* Create a service account that will be used to perform GCP operations from
Crossplane
* Assign necessary roles to the service account
* Enable billing
For your convenience, the specific steps to accomplish those tasks are provided
for you below using either the `gcloud` command line tool, or the GCP console in
a web browser. You can choose whichever you are more comfortable with.
## Option 1: gcloud Command Line Tool
If you have the `gcloud` tool installed, you can run the commands below from the
crossplane directory.
Instructions for installing `gcloud` can be found in the [Google
docs](https://cloud.google.com/sdk/install).
### Using `gcp-credentials.sh`
In the `cluster/examples` directory you will find a helper script,
[`gcp-credentials.sh`](https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/cluster/examples/gcp-credentials.sh).
This script will prompt you for the organization, project, and billing account
that will be used by `gcloud` when creating a project, service account, and
credentials file (`crossplane-gcp-provider-key.json`). The chosen project and
created service account will have access to the services and roles sufficient to
run the Crossplane GCP examples.
```console
$ cluster/examples/gcp-credentials.sh
... EXAMPLE OUTPUT ONLY
export ORGANIZATION_ID=987654321
export PROJECT_ID=crossplane-example-1234
export EXAMPLE_SA=example-1234@crossplane-example-1234.iam.gserviceaccount.com
export BASE64ENCODED_GCP_PROVIDER_CREDS=$(base64 crossplane-gcp-provider-key.json | tr -d "\n")
```
After running `gcp-credentials.sh`, a series of `export` commands will be shown.
Copy and paste the `export` commands that are provided. These variable names
will be referenced throughout the Crossplane examples, generally with a `sed`
command.
You will also find a `crossplane-gcp-provider-key.json` file in the current
working directory. Be sure to remove this file when you are done with the
example projects.
### Running `gcloud` by hand
```bash
# list your organizations (if applicable), take note of the specific organization ID you want to use
# if you have more than one organization (not common)
gcloud organizations list
# create a new project (project id must be <=30 characters)
export EXAMPLE_PROJECT_ID=crossplane-example-123
gcloud projects create $EXAMPLE_PROJECT_ID --enable-cloud-apis # [--organization $ORGANIZATION_ID]
# or, record the PROJECT_ID value of an existing project
# export EXAMPLE_PROJECT_ID=$(gcloud projects list --filter NAME=$EXAMPLE_PROJECT_NAME --format="value(PROJECT_ID)")
# link billing to the new project
gcloud beta billing accounts list
gcloud beta billing projects link $EXAMPLE_PROJECT_ID --billing-account=$ACCOUNT_ID
# enable Kubernetes API
gcloud --project $EXAMPLE_PROJECT_ID services enable container.googleapis.com
# enable CloudSQL API
gcloud --project $EXAMPLE_PROJECT_ID services enable sqladmin.googleapis.com
# enable Redis API
gcloud --project $EXAMPLE_PROJECT_ID services enable redis.googleapis.com
# enable Compute API
gcloud --project $EXAMPLE_PROJECT_ID services enable compute.googleapis.com
# enable Service Networking API
gcloud --project $EXAMPLE_PROJECT_ID services enable servicenetworking.googleapis.com
# enable Additional APIs needed for the example or project
# See `gcloud services list` for a complete list
# create service account
gcloud --project $EXAMPLE_PROJECT_ID iam service-accounts create example-123 --display-name "Crossplane Example"
# export service account email
export EXAMPLE_SA="example-123@$EXAMPLE_PROJECT_ID.iam.gserviceaccount.com"
# create service account key (this will create a `crossplane-gcp-provider-key.json` file in your current working directory)
gcloud --project $EXAMPLE_PROJECT_ID iam service-accounts keys create --iam-account $EXAMPLE_SA crossplane-gcp-provider-key.json
# assign roles
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/iam.serviceAccountUser"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/container.admin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/redis.admin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/compute.networkAdmin"
gcloud projects add-iam-policy-binding $EXAMPLE_PROJECT_ID --member "serviceAccount:$EXAMPLE_SA" --role="roles/storage.admin"
```
## Option 2: GCP Console in a Web Browser
If you chose to use the `gcloud` tool, you can skip this section entirely.
Create a GCP example project which we will use to host our example GKE cluster,
as well as our example CloudSQL instance.
- Login into [GCP Console](https://console.cloud.google.com)
- Create a [new
project](https://console.cloud.google.com/flows/enableapi?apiid=container.googleapis.com,sqladmin.googleapis.com,redis.googleapis.com)
(either standalone or under an existing organization)
- Create Example Service Account
- Navigate to: [Create Service
Account](https://console.cloud.google.com/iam-admin/serviceaccounts)
- `Service Account Name`: type "example"
- `Service Account ID`: leave auto assigned
- `Service Account Description`: type "Crossplane example"
- Click `Create` button
- This should advance to the next section `2 Grant this service account to
project (optional)`
- We will assign this account the following roles:
- `Service Account User`
- `Cloud SQL Admin`
- `Kubernetes Engine Admin`
- `Compute Network Admin`
- Click `Create` button
- This should advance to the next section `3 Grant users access to this
service account (optional)`
- We don't need to assign any user or admin roles to this account for the
example purposes, so you can leave following two fields blank:
- `Service account users role`
- `Service account admins role`
- Next, we will create and export service account key
- Click `+ Create Key` button.
- This should open a `Create Key` side panel
- Select `json` for the Key type (should be selected by default)
- Click `Create`
- This should show `Private key saved to your computer` confirmation
dialog
- You also should see `crossplane-example-1234-[suffix].json` file in your
browser's Download directory
- Save (copy or move) this file into example (this) directory, with new
name `crossplane-gcp-provider-key.json`
- Enable `Cloud SQL API`
- Navigate to [Cloud SQL Admin
API](https://console.developers.google.com/apis/api/sqladmin.googleapis.com/overview)
- Click `Enable`
- Enable `Kubernetes Engine API`
- Navigate to [Kubernetes Engine
API](https://console.developers.google.com/apis/api/container.googleapis.com/overview)
- Click `Enable`
- Enable `Cloud Memorystore for Redis`
- Navigate to [Cloud Memorystore for
Redis](https://console.developers.google.com/apis/api/redis.googleapis.com/overview)
- Click `Enable`
- Enable `Compute Engine API`
- Navigate to [Compute Engine
API](https://console.developers.google.com/apis/api/compute.googleapis.com/overview)
- Click `Enable`
- Enable `Service Networking API`
- Navigate to [Service Networking
API](https://console.developers.google.com/apis/api/servicenetworking.googleapis.com/overview)
- Click `Enable`
### Enable Billing
You will need to enable billing for your account in order to create and use
Kubernetes clusters with GKE.
- Go to [GCP Console](https://console.cloud.google.com)
- Select example project
- Click `Enable Billing`
- Go to [Kubernetes Clusters](https://console.cloud.google.com/kubernetes/list)
- Click `Enable Billing`
## Setup GCP Provider
Before creating any resources, we need to create and configure a GCP cloud
provider resource in Crossplane, which stores the cloud account information in
it. All the requests from Crossplane to GCP will use the credentials attached to
this provider resource. The following command assumes that you have a
`crossplane-gcp-provider-key.json` file that belongs to the account that will be
used by Crossplane. You should be able to get the GCP project id from the JSON
credentials file or from the GCP console. Without loss of generality, let's
assume the project id is `my-cool-gcp-project` in this guide.
First, let's encode the credential file contents and put it in a variable:
```bash
# base64 encode the GCP credentials
BASE64ENCODED_GCP_PROVIDER_CREDS=$(base64 crossplane-gcp-provider-key.json | tr -d "\n")
```
Next, store the project ID of the GCP project in which you would like to
provision infrastructure as a variable:
```bash
# replace this with your own gcp project id
PROJECT_ID=my-cool-gcp-project
```
Finally, store the namespace in which you want to save the provider's secret as
a variable:
```bash
# change this namespace value if you want to use a different namespace (e.g. gitlab-managed-apps)
PROVIDER_SECRET_NAMESPACE=crossplane-system
```
Now well create the `Secret` resource that contains the credential, and
`Provider` resource which refers to that secret:
```bash
cat > provider.yaml <<EOF
---
apiVersion: v1
kind: Secret
metadata:
name: gcp-account-creds
namespace: ${PROVIDER_SECRET_NAMESPACE}
type: Opaque
data:
credentials: ${BASE64ENCODED_GCP_PROVIDER_CREDS}
---
apiVersion: gcp.crossplane.io/v1alpha3
kind: Provider
metadata:
name: gcp-provider
spec:
# replace this with your own gcp project id
projectID: ${PROJECT_ID}
credentialsSecretRef:
namespace: ${PROVIDER_SECRET_NAMESPACE}
name: gcp-account-creds
key: credentials
EOF
# apply it to the cluster:
kubectl apply -f "provider.yaml"
# delete the credentials
unset BASE64ENCODED_GCP_PROVIDER_CREDS
```
The output will look like the following:
```bash
secret/gcp-account-creds created
provider.gcp.crossplane.io/gcp-provider created
```
The `gcp-provider` resource will be used in other resources that we will create,
to provide access information to the configured GCP account.

---
title: Experimental
toc: true
weight: 1100
indent: true
---
# Experimental
## Deprecated: templating-controller and related
The templating-controller has been deprecated in favor of composite
infrastructure and OAM.
The [templating-controller] allows you to create Crossplane packages without
writing any code using helm3 or kustomize yaml files and simple metadata.
The [Wordpress Quickstart Guide] provides an overview of using packages of
this type.
The namespace-scoped packages using the templating-controller are:
- [app-wordpress]
The cluster-scoped packages using the templating-controller are:
- [stack-gcp-sample]
- [stack-aws-sample]
- [stack-azure-sample]
Packages that use the [templating-controller] can be installed using
`PackageInstall` and `ClusterPackageInstall`, and CRDs provided by the
`Package` will be reconciled by the `templating-controller` which will apply
behavior (helm3, kustomize) to automatically generate the specified
resources.
See [packaging an app] to learn more.
<!-- Named Link -->
[templating-controller]: https://github.com/crossplane/templating-controller
[stack-gcp-sample]: https://github.com/crossplane/stack-gcp-sample
[stack-aws-sample]: https://github.com/crossplane/stack-aws-sample
[stack-azure-sample]: https://github.com/crossplane/stack-azure-sample
[app-wordpress]: https://github.com/crossplane/app-wordpress
[Wordpress Quickstart Guide]: https://github.com/crossplane/app-wordpress/blob/master/docs/quickstart.md
[packaging an app]: experimental/packaging_an_app.md

@ -1,198 +0,0 @@
---
title: Packaging an Application
toc: false
weight: 2000
indent: true
---
# Packaging an Application
In the quick start guide, we demonstrated how Wordpress can be installed as a
Crossplane `Application`. Now we want to learn more about how to package any
application in a similar fashion. The good news is that we can use common
Kubernetes configuration tools, such as [Helm] and [Kustomize], which you may
already be familiar with.
## Setting up a Repository
The required components of an application repository are minimal. For example,
the required components of the [Wordpress application] we deployed in the quick
start are the following:
```
├── Dockerfile
├── .registry
│   ├── app.yaml
│   ├── behavior.yaml
│   ├── icon.svg
│   └── resources
│   ├── wordpress.apps.crossplane.io_wordpressinstances.crd.yaml
│   ├── wordpressinstance.icon.svg
│   ├── wordpressinstance.resource.yaml
│   └── wordpressinstance.ui-schema.yaml
├── helm-chart
│   ├── Chart.yaml
│   ├── templates
│   │   ├── app.yaml
│   │   ├── cluster.yaml
│   │   └── database.yaml
│   └── values.yaml
```
Let's take a look at each component in-depth.
### Dockerfile
The Dockerfile is only responsible for copying the configuration directory
(`helm-chart/` in this case) and the `.registry` directory. You can likely use a
very similar Dockerfile across all of your applications:
```Dockerfile
FROM alpine:3.7
WORKDIR /
COPY helm-chart /helm-chart
COPY .registry /.registry
```
### .registry
The `.registry` directory informs Crossplane how to install your application. It
consists of the following:
**app.yaml** `[required]`
The `app.yaml` file is responsible for defining the metadata for an application,
such as name, version, and required permissions. The Wordpress `app.yaml` is a
good reference for available fields:
```yaml
# Human readable title of application.
title: Wordpress
overviewShort: Cloud portable Wordpress deployments behind managed Kubernetes and SQL services are demonstrated in this Crossplane Stack.
overview: |-
This Wordpress application uses a simple controller that uses Crossplane to orchestrate managed SQL services and managed Kubernetes clusters which are then used to run a Wordpress deployment.
A simple Custom Resource Definition (CRD) is provided allowing for instances of this Crossplane managed Wordpress Application to be provisioned with a few lines of yaml.
The Sample Wordpress Application is intended for demonstration purposes and should not be used to deploy production instances of Wordpress.
# Markdown description of this entry
readme: |-
### Create wordpresses
Before wordpresses will provision, the Crossplane control cluster must
be configured to connect to a provider (e.g. GCP, Azure, AWS).
Once a provider is configured, starting the process of creating a
Wordpress Application instance is easy.
cat <<EOF | kubectl apply -f -
apiVersion: wordpress.samples.apps.crossplane.io/v1alpha1
kind: WordpressInstance
metadata:
name: wordpressinstance-sample
EOF
The stack (and Crossplane) will take care of the rest.
# Maintainer names and emails.
maintainers:
- name: Daniel Suskin
email: daniel@upbound.io
# Owner names and emails.
owners:
- name: Daniel Suskin
email: daniel@upbound.io
# Human readable company name.
company: Upbound
# Type of package: Provider, Stack, or Application
packageType: Application
# Keywords that describe this application and help search indexing
keywords:
- "samples"
- "examples"
- "tutorials"
- "wordpress"
# Links to more information about the application (about page, source code, etc.)
website: "https://upbound.io"
source: "https://github.com/crossplane/app-wordpress"
# RBAC Roles will be generated permitting this stack to use all verbs on all
# resources in the groups listed below.
permissionScope: Namespaced
dependsOn:
- crd: "kubernetesclusters.compute.crossplane.io/v1alpha1"
- crd: "mysqlinstances.database.crossplane.io/v1alpha1"
- crd: "kubernetesapplications.workload.crossplane.io/v1alpha1"
- crd: "kubernetesapplicationresources.workload.crossplane.io/v1alpha1"
# License SPDX name: https://spdx.org/licenses/
license: Apache-2.0
```
**behavior.yaml** `[required]`
While the `app.yaml` is responsible for metadata, the `behavior.yaml` is
responsible for operations. It is where you tell Crossplane how to create
resources in the cluster when an instance of the [CustomResourceDefinition] that
represents your application is created. Take a look at the Wordpress
`behavior.yaml` for reference:
```yaml
source:
path: "helm-chart" # where the configuration data exists in Docker container
crd:
kind: WordpressInstance # the kind of the CustomResourceDefinition
apiVersion: wordpress.apps.crossplane.io/v1alpha1 # the apiVersion of the CustomResourceDefinition
engine:
type: helm3 # the configuration engine to be used (helm3 and kustomize are valid options)
```
**icon.svg**
The `icon.svg` file is a logo for your application.
**resources/** `[required]`
The `resources/` directory contains the CustomResourceDefinition (CRD) that
Crossplane watches to apply the configuration data you supply. For the Wordpress
application, this is `wordpress.apps.crossplane.io_wordpressinstances.crd.yaml`.
CRDs can be generated from `go` code using projects like [controller-tools], or
can be written by hand.
You can also supply metadata files for your CRD, which can be parsed by a user
interface. The files must match the name of the CRD kind for your application:
- `<your-kind>.icon.svg`: an image to be displayed for your application CRD
- `<your-kind>.resource.yaml`: a description of your application CRD
- `<your-kind>.ui-schema.yaml`: the configurable fields on your CRD that you
wish to be displayed in a UI
Crossplane will take these files and apply them as [annotations] on the
installed application. They can then be parsed by a user interface.
### Configuration Directory
The configuration directory contains the actual manifests for deploying your
application. In the case of Wordpress, this includes a `KubernetesApplication`
(`helm-chart/templates/app.yaml`), a `KubernetesCluster` claim
(`helm-chart/templates/cluster.yaml`), and a `MySQLInstance` claim
(`helm-chart/templates/database.yaml`). The configuration tool for the manifests
in the directory should match the `engine` field in your
`.registry/behavior.yaml`. The options for engines at this time are `helm3` and
`kustomize`. Crossplane will pass values from the `spec` of the application's
CRD as variables in the manifests. For instance, the `provisionPolicy` field in
the `spec` of the `WordpressInstance` CRD will be passed to the Helm chart
defined in the `helm-chart/` directory.
<!-- Named Links -->
[Helm]: https://helm.sh/
[Kustomize]: https://kustomize.io/
[Wordpress application]: https://github.com/crossplane/app-wordpress
[CustomResourceDefinition]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[controller-tools]: https://github.com/kubernetes-sigs/controller-tools
[annotations]: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/

@ -1,200 +0,0 @@
---
title: Observability Developer Guide
toc: true
weight: 1002
indent: true
---
# Observability Developer Guide
## Introduction
Observability is crucial to Crossplane users; both those operating Crossplane
and those using Crossplane to operate their infrastructure. Crossplane currently
approaches observability via Kubernetes events and structured logs. Timeseries
metrics are desired but [not yet implemented].
## Goals
In short, a non-admin user and an admin user should both be able to debug any
issues only by inspecting logs and events. There should be no need to rebuild
the Crossplane binary or to reach out to a Crossplane developer.
A user should be able to:
* Debug an issue without rebuilding the Crossplane binary
* Understand an issue without contacting a cluster admin
* Ask a cluster admin to check the logs for more details about the reason the
issue happened, if the details are not part of the error message
A cluster admin should be able to:
* Debug an issue without rebuilding the Crossplane binary
* Debug an issue only by looking at the logs
* Debug an issue without needing to contact a Crossplane developer
## Error reporting in the logs
Error reporting in the logs is mostly intended for consumption by Crossplane
cluster admins. A cluster admin should be able to debug any issue by inspecting
the logs, without needing to add more logs themselves or contact a Crossplane
developer. This means that logs should contain:
* Error messages, at either the info or debug level as contextually appropriate
* Any context leading up to an error, typically at debug level, so that the
errors can be debugged
## Error reporting as events
Error reporting as Kubernetes events is primarily aimed toward end-users of
Crossplane who are not cluster admins. Crossplane typically runs as a Kubernetes
pod, and thus it is unlikely that most users of Crossplane will have access to
its logs. [Events], on the other hand, are available as top-level Kubernetes
objects, and show up on the objects they relate to when running `kubectl describe`.
Events should be recorded in the following cases:
* A significant operation is taken on a resource
* The state of a resource is changed
* An error occurs
The events recorded in these cases can be thought of as forming an event log of
things that happen for the resources that Crossplane manages. Each event should
refer back to the relevant controller and resource, and use other fields of the
Event kind as appropriate.
More details about examples of how to interact with events can be found in the
guide to [debugging an application cluster].
## Choosing between methods of error reporting
There are many ways to report errors, such as:
* Metrics
* Events
* Logging
* Tracing
It can be confusing to figure out which one is appropriate in a given situation.
This section will try to offer advice and a mindset that can be used to help
make this decision.
Let's set the context by listing the different user scenarios where error
reporting may be consumed. Here are the typical scenarios as we imagine them:
1. A person **using** a system needs to figure out why things aren't working as
expected, and whether they made a mistake that they can correct.
2. A person **operating** a service needs to monitor the service's **health**,
both now and historically.
3. A person **debugging** a problem which happened in a **live environment**
(often an **operator** of the system) needs information to figure out what
happened.
4. A person **developing** the software wants to **observe** what is happening.
5. A person **debugging** the software in a **development environment**
(typically a **developer** of the system) wants to debug a problem (there is
a lot of overlap between this and the live environment debugging scenario).
The goal is to satisfy the users in all of the scenarios. We'll refer to the
scenarios by number.
The short version is: we should do whatever satisfies all of the scenarios.
Logging and events are the recommendations for satisfying the scenarios,
although they don't cover scenario 2.
The longer version is:
* Scenario 1 is best served by events in the context of Crossplane, since the
users may not have access to read logs or metrics, and even if they did, it
would be hard to relate them back to the event the user is trying to
understand.
* Scenario 2 is best served by metrics, because they can be aggregated and
understood as a whole. And because they can be used to track things over time.
* Scenario 3 is best served by logging that contains all the information
about and leading up to the event. Request-tracing systems are also useful for
this scenario.
* Scenario 4 is usually logs, maybe at a more verbose level than normal. But it
could be an attached debugger or some other type of tool. It could also be a
test suite.
* Scenario 5 is usually either logs, up to the highest imaginable verbosity, or
an attached debugging session. If there's a gap in reporting, it could involve
adding some print statements to get more logging.
As for the question of how to decide whether to log or not, we believe it helps
to try to visualize which of the scenarios the error or information in question
will be used for. We recommend starting with reporting as much information as
possible, but with configurable runtime behavior so that, for example, debugging
logs don't show up in production normally.
For the question of what constitutes an error, errors should be actionable by a
human. See the [Dave Cheney article] on this topic for some more discussion.
## In Practice
Crossplane provides two observability libraries as part of crossplane-runtime:
* [`event`] emits Kubernetes events.
* [`logging`] produces structured logs. Refer to its package documentation for
additional context on its API choices.
Keep the following in mind when using the above libraries:
* [Do] [not] use package level loggers or event recorders. Instantiate them in
`main()` and plumb them down to where they're needed.
* Each [`Reconciler`] implementation should use its own `logging.Logger` and
`event.Recorder`. Implementations are strongly encouraged to default to using
`logging.NewNopLogger()` and `event.NewNopRecorder()`, and accept functional
loggers and recorders via variadic options. See for example the [managed
resource reconciler].
* Each controller should use its name as its event recorder's name, and include
its name under the `controller` structured logging key. The controller's name
should be of the form `controllertype/resourcekind`, for example
`managed/cloudsqlinstance` or `stacks/stackdefinition`. Controller names
should always be lowercase.
* Logs and events should typically be emitted by the `Reconcile` method of the
`Reconciler` implementation; not by functions called by `Reconcile`. Author
the methods orchestrated by `Reconcile` as if they were a library; prefer
surfacing useful information for the `Reconciler` to log (for example by
[wrapping errors]) over plumbing loggers and event recorders down to
increasingly deeper layers of code.
* Almost nothing is worth logging at info level. When deciding which logging
level to use, consider a production deployment of Crossplane reconciling tens
or hundreds of managed resources. If in doubt, pick debug. You can easily
increase the log level later if it proves warranted.
* The above is true even for errors; consider the audience. Is this an error
only the Crossplane cluster operator can fix? Does it indicate a significant
degradation of Crossplane's functionality? If so, log it at info. If the error
pertains to a single Crossplane resource, emit an event instead.
* Always log errors under the structured logging key `error` (e.g.
`log.Debug("boom!", "error", err)`). Many logging implementations (including
Crossplane's) add context like stack traces for this key.
* Emit events liberally; they're rate limited and deduplicated.
* Follow [API conventions] when emitting events; ensure event reasons are unique
and `CamelCase`.
* Consider emitting events and logs when a terminal condition is encountered
(e.g. `Reconcile` returns) rather than logging logic flow, i.e. prefer one log line
that reads "encountered an error fooing the bar" over two log lines that read
"about to foo the bar" and "encountered an error". Recall that if the audience
is a developer debugging Crossplane they will be provided a stack trace with
file and line context when an error is logged.
* Consider including the `reconcile.Request`, and the resource's UID and
resource version (not API version) under the keys `request`, `uid`, and
`version`. Doing so allows log readers to determine what specific version of a
resource the log pertains to.
Finally, when in doubt, aim for consistency with existing Crossplane controller
implementations.
<!-- Named Links -->
[not yet implemented]: https://github.com/crossplane/crossplane/issues/314
[Events]: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.16/#event-v1-core
[debugging an application cluster]: https://kubernetes.io/docs/tasks/debug-application-cluster/
[Dave Cheney article]: https://dave.cheney.net/2015/11/05/lets-talk-about-logging
[`event`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/event
[`logging`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/logging
[Do]: https://peter.bourgon.org/go-best-practices-2016/#logging-and-instrumentation
[not]: https://dave.cheney.net/2017/01/23/the-package-level-logger-anti-pattern
[`Reconciler`]: https://godoc.org/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler
[managed resource reconciler]: https://github.com/crossplane/crossplane-runtime/blob/a6bb0/pkg/reconciler/managed/reconciler.go#L436
[wrapping errors]: https://godoc.org/github.com/pkg/errors#Wrap
[API conventions]: https://github.com/kubernetes/community/blob/09f55c6/contributors/devel/sig-architecture/api-conventions.md#events

@ -1,23 +0,0 @@
---
title: Contributing
toc: true
weight: 1000
---
# Contributing
The following documentation is for developers who wish to contribute to or
extend Crossplane. Please [open an
issue](https://github.com/crossplane/crossplane/issues/new) for any additional
documentation you would like to see in this section.
1. [Services Developer Guide]
2. [Observability Developer Guide]
# Experimental Projects
See [experimental] projects.
<!-- Named Link -->
[Services Developer Guide]: services_developer_guide.md
[Observability Developer Guide]: observability_developer_guide.md
[experimental]: experimental.md

@ -1,646 +0,0 @@
---
title: Managed Resource Developer Guide
toc: true
weight: 1001
indent: true
---
# Managed Resource Developer Guide
Crossplane allows you to manage infrastructure directly from Kubernetes. Each
infrastructure API resource that Crossplane orchestrates is known as a "managed
resource". This guide will walk through the process of adding support for a new
kind of managed resource to Crossplane.
## What Makes a Crossplane Infrastructure Resource
Crossplane builds atop Kubernetes's powerful architecture in which declarative
configuration, known as resources, are continually 'reconciled' with reality by
one or more controllers. A controller is an endless loop that:
1. Observes the desired state (the declarative configuration resource).
1. Observes the actual state (the thing said configuration resource represents).
1. Tries to make the actual state match the desired state.
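Stripped of all Kubernetes machinery, that loop can be sketched as follows (everything here is a toy illustration; real controllers observe an API server and an external system, not two maps):

```go
package main

import "fmt"

// reconcile drives actual state toward desired state. The maps stand in
// for declarative configuration and the external system, respectively.
func reconcile(desired, actual map[string]string) {
	// Create or update anything that is missing or different.
	for k, v := range desired {
		if actual[k] != v {
			actual[k] = v
		}
	}
	// Delete anything that is no longer desired.
	for k := range actual {
		if _, ok := desired[k]; !ok {
			delete(actual, k)
		}
	}
}

func main() {
	desired := map[string]string{"instance": "ONLINE"}
	actual := map[string]string{"orphan": "ONLINE"}
	reconcile(desired, actual)
	fmt.Println(actual) // map[instance:ONLINE]
}
```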
Typical Crossplane managed infrastructure consists of two configuration
resources and one controller. The GCP Provider's support for Google Cloud
Memorystore illustrates this. First, the configuration resources:
1. A [managed resource]. Managed resources are cluster scoped, high-fidelity
representations of a resource in an external system such as a cloud
provider's API. Managed resources are _non-portable_ across external systems
(i.e. cloud providers); they're tightly coupled to the implementation details
of the external resource they represent. Managed resources are defined by a
Provider. The GCP Provider's [`CloudMemorystoreInstance`] resource is an
example of a managed resource.
1. A provider. Providers enable access to an external system, typically by
indicating a Kubernetes Secret containing any credentials required to
authenticate to the system, as well as any other metadata required to
connect. Providers are cluster scoped, like managed resources and classes.
The GCP [`Provider`] is an example of a provider. Note that provider is a
somewhat overloaded term in the Crossplane ecosystem - it's also used to
refer to the controller manager for a particular cloud, for example
`provider-gcp`.
A managed resource is powered by a controller. This controller is responsible
for taking instances of the aforementioned high-fidelity managed resource kind
and reconciling them with an external system. The `CloudMemorystoreInstance`
controller watches for changes to `CloudMemorystoreInstance` resources and calls
Google's Cloud Memorystore API to create, update, or delete an instance as
necessary.
Crossplane does not require controllers to be written in any particular
language. The Kubernetes API server is our API boundary, so any process capable
of [watching the API server] and updating resources can be a Crossplane
controller.
## Getting Started
At the time of writing all Crossplane Services controllers are written in Go,
and built using [crossplane-runtime]. While it is possible to write a controller
using any language and tooling with a Kubernetes client, this set of tools is
the "[golden path]". They're well supported, broadly used, and provide a shared
language with the Crossplane community. This guide targets [crossplane-runtime
v0.9.0]. It assumes the reader is familiar with the Kubernetes [API Conventions]
and the [kubebuilder book].
## Defining Resource Kinds
Let's assume we want to add Crossplane support for your favourite cloud's
database-as-a-service. Your favourite cloud brands these instances as "Favourite
DB instances". Under the hood they're powered by the open source FancySQL
engine. We'll name the new managed resource kind `FavouriteDBInstance` and the
new resource claim `FancySQLInstance`.
The first step toward implementing a new managed service is to define the code
level schema of its configuration resources. These are referred to as
[resources], (resource) [kinds], and [objects] interchangeably. The kubebuilder
scaffolding is a good starting point for any new Crossplane API kind, whether
it will be a managed resource, resource class, or resource claim.
> Note that while Crossplane was originally derived from kubebuilder scaffolds
> its patterns have diverged somewhat. It is _possible_ to use kubebuilder to
> scaffold a resource, but the author must be careful to adapt said resource to
> Crossplane patterns. It may often be quicker to copy and modify a v1beta1 or
> above resource from the same provider repository, rather than using
> kubebuilder.
```console
# The resource claim.
kubebuilder create api \
--group example --version v1alpha1 --kind FancySQLInstance \
--resource=true --controller=false --namespaced=false
```
The above command should produce a scaffold similar to the below example:
```go
type FancySQLInstanceSpec struct {
// INSERT ADDITIONAL SPEC FIELDS - desired state of cluster
// Important: Run "make" to regenerate code after modifying this file
}
// FancySQLInstanceStatus defines the observed state of FancySQLInstance
type FancySQLInstanceStatus struct {
// INSERT ADDITIONAL STATUS FIELD - define observed state of cluster
// Important: Run "make" to regenerate code after modifying this file
}
// +kubebuilder:object:root=true
// FancySQLInstance is the Schema for the fancysqlinstances API
// +kubebuilder:resource:scope=Cluster
type FancySQLInstance struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec FancySQLInstanceSpec `json:"spec,omitempty"`
Status FancySQLInstanceStatus `json:"status,omitempty"`
}
```
Crossplane requires that these newly generated API type scaffolds be extended
with a set of struct fields, getters, and setters that are standard to all
Crossplane resource kinds. The getters and setter methods required to satisfy
crossplane-runtime interfaces are omitted from the below examples for brevity.
They can be added by hand, but new services are encouraged to use [`angryjet`]
to generate them automatically using a `//go:generate` comment per the
[`angryjet` documentation].
Note that in many cases a suitable provider will already exist. Frequently
adding support for a new managed service requires only the definition of the
managed resource itself.
### Managed Resource Kinds
Managed resources must:
* Satisfy crossplane-runtime's [`resource.Managed`] interface.
* Embed a [`ResourceStatus`] struct in their `Status` struct.
* Embed a [`ResourceSpec`] struct in their `Spec` struct.
* Embed a `Parameters` struct in their `Spec` struct.
* Use the `+kubebuilder:subresource:status` [comment marker].
* Use the `+kubebuilder:resource:scope=Cluster` [comment marker].
The `Parameters` struct should be a _high fidelity_ representation of the
writeable fields of the external resource's API. Put otherwise, if your
favourite cloud represents Favourite DB instances as a JSON object then
`FavouriteDBInstanceParameters` should marshal to something as close to that
JSON object as possible while still complying with Kubernetes API conventions.
For example, assume the external API object for Favourite DB instance was:
```json
{
"id": 42,
"name": "mycoolinstance",
"fanciness_level": 100,
"version": "2.3",
"status": "ONLINE",
"hostname": "cool.fcp.example.org"
}
```
Further assume the `id`, `status`, and `hostname` fields were output only, and
the `version` field was optional. The `FavouriteDBInstance` managed resource
should look as follows:
```go
// FavouriteDBInstanceParameters define the desired state of an FavouriteDB
// instance. Most fields map directly to an Instance:
// https://favourite.example.org/api/v1/db#Instance
type FavouriteDBInstanceParameters struct {
// We're still working on a standard for naming external resources. See
// https://github.com/crossplane/crossplane/issues/624 for context.
// Name of this instance.
Name string `json:"name"`
// Note that fanciness_level becomes fancinessLevel below. Kubernetes API
// conventions trump cloud provider fidelity.
// FancinessLevel specifies exactly how fancy this instance is.
FancinessLevel int `json:"fancinessLevel"`
// Version specifies what version of FancySQL this instance will run.
// +optional
Version *string `json:"version,omitempty"`
}
// A FavouriteDBInstanceSpec defines the desired state of a FavouriteDBInstance.
type FavouriteDBInstanceSpec struct {
runtimev1alpha1.ResourceSpec `json:",inline"`
ForProvider FavouriteDBInstanceParameters `json:"forProvider"`
}
// A FavouriteDBInstanceStatus represents the observed state of a
// FavouriteDBInstance.
type FavouriteDBInstanceStatus struct {
runtimev1alpha1.ResourceStatus `json:",inline"`
// Note that we add the three "output only" fields here in the status,
// instead of the parameters. We want this representation to be high
// fidelity just like the parameters.
// ID of this instance.
ID int `json:"id,omitempty"`
// Status of this instance.
Status string `json:"status,omitempty"`
// Hostname of this instance.
Hostname string `json:"hostname,omitempty"`
}
// A FavouriteDBInstance is a managed resource that represents a Favourite DB
// instance.
// +kubebuilder:subresource:status
type FavouriteDBInstance struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec FavouriteDBInstanceSpec `json:"spec"`
Status FavouriteDBInstanceStatus `json:"status,omitempty"`
}
```
Note that Crossplane uses the GoDoc strings of API kinds to generate user facing
API documentation. __Document all fields__ and prefer GoDoc that assumes the
reader is running `kubectl explain`, or reading an API reference, not reading
the code. Refer to the [Managed Resource API Patterns] one pager for more detail
on authoring high fidelity managed resources.
### Provider Kinds
You'll typically only need to add a new Provider kind if you're adding support
for a new infrastructure provider.
Providers must:
* Be named exactly `Provider`.
* Embed a [`ProviderSpec`] struct in their `Spec` struct.
* Use the `+kubebuilder:resource:scope=Cluster` [comment marker].
The Favourite Cloud `Provider` would look as follows. Note that the cloud to
which it belongs should be indicated by its API group, i.e. its API Version
would be `favouritecloud.crossplane.io/v1alpha1` or similar.
```go
// A ProviderSpec defines the desired state of a Provider.
type ProviderSpec struct {
runtimev1alpha1.ProviderSpec `json:",inline"`
// Information required outside of the Secret referenced in the embedded
// runtimev1alpha1.ProviderSpec that is required to authenticate to the provider.
// ProjectID is used as an example here.
ProjectID string `json:"projectID"`
}
// A Provider configures a Favourite Cloud 'provider', i.e. a connection to a
// particular Favourite Cloud project using a particular Favourite Cloud service
// account.
type Provider struct {
metav1.TypeMeta `json:",inline"`
metav1.ObjectMeta `json:"metadata,omitempty"`
Spec ProviderSpec `json:"spec"`
}
```
### Finishing Touches
At this point we've defined the managed resource necessary to start
building controllers. Before moving on to the controllers:
* Add any kubebuilder [comment markers] that may be useful for your resource.
Comment markers can be used to validate input, or add additional columns to
the standard `kubectl get` output, among other things.
* Run `make reviewable` to generate Custom Resource Definitions and additional
helper methods for your new resource kinds.
* Make sure any package documentation (i.e. `// Package v1alpha1...` GoDoc,
including package level comment markers) are in a file named `doc.go`.
kubebuilder adds them to `groupversion_info.go`, but several code generation
tools only check `doc.go`.
Finally, add convenience [`GroupVersionKind`] variables for each new resource
kind. These are typically added to either `register.go` or
`groupversion_info.go` depending on which version of kubebuilder scaffolded the
API type:
```go
// FancySQLInstance type metadata.
var (
FancySQLInstanceKind = reflect.TypeOf(FancySQLInstance{}).Name()
FancySQLInstanceKindAPIVersion = FancySQLInstanceKind + "." + GroupVersion.String()
FancySQLInstanceGroupVersionKind = GroupVersion.WithKind(FancySQLInstanceKind)
)
```
Consider opening a draft pull request and asking a Crossplane maintainer for
review before you start work on the controller!
## Adding Controllers
Crossplane controllers, like those scaffolded by kubebuilder, are built around
the [controller-runtime] library. controller-runtime flavoured controllers
encapsulate most of their domain-specific logic in a [`reconcile.Reconciler`]
implementation. Most Crossplane controllers are of the kinds described under
[What Makes a Crossplane Infrastructure Resource]. These controller kinds are
similar enough across implementations that [crossplane-runtime] provides
'default' reconcilers. These reconcilers encode what the Crossplane community
has learned about managing external systems and narrow the problem space from
reconciling a Kubernetes resource kind with an arbitrary system down to
Crossplane-specific tasks.
crossplane-runtime provides the following `reconcile.Reconcilers`:
* The [`managed.Reconciler`] reconciles managed resources with external systems
by instantiating a client of the external API and using it to create, update,
or delete the external resource as necessary.
Crossplane controllers typically differ sufficiently from those scaffolded by
kubebuilder that there is little value in using kubebuilder to generate a
controller scaffold.
### Managed Resource Controllers
Managed resource controllers should use [`managed.NewReconciler`] to wrap a
managed-resource specific implementation of [`managed.ExternalConnecter`]. Parts
of `managed.Reconciler`'s behaviour are customisable; refer to the
[`managed.NewReconciler`] GoDoc for a list of options. The following is an
example controller for the `FavouriteDBInstance` managed resource we defined
earlier:
```go
import (
"context"
"fmt"
"strings"
"github.com/pkg/errors"
corev1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/types"
ctrl "sigs.k8s.io/controller-runtime"
"sigs.k8s.io/controller-runtime/pkg/client"
// An API client of the hypothetical FavouriteDB service.
"github.com/fcp-sdk/v1/services/database"
runtimev1alpha1 "github.com/crossplane/crossplane-runtime/apis/core/v1alpha1"
"github.com/crossplane/crossplane-runtime/pkg/meta"
"github.com/crossplane/crossplane-runtime/pkg/resource"
"github.com/crossplane/crossplane-runtime/pkg/reconciler/managed"
"github.com/crossplane/provider-fcp/apis/database/v1alpha3"
fcpv1alpha3 "github.com/crossplane/provider-fcp/apis/v1alpha3"
)
type FavouriteDBInstanceController struct{}
// SetupWithManager instantiates a new controller using a managed.Reconciler
// configured to reconcile FavouriteDBInstances using an ExternalClient produced by
// connecter, which satisfies the ExternalConnecter interface.
func (c *FavouriteDBInstanceController) SetupWithManager(mgr ctrl.Manager) error {
return ctrl.NewControllerManagedBy(mgr).
Named(strings.ToLower(fmt.Sprintf("%s.%s", v1alpha3.FavouriteDBInstanceKind, v1alpha3.Group))).
For(&v1alpha3.FavouriteDBInstance{}).
Complete(managed.NewReconciler(mgr,
resource.ManagedKind(v1alpha3.FavouriteDBInstanceGroupVersionKind),
managed.WithExternalConnecter(&connecter{client: mgr.GetClient()})))
}
// Connecter satisfies the resource.ExternalConnecter interface.
type connecter struct{ client client.Client }
// Connect to the supplied resource.Managed (presumed to be a
// FavouriteDBInstance) by using the Provider it references to create a new
// database client.
func (c *connecter) Connect(ctx context.Context, mg resource.Managed) (managed.ExternalClient, error) {
// Assert that resource.Managed we were passed in fact contains a
// FavouriteDBInstance. We told NewControllerManagedBy that this was a
// controller For FavouriteDBInstance, so something would have to go
// horribly wrong for us to encounter another type.
i, ok := mg.(*v1alpha3.FavouriteDBInstance)
if !ok {
return nil, errors.New("managed resource is not a FavouriteDBInstance")
}
// Get the Provider referenced by the FavouriteDBInstance.
p := &fcpv1alpha3.Provider{}
if err := c.client.Get(ctx, meta.NamespacedNameOf(i.Spec.ProviderReference), p); err != nil {
return nil, errors.Wrap(err, "cannot get Provider")
}
// Get the Secret referenced by the Provider.
s := &corev1.Secret{}
n := types.NamespacedName{Namespace: p.Namespace, Name: p.Spec.Secret.Name}
if err := c.client.Get(ctx, n, s); err != nil {
return nil, errors.Wrap(err, "cannot get Provider secret")
}
// Create and return a new database client using the credentials read from
// our Provider's Secret.
client, err := database.NewClient(ctx, s.Data[p.Spec.Secret.Key])
return &external{client: client}, errors.Wrap(err, "cannot create client")
}
// External satisfies the resource.ExternalClient interface.
type external struct{ client database.Client }
// Observe the existing external resource, if any. The managed.Reconciler
// calls Observe in order to determine whether an external resource needs to be
// created, updated, or deleted.
func (e *external) Observe(ctx context.Context, mg resource.Managed) (managed.ExternalObservation, error) {
i, ok := mg.(*v1alpha3.FavouriteDBInstance)
if !ok {
return managed.ExternalObservation{}, errors.New("managed resource is not a FavouriteDBInstance")
}
// Use our FavouriteDB API client to get an up to date view of the external
// resource.
existing, err := e.client.GetInstance(ctx, i.Spec.Name)
// If we encounter an error indicating the external resource does not exist
// we want to let the managed.Reconciler know so it can create it.
if database.IsNotFound(err) {
return managed.ExternalObservation{ResourceExists: false}, nil
}
// Any other errors are wrapped (as is good Go practice) and returned to the
// managed.Reconciler. It will update the "Synced" status condition
// of the managed resource to reflect that the most recent reconcile failed
// and ensure the reconcile is reattempted after a brief wait.
if err != nil {
return managed.ExternalObservation{}, errors.Wrap(err, "cannot get instance")
}
// The external resource exists. Copy any output-only fields to their
// corresponding entries in our status field.
i.Status.Status = existing.GetStatus()
i.Status.Hostname = existing.GetHostname()
i.Status.ID = existing.GetID()
// Update our "Ready" status condition to reflect the status of the external
// resource. Most managed resources use the below well known reasons that
// the "Ready" status may be true or false, but managed resource authors
// are welcome to define and use their own.
switch i.Status.Status {
case database.StatusOnline:
// If the resource is available we also want to mark it as bindable to
// resource claims.
resource.SetBindable(i)
i.SetConditions(runtimev1alpha1.Available())
case database.StatusCreating:
i.SetConditions(runtimev1alpha1.Creating())
case database.StatusDeleting:
i.SetConditions(runtimev1alpha1.Deleting())
}
// Finally, we report what we know about the external resource. In this
// hypothetical case FancinessLevel is the only field that can be updated
// after creation time, so the resource does not need to be updated if
// the actual fanciness level matches our desired fanciness level. Any
// ConnectionDetails we return will be published to the managed resource's
// connection secret if it specified one.
o := managed.ExternalObservation{
ResourceExists: true,
ResourceUpToDate: existing.GetFancinessLevel() == i.Spec.FancinessLevel,
ConnectionDetails: managed.ConnectionDetails{
runtimev1alpha1.ResourceCredentialsSecretUserKey: []byte(existing.GetUsername()),
runtimev1alpha1.ResourceCredentialsSecretEndpointKey: []byte(existing.GetHostname()),
},
}
return o, nil
}
// Create a new external resource based on the specification of our managed
// resource. managed.Reconciler only calls Create if Observe reported
// that the external resource did not exist.
func (e *external) Create(ctx context.Context, mg resource.Managed) (managed.ExternalCreation, error) {
i, ok := mg.(*v1alpha3.FavouriteDBInstance)
if !ok {
return managed.ExternalCreation{}, errors.New("managed resource is not a FavouriteDBInstance")
}
// Indicate that we're about to create the instance. Remember ExternalClient
// authors can use a bespoke condition reason here in cases where Creating
// doesn't make sense.
i.SetConditions(runtimev1alpha1.Creating())
// Create must return any connection details that are set or returned only
// at creation time. The managed.Reconciler will merge any details
// with those returned during the Observe phase.
password := database.GeneratePassword()
cd := managed.ConnectionDetails{runtimev1alpha1.ResourceCredentialsSecretPasswordKey: []byte(password)}
// Create a new instance.
new := database.Instance{Name: i.Spec.Name, FancinessLevel: i.Spec.FancinessLevel, Version: i.Spec.Version}
err := e.client.CreateInstance(ctx, new, password)
// Note that we use resource.Ignore to squash any error that indicates the
// external resource already exists. Create implementations must not return
// an error if asked to create a resource that already exists. Real managed
// resource controllers are advised to avoid unintentionally 'adopting' an
// existing, unrelated external resource, per
// https://github.com/crossplane/crossplane-runtime/issues/27
return managed.ExternalCreation{ConnectionDetails: cd}, errors.Wrap(resource.Ignore(database.IsExists, err), "cannot create instance")
}
// Update the existing external resource to match the specifications of our
// managed resource. managed.Reconciler only calls Update if Observe
// reported that the external resource was not up to date.
func (e *external) Update(ctx context.Context, mg resource.Managed) (managed.ExternalUpdate, error) {
i, ok := mg.(*v1alpha3.FavouriteDBInstance)
if !ok {
return managed.ExternalUpdate{}, errors.New("managed resource is not a FavouriteDBInstance")
}
// Recall that FancinessLevel is the only field that we _can_ update.
new := database.Instance{Name: i.Spec.Name, FancinessLevel: i.Spec.FancinessLevel}
err := e.client.UpdateInstance(ctx, new)
return managed.ExternalUpdate{}, errors.Wrap(err, "cannot update instance")
}
// Delete the external resource. managed.Reconciler only calls Delete
// when a managed resource with the 'Delete' reclaim policy has been deleted.
func (e *external) Delete(ctx context.Context, mg resource.Managed) error {
i, ok := mg.(*v1alpha3.FavouriteDBInstance)
if !ok {
return errors.New("managed resource is not a FavouriteDBInstance")
}
// Indicate that we're about to delete the instance.
i.SetConditions(runtimev1alpha1.Deleting())
// Delete the instance.
err := e.client.DeleteInstance(ctx, i.Spec.Name)
// Note that we use resource.Ignore to squash any error that indicates the
// external resource does not exist. Delete implementations must not return
// an error when asked to delete a non-existent external resource.
return errors.Wrap(resource.Ignore(database.IsNotFound, err), "cannot delete instance")
}
```
### Wrapping Up
Once all your controllers are in place you'll want to test them. Note that most
projects under the [crossplane org] [favor] table-driven tests that use Go's
standard library `testing` package over kubebuilder's Ginkgo-based tests. Please
do not add or proliferate Ginkgo-based tests.
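"Table-driven" here means declaring named cases in a map and exercising them in one loop. A self-contained sketch of the shape (a real test would live in a `_test.go` file and use the `testing` package's `t.Run` subtests and `t.Errorf` rather than plain prints; `label` is an invented stand-in for the function under test):

```go
package main

import "fmt"

// label is a hypothetical stand-in for the function under test.
func label(level int) string {
	if level >= 9000 {
		return "over-nine-thousand"
	}
	return "modest"
}

func main() {
	// Each case gets a descriptive name; adding coverage is one map entry,
	// not a new test function.
	cases := map[string]struct {
		level int
		want  string
	}{
		"Modest":    {level: 1, want: "modest"},
		"VeryFancy": {level: 9001, want: "over-nine-thousand"},
	}
	failures := 0
	for name, tc := range cases {
		if got := label(tc.level); got != tc.want {
			fmt.Printf("%s: want %q, got %q\n", name, tc.want, got)
			failures++
		}
	}
	fmt.Println("failures:", failures) // prints "failures: 0"
}
```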
Finally, don't forget to plumb any newly added resource kinds and controllers up
to your controller manager. Simple providers may do this for each type within
`main()`, but more complicated providers typically take an approach in which
each package exposes an `AddToScheme` (for resource kinds) or `SetupWithManager`
(for controllers) function that invokes the same function within its child
packages, resulting in a `main.go` like:
```go
import (
"time"
"sigs.k8s.io/controller-runtime/pkg/client/config"
"sigs.k8s.io/controller-runtime/pkg/manager"
"sigs.k8s.io/controller-runtime/pkg/manager/signals"
crossplaneapis "github.com/crossplane/crossplane/apis"
fcpapis "github.com/crossplane/provider-fcp/apis"
"github.com/crossplane/provider-fcp/pkg/controller"
)
func main() {
cfg, err := config.GetConfig()
if err != nil {
panic(err)
}
mgr, err := manager.New(cfg, manager.Options{SyncPeriod: 1 * time.Hour})
if err != nil {
panic(err)
}
if err := crossplaneapis.AddToScheme(mgr.GetScheme()); err != nil {
panic(err)
}
if err := fcpapis.AddToScheme(mgr.GetScheme()); err != nil {
panic(err)
}
if err := controller.SetupWithManager(mgr); err != nil {
panic(err)
}
panic(mgr.Start(signals.SetupSignalHandler()))
}
```
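The aggregation pattern behind `fcpapis.AddToScheme` can be sketched with the standard library alone. Here `scheme`, the child `AddToScheme` function, and the kind name are all stand-ins for the kubebuilder-generated `SchemeBuilder` machinery a real provider would use:

```go
package main

import "fmt"

// scheme stands in for Kubernetes' *runtime.Scheme.
type scheme struct{ kinds []string }

type addToScheme func(*scheme) error

// databaseV1alpha3AddToScheme stands in for a child API package's generated
// AddToScheme function, registering its kinds with the scheme.
func databaseV1alpha3AddToScheme(s *scheme) error {
	s.kinds = append(s.kinds, "FavouriteDBInstance.v1alpha3")
	return nil
}

// AddToSchemes aggregates every child package's contribution; each new API
// group appends its own function here.
var AddToSchemes = []addToScheme{databaseV1alpha3AddToScheme}

// AddToScheme adds all resources defined in the project to the supplied
// scheme by invoking each child package's function in turn.
func AddToScheme(s *scheme) error {
	for _, f := range AddToSchemes {
		if err := f(s); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	s := &scheme{}
	if err := AddToScheme(s); err != nil {
		panic(err)
	}
	fmt.Println(s.kinds) // prints "[FavouriteDBInstance.v1alpha3]"
}
```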
## In Review
In this guide we walked through the process of defining the resource kinds and
controllers necessary to build support for new managed infrastructure, possibly
even a completely new infrastructure provider. Please do not hesitate to [reach
out] to the Crossplane maintainers and community for help designing and
implementing support for new managed services. We would highly value any
feedback you may have about the development process!
<!-- Named Links -->
[What Makes a Crossplane Managed Service]: #what-makes-a-crossplane-managed-service
[managed resource]: concepts.md#managed-resource
[dynamic provisioning]: concepts.md#dynamic-and-static-provisioning
[`CloudMemorystoreInstance`]: https://github.com/crossplane/provider-gcp/blob/85a6ed3c669a021f1d61be51b2cbe2714b0bc70b/apis/cache/v1beta1/cloudmemorystore_instance_types.go#L184
[`Provider`]: https://github.com/crossplane/provider-gcp/blob/85a6ed3c669a021f1d61be51b2cbe2714b0bc70b/apis/v1alpha3/types.go#L41
[watching the API server]: https://kubernetes.io/docs/reference/using-api/api-concepts/#efficient-detection-of-changes
[controller-runtime]: https://github.com/kubernetes-sigs/controller-runtime
[crossplane-runtime]: https://github.com/crossplane/crossplane-runtime/
[crossplane-runtime v0.9.0]: https://github.com/crossplane/crossplane-runtime/releases/tag/v0.9.0
[golden path]: https://charity.wtf/2018/12/02/software-sprawl-the-golden-path-and-scaling-teams-with-agency/
[API Conventions]: https://github.com/kubernetes/community/blob/c6e1e89a/contributors/devel/sig-architecture/api-conventions.md
[kubebuilder book]: https://book.kubebuilder.io/
[resources]: https://kubebuilder.io/cronjob-tutorial/gvks.html#kinds-and-resources
[kinds]: https://kubebuilder.io/cronjob-tutorial/gvks.html#kinds-and-resources
[objects]: https://kubernetes.io/docs/concepts/#kubernetes-objects
[comment marker]: https://kubebuilder.io/reference/markers.html
[comment markers]: https://kubebuilder.io/reference/markers.html
[`resource.Managed`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/resource#Managed
[`managed.Reconciler`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/reconciler/managed#Reconciler
[`managed.NewReconciler`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/reconciler/managed#NewReconciler
[`managed.ExternalConnecter`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/reconciler/managed#ExternalConnecter
[`managed.ExternalClient`]: https://godoc.org/github.com/crossplane/crossplane-runtime/pkg/reconciler/managed#ExternalClient
[`ResourceSpec`]: https://godoc.org/github.com/crossplane/crossplane-runtime/apis/core/v1alpha1#ResourceSpec
[`ResourceStatus`]: https://godoc.org/github.com/crossplane/crossplane-runtime/apis/core/v1alpha1#ResourceStatus
[`ProviderSpec`]: https://godoc.org/github.com/crossplane/crossplane-runtime/apis/core/v1alpha1#ProviderSpec
[opening a Crossplane issue]: https://github.com/crossplane/crossplane/issues/new/choose
[`GroupVersionKind`]: https://godoc.org/k8s.io/apimachinery/pkg/runtime/schema#GroupVersionKind
[`reconcile.Reconciler`]: https://godoc.org/sigs.k8s.io/controller-runtime/pkg/reconcile#Reconciler
[favor]: https://github.com/crossplane/crossplane/issues/452
[reach out]: https://github.com/crossplane/crossplane#get-involved
[crossplane org]: https://github.com/crossplane
[`angryjet`]: https://github.com/crossplane/crossplane-tools
[Managed Resource API Patterns]: ../design/one-pager-managed-resource-api-design.md
[Crossplane CLI]: https://github.com/crossplane/crossplane-cli#quick-start-stacks
[`angryjet` documentation]: https://github.com/crossplane/crossplane-tools/blob/master/README.md
---
title: FAQ
toc: true
weight: 1200
---
# Frequently Asked Questions (FAQs)
### Where did the name Crossplane come from?
Crossplane is the fusing of “cross-cloud control plane”. We wanted to use a noun
that refers to the entity responsible for connecting different cloud providers
and acts as a control plane across them. “Cross” implies “cross-cloud” and
“plane” brings in “control plane”.
### What's up with popsicle?
We believe in a multi-flavor cloud.
### Related Projects
See [Related Projects].
[Related Projects]: related_projects.md
---
title: Related Projects
toc: true
weight: 1201
indent: true
---
# Related Projects
While there are many projects that address similar issues, none of them
encapsulate the full use case that Crossplane addresses. This list is not
exhaustive and is not meant to provide a deep analysis of the following
projects, but instead to motivate why Crossplane was created.
## Open Service Broker and Service Catalog
The [Open Service Broker] and the [Kubernetes Service Catalog] are able to
dynamically provision cloud services from Kubernetes. As a result, they share
similar goals with Crossplane. However, the Service Broker model does not have the
ability to define, compose, and publish your own infrastructure resources to
the Kubernetes API in a no-code way. Crossplane goes further by enabling
infrastructure operators to hide infrastructure complexity and include policy
guardrails, with a team-centric approach and a strong separation of concerns,
so applications can easily and safely consume the infrastructure they need,
using any tool that works with the Kubernetes API. Solutions like the [GCP
implementation of Open Service Broker][GCP OSB] have been deprecated in favor
of a more Kubernetes-native solution, but one that is Google-specific and
closed source.
## GCP Config Connector
The [GCP Config Connector] is the GCP replacement for Open Service Broker, and
implements a set of Kubernetes controllers that are able to provision managed
services in GCP. It defines a set of CRDs for managed services like CloudSQL,
and controllers that can provision them via their cloud APIs. It is similar to
Crossplane in that it can provision managed services in GCP. Crossplane goes
further by enabling you to provision managed services from any cloud
provider and to define, compose, and publish your own
infrastructure resources in a no-code way. Crossplane supports a team-centric
approach with a strong separation of concerns that enables applications to
easily and safely consume the infrastructure they need, using any tool that
works with the Kubernetes API. GCP Config Connector is closed-source.
## AWS Service Operator
The [AWS Service Operator] is a recent project that implements a set of
Kubernetes controllers that are able to provision managed services in AWS. It
defines a set of CRDs for managed services like DynamoDB, and controllers that
can provision them via AWS CloudFormation. It is similar to Crossplane in that
it can provision managed services in AWS. Crossplane goes further by
enabling you to provision managed services from any cloud provider and to
define, compose, and publish your own infrastructure API types in
Kubernetes in a no-code way. Crossplane supports a team-centric approach with a
strong separation of concerns that enables applications to easily and safely
consume the infrastructure they need, using any tool that works with the
Kubernetes API.
## AWS CloudFormation, GCP Deployment Manager, and Others
These products offer a declarative model for deploying and provisioning
infrastructure in each of the respective cloud providers. They only work for
one cloud provider, are generally closed source, and offer little or no
extensibility points, let alone being able to extend the Kubernetes API to
provide your own infrastructure abstractions in a no-code way. We have
considered using some of these products as a way to implement resource
controllers in Crossplane. These projects use an Infrastructure as Code
approach to management, while Crossplane offers an API-driven control plane.
## Terraform and Pulumi
[Terraform] and [Pulumi] are tools for provisioning infrastructure across cloud
providers. Each offers a declarative configuration language with support for
templating, composability, referential integrity, and dependency management.
Terraform can declaratively manage any compatible API and perform changes when
the tool is run by a human or in a deployment pipeline. Terraform is an
Infrastructure as Code tool, while Crossplane offers an API-driven control plane.
<!-- Named Links -->
[Open Service Broker]: https://www.openservicebrokerapi.org/
[Kubernetes Service Catalog]: https://kubernetes.io/docs/concepts/extend-kubernetes/service-catalog/
[GCP OSB]: https://cloud.google.com/kubernetes-engine/docs/concepts/google-cloud-platform-service-broker
[GCP Config Connector]: https://cloud.google.com/config-connector/docs/overview
[AWS Service Operator]: https://github.com/awslabs/aws-service-operator
[Terraform]: https://www.terraform.io/
[Pulumi]: https://www.pulumi.com/
---
title: Install & Configure
toc: true
weight: 2
indent: true
---
# Install & Configure Crossplane
Crossplane can be easily installed into any existing Kubernetes cluster using
the regularly published Helm chart. The Helm chart contains all the custom
resources and controllers needed to deploy and configure Crossplane.
See [Install] and [Configure] docs for installing alternate versions and more
detailed instructions.
## Get a Kubernetes Cluster
<ul class="nav nav-tabs">
<li class="active"><a href="#setup-mac-brew" data-toggle="tab">macOS via Homebrew</a></li>
<li><a href="#setup-mac-linux" data-toggle="tab">macOS / Linux</a></li>
<li><a href="#setup-windows" data-toggle="tab">Windows</a></li>
</ul>
<br>
<div class="tab-content">
<div class="tab-pane fade in active" id="setup-mac-brew" markdown="1">
For macOS via Homebrew use the following:
```
brew upgrade
brew install kind
brew install kubectl
brew install helm
kind create cluster --image kindest/node:v1.16.9 --wait 5m
```
</div>
<div class="tab-pane fade" id="setup-mac-linux" markdown="1">
For macOS / Linux use the following:
* [Kubernetes cluster]
* [Kind]
* [Minikube], minimum version `v0.28+`
* etc.
* [Helm], minimum version `v2.12.0+`.
* For Helm 2, make sure Tiller is initialized with sufficient permissions to
work on `crossplane-system` namespace.
</div>
<div class="tab-pane fade" id="setup-windows" markdown="1">
For Windows use the following:
* [Kubernetes cluster]
* [Kind]
* [Minikube], minimum version `v0.28+`
* etc.
* [Helm], minimum version `v2.12.0+`.
* For Helm 2, make sure Tiller is initialized with sufficient permissions to
work on `crossplane-system` namespace.
</div>
</div>
## Install Crossplane
<ul class="nav nav-tabs">
<li class="active"><a href="#install-tab-helm3" data-toggle="tab">Helm 3 (alpha)</a></li>
<li><a href="#install-tab-helm2" data-toggle="tab">Helm 2 (alpha)</a></li>
<li><a href="#install-tab-helm3-master" data-toggle="tab">Helm 3 (master)</a></li>
<li><a href="#install-tab-helm2-master" data-toggle="tab">Helm 2 (master)</a></li>
</ul>
<br>
<div class="tab-content">
<div class="tab-pane fade in active" id="install-tab-helm3" markdown="1">
Use Helm 3 to install the latest official `alpha` release of Crossplane, suitable for community use and testing:
```
kubectl create namespace crossplane-system
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
# Kubernetes 1.15 and newer versions
helm install crossplane --namespace crossplane-system crossplane-alpha/crossplane
# Kubernetes 1.14 and older versions
helm install crossplane --namespace crossplane-system crossplane-alpha/crossplane --disable-openapi-validation
```
</div>
<div class="tab-pane fade" id="install-tab-helm2" markdown="1">
Use Helm 2 to install the latest official `alpha` release of Crossplane, suitable for community use and testing:
```
kubectl create namespace crossplane-system
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane
```
</div>
<div class="tab-pane fade" id="install-tab-helm3-master" markdown="1">
Use Helm 3 to install the latest `master` pre-release version of Crossplane:
```
kubectl create namespace crossplane-system
helm repo add crossplane-master https://charts.crossplane.io/master/
helm search repo crossplane-master --devel
# Kubernetes 1.15 and newer versions
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --devel
# Kubernetes 1.14 and older versions
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --devel --disable-openapi-validation
```
For example:
```
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version 0.11.0-rc.100.gbc5d311 --devel
```
</div>
<div class="tab-pane fade" id="install-tab-helm2-master" markdown="1">
Use Helm 2 to install the latest `master` pre-release version of Crossplane:
```
kubectl create namespace crossplane-system
helm repo add crossplane-master https://charts.crossplane.io/master/
helm search crossplane-master
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version <version>
```
For example:
```
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version 0.11.0-rc.100.gbc5d311
```
</div>
</div>
## Install Crossplane CLI
The [Crossplane CLI] adds a set of `kubectl crossplane` commands to simplify common tasks:
```
curl -sL https://raw.githubusercontent.com/crossplane/crossplane-cli/master/bootstrap.sh | bash
```
## Select Provider
Install and configure a provider for Crossplane to use for infrastructure provisioning:
<ul class="nav nav-tabs">
<li class="active"><a href="#provider-tab-aws" data-toggle="tab">AWS</a></li>
<li><a href="#provider-tab-gcp" data-toggle="tab">GCP</a></li>
<li><a href="#provider-tab-azure" data-toggle="tab">Azure</a></li>
<li><a href="#provider-tab-alibaba" data-toggle="tab">Alibaba</a></li>
</ul>
<br>
<div class="tab-content">
<div class="tab-pane fade in active" id="provider-tab-aws" markdown="1">
### Install AWS Provider
```
PACKAGE=crossplane/provider-aws:v0.10.0
NAME=provider-aws
kubectl crossplane package install --cluster --namespace crossplane-system ${PACKAGE} ${NAME}
```
### Get AWS Account Keyfile
Using an AWS account with permissions to manage RDS databases:
```
AWS_PROFILE=default && echo -e "[default]\naws_access_key_id = $(aws configure get aws_access_key_id --profile $AWS_PROFILE)\naws_secret_access_key = $(aws configure get aws_secret_access_key --profile $AWS_PROFILE)" > creds.conf
```
### Create a Provider Secret
```
kubectl create secret generic aws-creds -n crossplane-system --from-file=key=./creds.conf
```
### Configure the Provider
Create the following `provider.yaml`:
```
apiVersion: aws.crossplane.io/v1alpha3
kind: Provider
metadata:
name: aws-provider
spec:
region: us-west-2
credentialsSecretRef:
namespace: crossplane-system
name: aws-creds
key: key
```
Then apply it:
```
kubectl apply -f provider.yaml
```
</div>
<div class="tab-pane fade" id="provider-tab-gcp" markdown="1">
### Install GCP Provider
```
PACKAGE=crossplane/provider-gcp:v0.10.0
NAME=provider-gcp
kubectl crossplane package install --cluster --namespace crossplane-system ${PACKAGE} ${NAME}
```
### Get GCP Account Keyfile
```
# replace this with your own gcp project id and service account name
PROJECT_ID=my-project
SA_NAME=my-service-account-name
# create service account
SA="${SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud iam service-accounts create $SA_NAME --project $PROJECT_ID
# enable cloud API
SERVICE="sqladmin.googleapis.com"
gcloud services enable $SERVICE --project $PROJECT_ID
# grant access to cloud API
ROLE="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding --role="$ROLE" $PROJECT_ID --member "serviceAccount:$SA"
# create service account keyfile
gcloud iam service-accounts keys create creds.json --project $PROJECT_ID --iam-account $SA
```
### Create a Provider Secret
```
kubectl create secret generic gcp-creds -n crossplane-system --from-file=key=./creds.json
```
### Configure the Provider
Create the following `provider.yaml`:
```
apiVersion: gcp.crossplane.io/v1alpha3
kind: Provider
metadata:
name: gcp-provider
spec:
# replace this with your own gcp project id
projectID: my-project
credentialsSecretRef:
namespace: crossplane-system
name: gcp-creds
key: key
```
Then apply it:
```
kubectl apply -f provider.yaml
```
</div>
<div class="tab-pane fade" id="provider-tab-azure" markdown="1">
### Install Azure Provider
```
PACKAGE=crossplane/provider-azure:v0.10.0
NAME=provider-azure
kubectl crossplane package install --cluster --namespace crossplane-system ${PACKAGE} ${NAME}
```
### Get Azure Principal Keyfile
```
# create service principal with Owner role
az ad sp create-for-rbac --sdk-auth --role Owner > "creds.json"
# add Azure Active Directory permissions
AZURE_CLIENT_ID=$(jq -r ".clientId" < "./creds.json")
RW_ALL_APPS=1cda74f2-2616-4834-b122-5cb1b07f8a59
RW_DIR_DATA=78c8a3c8-a07e-4b9e-af1b-b5ccab50a175
AAD_GRAPH_API=00000002-0000-0000-c000-000000000000
az ad app permission add --id "${AZURE_CLIENT_ID}" --api ${AAD_GRAPH_API} --api-permissions ${RW_ALL_APPS}=Role ${RW_DIR_DATA}=Role
az ad app permission grant --id "${AZURE_CLIENT_ID}" --api ${AAD_GRAPH_API} --expires never > /dev/null
az ad app permission admin-consent --id "${AZURE_CLIENT_ID}"
```
### Create a Provider Secret
```
kubectl create secret generic azure-creds -n crossplane-system --from-file=key=./creds.json
```
### Configure the Provider
Create the following `provider.yaml`:
```
apiVersion: azure.crossplane.io/v1alpha3
kind: Provider
metadata:
name: azure-provider
spec:
credentialsSecretRef:
namespace: crossplane-system
name: azure-creds
key: key
```
Then apply it:
```
kubectl apply -f provider.yaml
```
</div>
<div class="tab-pane fade" id="provider-tab-alibaba" markdown="1">
### Install Alibaba Provider
```
PACKAGE=crossplane/provider-alibaba:v0.1.0
NAME=provider-alibaba
kubectl crossplane package install --cluster --namespace crossplane-system ${PACKAGE} ${NAME}
```
### Create a Provider Secret
```
kubectl create secret generic alibaba-creds --from-literal=accessKeyId=<your-key> --from-literal=accessKeySecret=<your-secret> -n crossplane-system
```
### Configure the Provider
Create the following `provider.yaml`:
```
apiVersion: alibaba.crossplane.io/v1alpha1
kind: Provider
metadata:
name: alibaba-provider
spec:
credentialsSecretRef:
namespace: crossplane-system
name: alibaba-creds
key: credentials
region: cn-beijing
```
Then apply it:
```
kubectl apply -f provider.yaml
```
</div>
</div>
## Next Steps
Now that you have a provider configured, you can [provision infrastructure].
## More Info
See [Install] and [Configure] docs for installing alternate versions and more
detailed instructions.
## Uninstall Provider
```
kubectl delete -f provider.yaml
kubectl delete secret -n crossplane-system --all
```
## Uninstall Crossplane
```
helm delete crossplane --namespace crossplane-system
```
<!-- Named Links -->
[provision infrastructure]: provision-infrastructure.md
[Install]: ../reference/install.md
[Configure]: ../reference/configure.md
[Kubernetes cluster]: https://kubernetes.io/docs/setup/
[Minikube]: https://kubernetes.io/docs/tasks/tools/install-minikube/
[Helm]: https://docs.helm.sh/using_helm/
[Kind]: https://kind.sigs.k8s.io/docs/user/quick-start/
[Crossplane CLI]: https://github.com/crossplane/crossplane-cli
---
title: Provision Infrastructure
toc: true
weight: 3
indent: true
---
# Provision Infrastructure
Crossplane allows you to provision infrastructure anywhere using the Kubernetes
API. Once you have [installed a provider] and [configured your credentials], you
can create any infrastructure resource currently supported by the provider.
Let's start by provisioning a database on your provider of choice.
Each provider below offers its own flavor of a managed database. When the
provider is installed into your Crossplane cluster, it installs a cluster-scoped
CRD that represents the managed service offering, as well as controllers that
know how to create, update, and delete instances of the service on the cloud
provider.
<ul class="nav nav-tabs">
<li class="active"><a href="#aws-tab-1" data-toggle="tab">AWS</a></li>
<li><a href="#gcp-tab-1" data-toggle="tab">GCP</a></li>
<li><a href="#azure-tab-1" data-toggle="tab">Azure</a></li>
<li><a href="#alibaba-tab-1" data-toggle="tab">Alibaba</a></li>
</ul>
<br>
<div class="tab-content">
<div class="tab-pane fade in active" id="aws-tab-1" markdown="1">
The AWS provider supports provisioning an [RDS] instance with the `RDSInstance`
CRD it installs into your cluster.
```yaml
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
name: rdspostgresql
spec:
forProvider:
dbInstanceClass: db.t2.small
masterUsername: masteruser
allocatedStorage: 20
engine: postgres
engineVersion: "9.6"
skipFinalSnapshotBeforeDeletion: true
writeConnectionSecretToRef:
namespace: crossplane-system
name: aws-rdspostgresql-conn
providerRef:
name: aws-provider
reclaimPolicy: Delete
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/provision/aws.yaml
```
Creating the above instance will cause Crossplane to provision an RDS instance
on AWS. You can view the progress with the following command:
```console
kubectl get rdsinstances.database.aws.crossplane.io rdspostgresql
```
When provisioning is complete, you should see `READY: True` in the output. You
can then delete the `RDSInstance`:
```console
kubectl delete rdsinstances.database.aws.crossplane.io rdspostgresql
```
</div>
<div class="tab-pane fade" id="gcp-tab-1" markdown="1">
The GCP provider supports provisioning a [CloudSQL] instance with the
`CloudSQLInstance` CRD it installs into your cluster.
```yaml
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
name: cloudsqlpostgresql
spec:
forProvider:
databaseVersion: POSTGRES_9_6
region: us-central1
settings:
tier: db-custom-1-3840
dataDiskType: PD_SSD
dataDiskSizeGb: 10
writeConnectionSecretToRef:
namespace: crossplane-system
name: cloudsqlpostgresql-conn
providerRef:
name: gcp-provider
reclaimPolicy: Delete
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/provision/gcp.yaml
```
Creating the above instance will cause Crossplane to provision a CloudSQL
instance on GCP. You can view the progress with the following command:
```console
kubectl get cloudsqlinstances.database.gcp.crossplane.io cloudsqlpostgresql
```
When provisioning is complete, you should see `READY: True` in the output. You
can then delete the `CloudSQLInstance`:
```console
kubectl delete cloudsqlinstances.database.gcp.crossplane.io cloudsqlpostgresql
```
</div>
<div class="tab-pane fade" id="azure-tab-1" markdown="1">
The Azure provider supports provisioning an [Azure Database for PostgreSQL]
instance with the `PostgreSQLServer` CRD it installs into your cluster.
> Note: provisioning an Azure Database for PostgreSQL requires the presence of a
> [Resource Group] in your Azure account. We go ahead and provision a new
> `ResourceGroup` here in case you do not already have a suitable one in your
> account.
```yaml
apiVersion: azure.crossplane.io/v1alpha3
kind: ResourceGroup
metadata:
name: sqlserverpostgresql-rg
spec:
location: West US 2
reclaimPolicy: Delete
providerRef:
name: azure-provider
---
apiVersion: database.azure.crossplane.io/v1beta1
kind: PostgreSQLServer
metadata:
name: sqlserverpostgresql
spec:
forProvider:
administratorLogin: myadmin
resourceGroupNameRef:
name: sqlserverpostgresql-rg
location: West US 2
sslEnforcement: Disabled
version: "9.6"
sku:
tier: GeneralPurpose
capacity: 2
family: Gen5
storageProfile:
storageMB: 20480
writeConnectionSecretToRef:
namespace: crossplane-system
name: sqlserverpostgresql-conn
providerRef:
name: azure-provider
reclaimPolicy: Delete
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/provision/azure.yaml
```
Creating the above instance will cause Crossplane to provision a PostgreSQL
database instance on Azure. You can view the progress with the following
command:
```console
kubectl get postgresqlservers.database.azure.crossplane.io sqlserverpostgresql
```
When provisioning is complete, you should see `READY: True` in the output. You
can then delete the `PostgreSQLServer`:
```console
kubectl delete postgresqlservers.database.azure.crossplane.io sqlserverpostgresql
kubectl delete resourcegroup.azure.crossplane.io sqlserverpostgresql-rg
```
</div>
<div class="tab-pane fade" id="alibaba-tab-1" markdown="1">
The Alibaba provider supports provisioning an [ApsaraDB for RDS] instance with
the `RDSInstance` CRD it installs into your cluster.
```yaml
apiVersion: database.alibaba.crossplane.io/v1alpha1
kind: RDSInstance
metadata:
name: rdspostgresql
spec:
forProvider:
engine: PostgreSQL
engineVersion: "9.4"
dbInstanceClass: rds.pg.s1.small
dbInstanceStorageInGB: 20
securityIPList: "0.0.0.0/0"
masterUsername: "test123"
writeConnectionSecretToRef:
namespace: crossplane-system
name: alibaba-rdspostgresql-conn
providerRef:
name: alibaba-provider
reclaimPolicy: Delete
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/provision/alibaba.yaml
```
Creating the above instance will cause Crossplane to provision an RDS instance
on Alibaba. You can view the progress with the following command:
```console
kubectl get rdsinstances.database.alibaba.crossplane.io rdspostgresql
```
When provisioning is complete, you should see `READY: True` in the output. You
can then delete the `RDSInstance`:
```console
kubectl delete rdsinstances.database.alibaba.crossplane.io rdspostgresql
```
</div>
</div>
## Next Steps
Now that you have seen how to provision individual infrastructure resources,
let's take a look at how we can compose infrastructure resources together and
publish them as a single unit to be consumed in the [next section].
<!-- Named Links -->
[installed a provider]: install-configure.md
[configured your credentials]: install-configure.md
[RDS]: https://aws.amazon.com/rds/
[CloudSQL]: https://cloud.google.com/sql
[Azure Database for PostgreSQL]: https://azure.microsoft.com/en-us/services/postgresql/
[Resource Group]: https://docs.microsoft.com/en-us/azure/azure-resource-manager/management/manage-resource-groups-portal#what-is-a-resource-group
[ApsaraDB for RDS]: https://www.alibabacloud.com/product/apsaradb-for-rds-postgresql
[next section]: publish-infrastructure.md

---
title: Publish Infrastructure
toc: true
weight: 4
indent: true
---
# Publish Infrastructure
Provisioning infrastructure using the Kubernetes API is a powerful capability,
but combining primitive infrastructure resources into a single unit and
publishing them to be provisioned by developers and consumed by applications
greatly enhances this functionality.
As mentioned in the [last section], CRDs that represent infrastructure resources
on a provider are installed at the *cluster scope*. However, applications are
typically provisioned at the *namespace scope* using Kubernetes primitives such
as `Pod` or `Deployment`. To make infrastructure resources available to be
provisioned at the namespace scope, they can be *published*. This consists of
creating the following resources:
- `InfrastructureDefinition`: defines a new kind of composite resource
- `InfrastructurePublication`: makes an `InfrastructureDefinition` available at
the namespace scope
- `Composition`: defines one or more resources and their configuration
In addition to making provisioning available at the namespace scope,
[composition] also allows for multiple types of managed resources to satisfy the
same namespace-scoped resource. In the examples below, we will define and
publish a new `PostgreSQLInstance` resource that only takes a single `storageGB`
parameter, and specifies that it will create a connection `Secret` with keys for
`username`, `password`, and `endpoint`. We will then create a `Composition` for
each provider that can satisfy and be parameterized by a `PostgreSQLInstance`.
Let's get started!
> Note: Crossplane must be granted RBAC permissions to manage new
> infrastructure types that we define. This is covered in greater detail in the
> [composition] section, but you can run the following command to grant all
> necessary RBAC permissions for this quick start guide: `kubectl apply -f
> https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/clusterrole.yaml`
## Create InfrastructureDefinition
The next step is defining an `InfrastructureDefinition` that declares the schema
for a `PostgreSQLInstance`. You will notice that this looks very similar to a CRD,
and after the `InfrastructureDefinition` is created, we will in fact have a
`PostgreSQLInstance` CRD present in our cluster.
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: InfrastructureDefinition
metadata:
name: postgresqlinstances.database.example.org
spec:
connectionSecretKeys:
- username
- password
- endpoint
- port
crdSpecTemplate:
group: database.example.org
version: v1alpha1
names:
kind: PostgreSQLInstance
listKind: PostgreSQLInstanceList
plural: postgresqlinstances
singular: postgresqlinstance
validation:
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
parameters:
type: object
properties:
storageGB:
type: integer
required:
- storageGB
required:
- parameters
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/definition.yaml
```
You are now able to create instances of kind `PostgreSQLInstance` at the cluster
scope.
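As a quick sanity check, you can confirm that the cluster-scoped CRD was created; the CRD name below matches the `metadata.name` of the `InfrastructureDefinition` above:

```console
kubectl get crd postgresqlinstances.database.example.org
```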
## Create InfrastructurePublication
The `InfrastructureDefinition` will make it possible to create
`PostgreSQLInstance` objects in our Kubernetes cluster at the cluster scope, but
we want to make them available at the namespace scope as well. This is done by
defining an `InfrastructurePublication` that references the new
`PostgreSQLInstance` type.
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: InfrastructurePublication
metadata:
name: postgresqlinstances.database.example.org
spec:
infrastructureDefinitionRef:
name: postgresqlinstances.database.example.org
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/publication.yaml
```
This will create a new CRD named `PostgreSQLInstanceRequirement`, which is the
namespace-scoped variant of the `PostgreSQLInstance` CRD. You are now able to
create instances of kind `PostgreSQLInstanceRequirement` at the namespace scope.
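You can verify that the namespace-scoped variant now exists as well; note that the plural CRD name below is an assumption based on standard Kubernetes naming conventions for the `PostgreSQLInstanceRequirement` kind:

```console
kubectl get crd postgresqlinstancerequirements.database.example.org
```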
## Create Compositions
Now it is time to define the resources that represent the primitive
infrastructure units that actually get provisioned. For each provider we will
define a `Composition` that satisfies the requirements of the
`PostgreSQLInstance` `InfrastructureDefinition`. In this case, each will result
in the provisioning of a public PostgreSQL instance on the provider.
<ul class="nav nav-tabs">
<li class="active"><a href="#aws-tab-1" data-toggle="tab">AWS (Default VPC)</a></li>
<li><a href="#aws-new-tab-1" data-toggle="tab">AWS (New VPC)</a></li>
<li><a href="#gcp-tab-1" data-toggle="tab">GCP</a></li>
<li><a href="#azure-tab-1" data-toggle="tab">Azure</a></li>
<li><a href="#alibaba-tab-1" data-toggle="tab">Alibaba</a></li>
</ul>
<br>
<div class="tab-content">
<div class="tab-pane fade in active" id="aws-tab-1" markdown="1">
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: Composition
metadata:
name: postgresqlinstances.aws.database.example.org
labels:
provider: aws
guide: quickstart
vpc: default
spec:
writeConnectionSecretsToNamespace: crossplane-system
reclaimPolicy: Delete
from:
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
to:
- base:
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
spec:
forProvider:
dbInstanceClass: db.t2.small
masterUsername: masteruser
engine: postgres
engineVersion: "9.6"
skipFinalSnapshotBeforeDeletion: true
publiclyAccessible: true
writeConnectionSecretToRef:
namespace: crossplane-system
providerRef:
name: aws-provider
reclaimPolicy: Delete
patches:
- fromFieldPath: "metadata.uid"
toFieldPath: "spec.writeConnectionSecretToRef.name"
transforms:
- type: string
string:
fmt: "%s-postgresql"
- fromFieldPath: "spec.parameters.storageGB"
toFieldPath: "spec.forProvider.allocatedStorage"
connectionDetails:
- fromConnectionSecretKey: username
- fromConnectionSecretKey: password
- fromConnectionSecretKey: endpoint
- fromConnectionSecretKey: port
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/composition-aws.yaml
```
> Note that this Composition will create an RDS instance using your default VPC,
> which may or may not allow connections from the internet depending on how it
> is configured. Select the AWS (New VPC) Composition if you wish to create an
> RDS instance that will allow traffic from the internet.
</div>
<div class="tab-pane fade" id="aws-new-tab-1" markdown="1">
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: Composition
metadata:
name: vpcpostgresqlinstances.aws.database.example.org
labels:
provider: aws
guide: quickstart
vpc: new
spec:
writeConnectionSecretsToNamespace: crossplane-system
reclaimPolicy: Delete
from:
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
to:
- base:
apiVersion: network.aws.crossplane.io/v1alpha3
kind: VPC
spec:
cidrBlock: 192.168.0.0/16
enableDnsSupport: true
enableDnsHostNames: true
providerRef:
name: aws-provider
reclaimPolicy: Delete
- base:
apiVersion: network.aws.crossplane.io/v1alpha3
kind: Subnet
metadata:
labels:
zone: us-west-2a
spec:
cidrBlock: 192.168.64.0/18
vpcIdSelector:
matchControllerRef: true
availabilityZone: us-west-2a
providerRef:
name: aws-provider
reclaimPolicy: Delete
- base:
apiVersion: network.aws.crossplane.io/v1alpha3
kind: Subnet
metadata:
labels:
zone: us-west-2b
spec:
cidrBlock: 192.168.128.0/18
vpcIdSelector:
matchControllerRef: true
availabilityZone: us-west-2b
providerRef:
name: aws-provider
reclaimPolicy: Delete
- base:
apiVersion: network.aws.crossplane.io/v1alpha3
kind: Subnet
metadata:
labels:
zone: us-west-2c
spec:
cidrBlock: 192.168.192.0/18
vpcIdSelector:
matchControllerRef: true
availabilityZone: us-west-2c
providerRef:
name: aws-provider
reclaimPolicy: Delete
- base:
apiVersion: database.aws.crossplane.io/v1beta1
kind: DBSubnetGroup
spec:
forProvider:
description: An excellent formation of subnetworks.
subnetIdSelector:
matchControllerRef: true
providerRef:
name: aws-provider
reclaimPolicy: Delete
- base:
apiVersion: network.aws.crossplane.io/v1alpha3
kind: InternetGateway
spec:
vpcIdSelector:
matchControllerRef: true
providerRef:
name: aws-provider
reclaimPolicy: Delete
- base:
apiVersion: network.aws.crossplane.io/v1alpha3
kind: RouteTable
spec:
vpcIdSelector:
matchControllerRef: true
routes:
- destinationCidrBlock: 0.0.0.0/0
gatewayIdSelector:
matchControllerRef: true
associations:
- subnetIdSelector:
matchLabels:
zone: us-west-2a
- subnetIdSelector:
matchLabels:
zone: us-west-2b
- subnetIdSelector:
matchLabels:
zone: us-west-2c
providerRef:
name: aws-provider
reclaimPolicy: Delete
- base:
apiVersion: network.aws.crossplane.io/v1alpha3
kind: SecurityGroup
spec:
vpcIdSelector:
matchControllerRef: true
groupName: crossplane-getting-started
description: Allow access to PostgreSQL
ingress:
- fromPort: 5432
toPort: 5432
protocol: tcp
cidrBlocks:
- cidrIp: 0.0.0.0/0
description: Everywhere
providerRef:
name: aws-provider
reclaimPolicy: Delete
- base:
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
spec:
forProvider:
dbSubnetGroupNameSelector:
matchControllerRef: true
vpcSecurityGroupIDSelector:
matchControllerRef: true
dbInstanceClass: db.t2.small
masterUsername: masteruser
engine: postgres
engineVersion: "9.6"
skipFinalSnapshotBeforeDeletion: true
publiclyAccessible: true
writeConnectionSecretToRef:
namespace: crossplane-system
providerRef:
name: aws-provider
reclaimPolicy: Delete
patches:
- fromFieldPath: "metadata.uid"
toFieldPath: "spec.writeConnectionSecretToRef.name"
transforms:
- type: string
string:
fmt: "%s-postgresql"
- fromFieldPath: "spec.parameters.storageGB"
toFieldPath: "spec.forProvider.allocatedStorage"
connectionDetails:
- fromConnectionSecretKey: username
- fromConnectionSecretKey: password
- fromConnectionSecretKey: endpoint
- fromConnectionSecretKey: port
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/composition-aws-with-vpc.yaml
```
</div>
<div class="tab-pane fade" id="gcp-tab-1" markdown="1">
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: Composition
metadata:
name: postgresqlinstances.gcp.database.example.org
labels:
provider: gcp
guide: quickstart
spec:
writeConnectionSecretsToNamespace: crossplane-system
reclaimPolicy: Delete
from:
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
to:
- base:
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
spec:
forProvider:
databaseVersion: POSTGRES_9_6
region: us-central1
settings:
tier: db-custom-1-3840
dataDiskType: PD_SSD
ipConfiguration:
ipv4Enabled: true
authorizedNetworks:
- value: "0.0.0.0/0"
writeConnectionSecretToRef:
namespace: crossplane-system
providerRef:
name: gcp-provider
reclaimPolicy: Delete
patches:
- fromFieldPath: "metadata.uid"
toFieldPath: "spec.writeConnectionSecretToRef.name"
transforms:
- type: string
string:
fmt: "%s-postgresql"
- fromFieldPath: "spec.parameters.storageGB"
toFieldPath: "spec.forProvider.settings.dataDiskSizeGb"
connectionDetails:
- fromConnectionSecretKey: username
- fromConnectionSecretKey: password
- fromConnectionSecretKey: endpoint
- name: port
value: "5432"
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/composition-gcp.yaml
```
</div>
<div class="tab-pane fade" id="azure-tab-1" markdown="1">
> Note: the `Composition` for Azure also includes a `ResourceGroup` and
> `PostgreSQLServerFirewallRule` that are required to provision a publicly
> available PostgreSQL instance on Azure. Composition enables scenarios such as
> this, as well as far more complex ones. See the [composition] documentation
> for more information.
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: Composition
metadata:
name: postgresqlinstances.azure.database.example.org
labels:
provider: azure
guide: quickstart
spec:
writeConnectionSecretsToNamespace: crossplane-system
reclaimPolicy: Delete
from:
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
to:
- base:
apiVersion: azure.crossplane.io/v1alpha3
kind: ResourceGroup
spec:
location: West US 2
reclaimPolicy: Delete
providerRef:
name: azure-provider
- base:
apiVersion: database.azure.crossplane.io/v1beta1
kind: PostgreSQLServer
spec:
forProvider:
administratorLogin: myadmin
resourceGroupNameSelector:
matchControllerRef: true
location: West US 2
sslEnforcement: Disabled
version: "9.6"
sku:
tier: GeneralPurpose
capacity: 2
family: Gen5
writeConnectionSecretToRef:
namespace: crossplane-system
providerRef:
name: azure-provider
reclaimPolicy: Delete
patches:
- fromFieldPath: "metadata.uid"
toFieldPath: "spec.writeConnectionSecretToRef.name"
transforms:
- type: string
string:
fmt: "%s-postgresql"
- fromFieldPath: "spec.parameters.storageGB"
toFieldPath: "spec.forProvider.storageProfile.storageMB"
transforms:
- type: math
math:
multiply: 1024
connectionDetails:
- fromConnectionSecretKey: username
- fromConnectionSecretKey: password
- fromConnectionSecretKey: endpoint
- name: port
value: "5432"
- base:
apiVersion: database.azure.crossplane.io/v1alpha3
kind: PostgreSQLServerFirewallRule
spec:
forProvider:
serverNameSelector:
matchControllerRef: true
resourceGroupNameSelector:
matchControllerRef: true
properties:
startIpAddress: 0.0.0.0
endIpAddress: 255.255.255.254
reclaimPolicy: Delete
providerRef:
name: azure-provider
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/composition-azure.yaml
```
</div>
<div class="tab-pane fade" id="alibaba-tab-1" markdown="1">
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: Composition
metadata:
name: postgresqlinstances.alibaba.database.example.org
labels:
provider: alibaba
guide: quickstart
spec:
writeConnectionSecretsToNamespace: crossplane-system
reclaimPolicy: Delete
from:
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
to:
- base:
apiVersion: database.alibaba.crossplane.io/v1alpha1
kind: RDSInstance
spec:
forProvider:
engine: PostgreSQL
engineVersion: "9.4"
dbInstanceClass: rds.pg.s1.small
securityIPList: "0.0.0.0/0"
masterUsername: "myuser"
writeConnectionSecretToRef:
namespace: crossplane-system
providerRef:
name: alibaba-provider
reclaimPolicy: Delete
patches:
- fromFieldPath: "metadata.uid"
toFieldPath: "spec.writeConnectionSecretToRef.name"
transforms:
- type: string
string:
fmt: "%s-postgresql"
- fromFieldPath: "spec.parameters.storageGB"
toFieldPath: "spec.forProvider.dbInstanceStorageInGB"
connectionDetails:
- fromConnectionSecretKey: username
- fromConnectionSecretKey: password
- fromConnectionSecretKey: endpoint
- fromConnectionSecretKey: port
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/composition-alibaba.yaml
```
</div>
</div>
## Create Requirement
Now that we have defined our new type of infrastructure (`PostgreSQLInstance`)
and created at least one composition that can satisfy it, we can create a
`PostgreSQLInstanceRequirement` in the namespace of our choosing. Each
`Composition` we defined includes a `provider: <name-of-provider>` label. In the
`PostgreSQLInstanceRequirement` below we can use the `compositionSelector` to
match our `Composition` of choice.
<ul class="nav nav-tabs">
<li class="active"><a href="#aws-tab-2" data-toggle="tab">AWS (Default VPC)</a></li>
<li><a href="#aws-tab-new-2" data-toggle="tab">AWS (New VPC)</a></li>
<li><a href="#gcp-tab-2" data-toggle="tab">GCP</a></li>
<li><a href="#azure-tab-2" data-toggle="tab">Azure</a></li>
<li><a href="#alibaba-tab-2" data-toggle="tab">Alibaba</a></li>
</ul>
<br>
<div class="tab-content">
<div class="tab-pane fade in active" id="aws-tab-2" markdown="1">
```yaml
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstanceRequirement
metadata:
name: my-db
namespace: default
spec:
parameters:
storageGB: 20
compositionSelector:
matchLabels:
provider: aws
vpc: default
writeConnectionSecretToRef:
name: db-conn
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/requirement-aws.yaml
```
</div>
<div class="tab-pane fade" id="aws-tab-new-2" markdown="1">
```yaml
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstanceRequirement
metadata:
name: my-db
namespace: default
spec:
parameters:
storageGB: 20
compositionSelector:
matchLabels:
provider: aws
vpc: new
writeConnectionSecretToRef:
name: db-conn
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/requirement-aws.yaml
```
</div>
<div class="tab-pane fade" id="gcp-tab-2" markdown="1">
```yaml
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstanceRequirement
metadata:
name: my-db
namespace: default
spec:
parameters:
storageGB: 20
compositionSelector:
matchLabels:
provider: gcp
writeConnectionSecretToRef:
name: db-conn
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/requirement-gcp.yaml
```
</div>
<div class="tab-pane fade" id="azure-tab-2" markdown="1">
```yaml
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstanceRequirement
metadata:
name: my-db
namespace: default
spec:
parameters:
storageGB: 20
compositionSelector:
matchLabels:
provider: azure
writeConnectionSecretToRef:
name: db-conn
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/requirement-azure.yaml
```
</div>
<div class="tab-pane fade" id="alibaba-tab-2" markdown="1">
```yaml
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstanceRequirement
metadata:
name: my-db
namespace: default
spec:
parameters:
storageGB: 20
compositionSelector:
matchLabels:
provider: alibaba
writeConnectionSecretToRef:
name: db-conn
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/requirement-alibaba.yaml
```
</div>
</div>
After creating the `PostgreSQLInstanceRequirement`, Crossplane will provision a
database instance on your provider of choice. Once provisioning is complete, you
should see `READY: True` in the output when you run:
```console
kubectl get postgresqlinstancerequirement.database.example.org my-db
```
> Note: while waiting for the `PostgreSQLInstanceRequirement` to become ready, you
> may want to look at other resources in your cluster. The following commands
> will allow you to view groups of Crossplane resources:
>
> - `kubectl get managed`: get all resources that represent a unit of external
> infrastructure
> - `kubectl get <name-of-provider>`: get all resources related to `<provider>`
> - `kubectl get crossplane`: get all resources related to Crossplane
You should also see a `Secret` in the `default` namespace named `db-conn` that
contains fields for `username`, `password`, and `endpoint`:
```console
kubectl get secrets db-conn
```
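To inspect an individual connection detail, you can decode a single key from the secret. This is a sketch using standard kubectl JSONPath output; the `endpoint` key matches those declared in the `InfrastructureDefinition` earlier:

```console
kubectl get secret db-conn -o jsonpath='{.data.endpoint}' | base64 --decode
```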
## Consume Infrastructure
Because connection secrets are written as a Kubernetes `Secret` they can easily
be consumed by Kubernetes primitives. The most basic building block in
Kubernetes is the `Pod`. Let's define a `Pod` that will show that we are able to
connect to our newly provisioned database.
```yaml
apiVersion: v1
kind: Pod
metadata:
name: see-db
namespace: default
spec:
containers:
- name: see-db
image: postgres:9.6
command: ['psql']
args: ['-c', 'SELECT current_database();']
env:
- name: PGDATABASE
value: postgres
- name: PGHOST
valueFrom:
secretKeyRef:
name: db-conn
key: endpoint
- name: PGUSER
valueFrom:
secretKeyRef:
name: db-conn
key: username
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: db-conn
key: password
- name: PGPORT
valueFrom:
secretKeyRef:
name: db-conn
key: port
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/publish/pod.yaml
```
This `Pod` connects to the PostgreSQL database and prints its name. After
creating it, run `kubectl logs see-db` and you should see output similar to the
following:
```SQL
current_database
------------------
postgres
(1 row)
```
## Clean Up
To clean up the infrastructure that was provisioned, you can delete the
`PostgreSQLInstanceRequirement`:
```console
kubectl delete postgresqlinstancerequirement.database.example.org my-db
```
To clean up the `Pod`, run:
```console
kubectl delete pod see-db
```
> Don't clean up your `InfrastructureDefinition`, `InfrastructurePublication`,
> or `Composition` just yet if you plan to continue on to the next section of
> the guide! We'll use them again when we deploy an OAM application.
## Next Steps
Now you have seen how to provision and publish more complex infrastructure
setups. In the [next section] you will learn how to consume infrastructure
alongside your [OAM] application manifests.
<!-- Named Links -->
[last section]: provision-infrastructure.md
[composition]: ../introduction/composition.md
[next section]: run-applications.md
[OAM]: https://oam.dev/

---
title: Run Applications
toc: true
weight: 5
indent: true
---
# Run Applications
Crossplane strives to be the best Kubernetes add-on to provision and manage the
infrastructure and services your applications need directly from kubectl. A huge
part of this mission is arriving at an elegant, flexible way to model and manage
cloud native applications. Crossplane allows your team to deploy and run
applications using the [Open Application Model] (OAM).
OAM is a team-centric model for cloud native apps. Like Crossplane, OAM focuses
on the different people who might be involved in the deployment of a cloud
native application. In this getting started guide:
* _Infrastructure Operators_ provide the infrastructure applications need.
* _Application Developers_ build and supply the components of an application.
* _Application Operators_ compose, deploy, and run application configurations.
We'll play the roles of each of these team members as we deploy an application -
Service Tracker - that consists of several services. One of these services, the
`data-api`, is backed by a managed PostgreSQL database that is provisioned
on-demand by Crossplane.
![Service Tracker Diagram]
> This guide follows on from the previous one in which we covered defining,
> composing, and [publishing infrastructure]. You'll need to have defined and
> published a PostgreSQLInstance with at least one working Composition in order
> to create the OAM application we'll use in this guide.
## Infrastructure Operator
### Install workloads and traits
As the infrastructure operator our work is almost done - we defined, published,
and composed the infrastructure that our application developer and operator
teammates will use in the previous guide. One task remains, which is to define
the [_workloads_] and [_traits_] that our platform supports.
OAM applications consist of workloads, each of which may be modified by traits.
The infrastructure operator may choose which workloads and traits are supported
by creating or deleting `WorkloadDefinitions` and `TraitDefinitions` like below:
```yaml
---
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
name: containerizedworkloads.core.oam.dev
spec:
definitionRef:
name: containerizedworkloads.core.oam.dev
```
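A `TraitDefinition` follows the same pattern. As a sketch, the definition below registers the core OAM `ManualScalerTrait`; it is included here for illustration, and the snippet applied in the next command covers everything this guide actually needs:

```yaml
---
apiVersion: core.oam.dev/v1alpha2
kind: TraitDefinition
metadata:
  name: manualscalertraits.core.oam.dev
spec:
  definitionRef:
    name: manualscalertraits.core.oam.dev
```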
Run the following command to add support for all the workloads and traits required
by this guide:
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/run/definitions.yaml
```
Now that we've defined our workloads and traits, we must install Crossplane's
OAM addon. This addon packages the controllers that reconcile core OAM workloads
and traits.
<ul class="nav nav-tabs">
<li class="active"><a href="#install-tab-helm3" data-toggle="tab">Helm 3</a></li>
<li><a href="#install-tab-helm2" data-toggle="tab">Helm 2</a></li>
</ul>
<br>
<div class="tab-content">
<div class="tab-pane fade in active" id="install-tab-helm3" markdown="1">
Use Helm 3 to install the latest official `alpha` release of Crossplane OAM
Addon, suitable for community use and testing:
```console
kubectl create namespace crossplane-system
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install addon-oam-kubernetes-local --namespace crossplane-system crossplane-alpha/oam-core-resources
```
> Note that the OAM Addon requires at least Kubernetes 1.16.
</div>
<div class="tab-pane fade" id="install-tab-helm2" markdown="1">
Use Helm 2 to install the latest official `alpha` release of the Crossplane OAM
Addon, suitable for community use and testing:
```console
kubectl create namespace crossplane-system
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install --name addon-oam-kubernetes-local --namespace crossplane-system crossplane-alpha/oam-core-resources
```
> Note that the OAM Addon requires at least Kubernetes 1.16.
</div>
</div>
## Application Developer
### Publish Application Components
Now we'll play the role of the application developer. Our Service Tracker
application consists of a UI service, four API services, and a PostgreSQL
database. Under the Open Application Model application developers define
[_components_] that application operators may compose into applications, which
produce workloads. Creating components allows us as application developers to
communicate any fundamental, suggested, or optional properties of our services
and their infrastructure requirements.
```yaml
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
name: data-api-database
spec:
workload:
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstanceRequirement
metadata:
name: app-postgresql
spec:
parameters:
storageGB: 20
compositionSelector:
matchLabels:
guide: quickstart
parameters:
- name: database-secret
description: Secret to which to write PostgreSQL database connection details.
required: true
fieldPaths:
- spec.writeConnectionSecretToRef.name
- name: database-provider
description: |
Cloud provider that should be used to create a PostgreSQL database.
Either alibaba, aws, azure, or gcp.
fieldPaths:
- spec.compositionSelector.matchLabels.provider
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
name: data-api
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
metadata:
name: data-api
spec:
containers:
- name: data-api
image: artursouza/rudr-data-api:0.50
env:
- name: DATABASE_USER
fromSecret:
key: username
- name: DATABASE_PASSWORD
fromSecret:
key: password
- name: DATABASE_HOSTNAME
fromSecret:
key: endpoint
- name: DATABASE_PORT
fromSecret:
key: port
- name: DATABASE_NAME
value: postgres
- name: DATABASE_DRIVER
value: postgresql
ports:
- name: http
containerPort: 3009
protocol: TCP
livenessProbe:
exec:
command: [wget, -q, http://127.0.0.1:3009/status, -O, /dev/null, -S]
parameters:
- name: database-secret
description: Secret from which to read PostgreSQL connection details.
required: true
fieldPaths:
- spec.containers[0].env[0].fromSecret.name
- spec.containers[0].env[1].fromSecret.name
- spec.containers[0].env[2].fromSecret.name
- spec.containers[0].env[3].fromSecret.name
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
name: flights-api
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
metadata:
name: flights-api
spec:
containers:
- name: flights-api
image: sonofjorel/rudr-flights-api:0.49
env:
- name: DATA_SERVICE_URI
ports:
- name: http
containerPort: 3003
protocol: TCP
parameters:
- name: data-uri
description: URI at which the data service is serving
required: true
fieldPaths:
- spec.containers[0].env[0].value
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
name: quakes-api
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
metadata:
name: quakes-api
spec:
containers:
- name: quakes-api
image: sonofjorel/rudr-quakes-api:0.49
env:
- name: DATA_SERVICE_URI
ports:
- name: http
containerPort: 3012
protocol: TCP
parameters:
- name: data-uri
description: URI at which the data service is serving
required: true
fieldPaths:
- spec.containers[0].env[0].value
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
name: service-tracker-ui
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
metadata:
name: web-ui
spec:
containers:
- name: service-tracker-ui
image: sonofjorel/rudr-web-ui:0.49
env:
- name: FLIGHT_API_ROOT
- name: WEATHER_API_ROOT
- name: QUAKES_API_ROOT
ports:
- name: http
containerPort: 8080
protocol: TCP
parameters:
- name: flights-uri
description: URI at which the flights service is serving
required: true
fieldPaths:
- spec.containers[0].env[0].value
- name: weather-uri
description: URI at which the weather service is serving
required: true
fieldPaths:
- spec.containers[0].env[1].value
- name: quakes-uri
description: URI at which the quakes service is serving
required: true
fieldPaths:
- spec.containers[0].env[2].value
---
apiVersion: core.oam.dev/v1alpha2
kind: Component
metadata:
name: weather-api
spec:
workload:
apiVersion: core.oam.dev/v1alpha2
kind: ContainerizedWorkload
metadata:
name: weather-api
spec:
containers:
- name: weather-api
image: sonofjorel/rudr-weather-api:0.49
env:
- name: DATA_SERVICE_URI
ports:
- name: http
containerPort: 3015
protocol: TCP
parameters:
- name: data-uri
description: URI at which the data service is serving
required: true
fieldPaths:
- spec.containers[0].env[0].value
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/run/components.yaml
```
Each of the above components describes a particular kind of workload. The
Service Tracker application consists of two kinds of workload:
* A [`ContainerizedWorkload`] is a long-running containerized process.
* A `PostgreSQLInstanceRequirement` is a PostgreSQL instance and database.
All OAM components configure a kind of workload, and any kind of Kubernetes
resource may act as an OAM workload as long as an infrastructure operator has
allowed it to by authoring a `WorkloadDefinition`.
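The actual `WorkloadDefinition` resources for this guide were applied earlier via `definitions.yaml`; as a hedged sketch of the shape such a definition takes (the name below is illustrative), a `WorkloadDefinition` simply references the CRD of the resource that may act as a workload:

```yaml
# Illustrative sketch - the definitions used by this guide are in the
# definitions.yaml applied at the start of this section.
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  # Must match the name of the CRD being allowed as a workload.
  name: postgresqlinstancerequirements.database.example.org
spec:
  definitionRef:
    name: postgresqlinstancerequirements.database.example.org
```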
## Application Operator
### Run The Application
Finally, we'll play the role of an application operator and tie together the
application components and infrastructure that our application developer and
infrastructure operator team-mates have published. In OAM this is done by
authoring an [_application configuration_]:
```yaml
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
name: service-tracker
spec:
components:
- componentName: data-api-database
parameterValues:
- name: database-secret
value: tracker-database-secret
- componentName: data-api
parameterValues:
- name: database-secret
value: tracker-database-secret
- componentName: flights-api
parameterValues:
- name: data-uri
value: "http://data-api.default.svc.cluster.local:3009/"
traits:
- trait:
apiVersion: core.oam.dev/v1alpha2
kind: ManualScalerTrait
metadata:
name: flights-api
spec:
replicaCount: 2
- componentName: quakes-api
parameterValues:
- name: data-uri
value: "http://data-api.default.svc.cluster.local:3009/"
traits:
- trait:
apiVersion: core.oam.dev/v1alpha2
kind: ManualScalerTrait
metadata:
name: quakes-api
spec:
replicaCount: 2
- componentName: weather-api
parameterValues:
- name: data-uri
value: "http://data-api.default.svc.cluster.local:3009/"
traits:
- trait:
apiVersion: core.oam.dev/v1alpha2
kind: ManualScalerTrait
metadata:
name: weather-api
spec:
replicaCount: 2
- componentName: service-tracker-ui
parameterValues:
- name: flights-uri
value: "http://flights-api.default.svc.cluster.local:3003/"
- name: weather-uri
value: "http://weather-api.default.svc.cluster.local:3015/"
- name: quakes-uri
value: "http://quakes-api.default.svc.cluster.local:3012/"
```
```console
kubectl apply -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/run/appconfig.yaml
```
This application configuration names each of the components the application
developer created earlier to produce workloads. The application operator may (or
in some cases _must_) provide parameter values for a component in order to
override or specify certain configuration values. Component parameters represent
configuration settings that the component author - the application developer -
deemed to be of interest to application operators.
```yaml
- componentName: data-api-database
parameterValues:
- name: database-provider
value: azure
```
> If you created Compositions for more than one provider in the previous guide
> you can add the above parameter to the `data-api-database` component to choose
> which cloud provider the Service Tracker's database should run on.
You might notice that some components include a [`ManualScalerTrait`]. Traits
augment the workload produced by a component with additional features, allowing
application operators to make decisions about the configuration of a component
without having to involve the developer. The `ManualScalerTrait` allows an
application operator to specify how many replicas should exist of any scalable
kind of workload.
> Note that the OAM spec also includes the concept of an _application scope_.
> Crossplane does not yet support scopes.
## Use The Application
Finally, we'll open and use the Service Tracker application we just deployed.
<ul class="nav nav-tabs">
<li class="active"><a href="#connect-tab-lb" data-toggle="tab">Load Balancer</a></li>
<li><a href="#connect-tab-forward" data-toggle="tab">Port Forward</a></li>
</ul>
<br>
<div class="tab-content">
<div class="tab-pane fade in active" id="connect-tab-lb" markdown="1">
If you deployed Service Tracker to a managed cluster like AKS, ACK, EKS, or GKE
with support for load balancer Services, you should be able to browse to the IP
of the `web-ui` service on port 8080 - for example <http://10.0.0.1:8080>.
```console
kubectl get svc web-ui -o=jsonpath={.status.loadBalancer.ingress[0].ip}
```
</div>
<div class="tab-pane fade" id="connect-tab-forward" markdown="1">
If you're using a cluster that doesn't support load balancer Services, like Kind
or Minikube, you can use a port forward instead and connect to
<http://localhost:8080>.
```console
kubectl port-forward deployment/web-ui 8080
```
</div>
</div>
You should see the Service Tracker dashboard in your browser. Hit 'Refresh Data'
for each of the services to ensure the Service Tracker web UI can connect to its
various data API services and populate its PostgreSQL database:
![Service Tracker Dashboard]
If everything was successful, you should be able to browse to Flights,
Earthquakes, or Weather to see what's going on in the world today:
![Service Tracker Flights]
## Clean Up
To shut down your application, simply run:
```console
kubectl delete applicationconfiguration service-tracker
```
If you also wish to delete the components, workload definitions, and trait
definitions we created in this guide, run:
```console
kubectl delete -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/run/components.yaml
kubectl delete -f https://raw.githubusercontent.com/crossplane/crossplane/release-0.11/docs/snippets/run/definitions.yaml
```
[Open Application Model]: https://oam.dev/
[publishing infrastructure]: publish-infrastructure.md
[Service Tracker Diagram]: run-applications-diagram.jpg
[_workloads_]: https://github.com/oam-dev/spec/blob/1.0.0-alpha2/3.workload.md
[_traits_]: https://github.com/oam-dev/spec/blob/1.0.0-alpha2/6.traits.md
[_components_]: https://github.com/oam-dev/spec/blob/1.0.0-alpha2/4.component.md
[_application configuration_]: https://github.com/oam-dev/spec/blob/1.0.0-alpha2/7.application_configuration.md
[`ContainerizedWorkload`]: https://github.com/oam-dev/spec/blob/1.0.0-alpha2/core/workloads/containerized_workload/containerized_workload.md
[`ManualScalerTrait`]: https://github.com/oam-dev/spec/blob/1.0.0-alpha2/core/traits/manual_scaler_trait.md
[_application scope_]: https://github.com/oam-dev/spec/blob/1.0.0-alpha2/5.application_scopes.md
[Service Tracker Dashboard]: run-applications-dash.png
[Service Tracker Flights]: run-applications-flights.png

---
title: Argo CD
toc: true
weight: 201
indent: true
redirect_to: https://aws.amazon.com/blogs/opensource/connecting-aws-managed-services-to-your-argo-cd-pipeline-with-open-source-crossplane/
---

---
title: Guides
toc: true
weight: 200
---
# Guides
Because of Crossplane's standardization on the Kubernetes API, it integrates
well with many other projects. Below is a collection of guides and tutorials that
demonstrate how to use Crossplane with a variety of tools and projects often used
with Kubernetes, plus some deep-dive content on Crossplane itself!
- [Argo CD] - use GitOps to provision managed services with Crossplane and Argo CD.
- [Knative] - use managed services provisioned by Crossplane in your Knative services.
- [Okteto] - use managed services in your Okteto development workflow.
- [Open Policy Agent] - set global policy on provisioning cloud resources with Crossplane and OPA.
- [OpenFaaS] - consume managed services for your serverless functions.
- [Provider Internals] - translate provider APIs into managed resource CRDs and explore managed resource API patterns & best practices.
- [Velero] - backup and restore your Crossplane infrastructure resources.
<!-- Named Links -->
[Velero]: https://www.youtube.com/watch?v=eV_2QoMRqGw&list=PL510POnNVaaYFuK-B_SIUrpIonCtLVOzT&index=18&t=183s
[Argo CD]: https://aws.amazon.com/blogs/opensource/connecting-aws-managed-services-to-your-argo-cd-pipeline-with-open-source-crossplane/
[Open Policy Agent]: https://github.com/crossplane/tbs/tree/master/episodes/14
[Knative]: https://github.com/crossplane/tbs/tree/master/episodes/15
[OpenFaaS]: https://github.com/crossplane/tbs/tree/master/episodes/13
[Okteto]: https://github.com/crossplane/tbs/tree/master/episodes/10
[Provider Internals]: https://github.com/crossplane/tbs/tree/master/episodes/7

---
title: Knative
toc: true
weight: 202
indent: true
redirect_to: https://github.com/crossplane/tbs/tree/master/episodes/15
---

---
title: Okteto
toc: true
weight: 203
indent: true
redirect_to: https://github.com/crossplane/tbs/tree/master/episodes/10
---

---
title: Open Policy Agent
toc: true
weight: 204
indent: true
redirect_to: https://github.com/crossplane/tbs/tree/master/episodes/14
---

---
title: OpenFaaS
toc: true
weight: 205
indent: true
redirect_to: https://github.com/crossplane/tbs/tree/master/episodes/13
---

---
title: Provider Internals
toc: true
weight: 206
indent: true
redirect_to: https://github.com/crossplane/tbs/tree/master/episodes/7
---

---
title: Velero
toc: true
weight: 207
indent: true
redirect_to: https://www.youtube.com/watch?v=eV_2QoMRqGw&list=PL510POnNVaaYFuK-B_SIUrpIonCtLVOzT&index=18&t=183s
---

---
title: Publishing Infrastructure
toc: true
weight: 103
indent: true
---
# Publishing Infrastructure
Crossplane allows infrastructure operators to define and compose new kinds of
infrastructure resources then publish them for the application operators they
support to use, all without writing any code.
Infrastructure providers extend Crossplane, enabling it to manage a wide array
of infrastructure resources like Azure SQL servers and AWS ElastiCache clusters.
Infrastructure composition allows infrastructure operators to define, share, and
reuse new kinds of infrastructure resources that are _composed_ of these
infrastructure resources. Infrastructure operators may configure one or more
compositions of any defined resource, and may publish any defined resource to
their application operators, who may then declare that their application
requires that kind of resource.
Composition can be used to build a catalogue of kinds and configuration classes
of infrastructure that fit the needs and opinions of your organisation. As an
infrastructure operator you might define your own `MySQLInstance` resource. This
resource would allow your application operators to configure only the settings
that _your_ organisation needs - perhaps engine version and storage size. All
other settings are deferred to a selectable composition representing a
configuration class like "production" or "staging". Compositions can hide
infrastructure complexity and include policy guardrails so that applications can
easily and safely consume the infrastructure they need, while conforming to your
organisational best-practices.
> Note that composition is an **experimental** feature of Crossplane. Refer to
> [Current Limitations] for information on functionality that is planned but not
> yet implemented.
## Concepts
![Infrastructure Composition Concepts]
A _Composite_ infrastructure resource is composed of other resources. Its
configuration schema is user-defined. The `MySQLInstance` resources in the above
diagram are composite infrastructure resources.
A `Composition` specifies how Crossplane should reconcile a composite
infrastructure resource - i.e. what infrastructure resources it should compose.
For example the Azure `Composition` configures Crossplane to reconcile a
`MySQLInstance` with an Azure `MySQLServer` and a `MySQLServerFirewallRule`.
A _Requirement_ for an infrastructure resource declares that an application
requires a particular kind of composite infrastructure resource, as well as
specifying how to configure that resource. The `MySQLInstanceRequirement`
resources in the above diagram declare that the application pods each require a
`MySQLInstance`.
An `InfrastructureDefinition` defines a new kind of composite infrastructure
resource. The `InfrastructureDefinition` in the above diagram defines the
existence and schema of the `MySQLInstance` composite infrastructure resource.
An `InfrastructurePublication` publishes a defined composite infrastructure
resource, allowing application operators to declare a requirement for it. The
`InfrastructurePublication` in the above diagram allows application operators to
author a `MySQLInstanceRequirement`.
> Note that composite resources and compositions are _cluster scoped_ - they
> exist outside of any Kubernetes namespace. A requirement is a namespaced proxy
> for a composite resource. This enables an RBAC model under which application
> operators may only interact with infrastructure via requirements.
## Defining and Publishing Infrastructure
New kinds of infrastructure resource are typically defined and published by an
infrastructure operator. There are three steps to this process:
1. Define your resource and its schema.
1. Specify one or more possible ways your resource may be composed.
1. Optionally, publish your resource so that applications may require it.
### Define your Infrastructure Resource
Infrastructure resources are defined by an `InfrastructureDefinition`:
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: InfrastructureDefinition
metadata:
# InfrastructureDefinitions follow the constraints of CustomResourceDefinition
# names. They must be named <plural>.<group>, per the plural and group names
# configured by the crdSpecTemplate below.
name: mysqlinstances.example.org
spec:
# Composite infrastructure resources may optionally expose a connection secret
# - a Kubernetes Secret containing all of the details a pod might need to
# connect to the infrastructure resource. Resources that wish to expose a
# connection secret must declare what keys they support. These keys form a
# 'contract' - any composition that intends to be compatible with this
# infrastructure resource must compose resources that supply these connection
# secret keys.
connectionSecretKeys:
- username
- password
- hostname
- port
# A template for the spec of a CustomResourceDefinition. Only the group,
# version, names, validation, and additionalPrinterColumns fields of a CRD
# spec are supported.
crdSpecTemplate:
group: example.org
version: v1alpha1
names:
kind: MySQLInstance
listKind: MySQLInstanceList
plural: mysqlinstances
singular: mysqlinstance
validation:
# This schema defines the configuration fields that the composite resource
# supports. It uses the same structural OpenAPI schema as a Kubernetes CRD
# - for example, this resource supports a spec.parameters.version enum.
# The following fields are reserved for Crossplane's use, and will be
# overwritten if included in this validation schema:
#
#   - spec.resourceRef
# - spec.resourceRefs
# - spec.requirementRef
# - spec.writeConnectionSecretToRef
# - spec.reclaimPolicy
# - status.conditions
openAPIV3Schema:
type: object
properties:
spec:
type: object
properties:
parameters:
type: object
properties:
version:
description: MySQL engine version
type: string
enum: ["5.6", "5.7"]
storageGB:
type: integer
location:
description: Geographic location of this MySQL server.
type: string
required:
- version
- storageGB
- location
required:
- parameters
```
Refer to the Kubernetes documentation on [structural schemas] for full details
on how to configure the `openAPIV3Schema` for your composite resource.
`kubectl describe` can be used to confirm that a new composite infrastructure
resource was successfully defined. Note the `Established` condition and events,
which indicate the process was successful.
```console
$ kubectl describe infrastructuredefinition mysqlinstances.example.org
Name: mysqlinstances.example.org
Namespace:
Labels: <none>
Annotations: API Version: apiextensions.crossplane.io/v1alpha1
Kind: InfrastructureDefinition
Metadata:
Creation Timestamp: 2020-05-15T05:30:44Z
Finalizers:
finalizer.apiextensions.crossplane.io
published.apiextensions.crossplane.io
Generation: 1
Resource Version: 1418120
Self Link: /apis/apiextensions.crossplane.io/v1alpha1/infrastructuredefinitions/mysqlinstances.example.org
UID: f8fedfaf-4dfd-4b8a-8228-6af0f4abd7a0
Spec:
Connection Secret Keys:
username
password
hostname
port
Crd Spec Template:
Group: example.org
Names:
Kind: MySQLInstance
List Kind: MySQLInstanceList
Plural: mysqlinstances
Singular: mysqlinstance
Validation:
openAPIV3Schema:
Properties:
Spec:
Parameters:
Properties:
Location:
Description: Geographic location of this MySQL server.
Type: string
Storage GB:
Type: integer
Version:
Description: MySQL engine version
Enum:
5.6
5.7
Type: string
Required:
version
storageGB
location
Type: object
Required:
parameters
Type: object
Type: object
Version: v1alpha1
Status:
Conditions:
Last Transition Time: 2020-05-15T05:30:45Z
Reason: Successfully reconciled resource
Status: True
Type: Synced
Last Transition Time: 2020-05-15T05:30:45Z
Reason: Created CRD and started controller
Status: True
Type: Established
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ApplyInfrastructureDefinition 4m10s apiextensions/infrastructuredefinition.apiextensions.crossplane.io waiting for CustomResourceDefinition to be established
Normal RenderCustomResourceDefinition 55s (x8 over 4m10s) apiextensions/infrastructuredefinition.apiextensions.crossplane.io Rendered CustomResourceDefinition
Normal ApplyInfrastructureDefinition 55s (x7 over 4m9s) apiextensions/infrastructuredefinition.apiextensions.crossplane.io Applied CustomResourceDefinition and (re)started composite controller
```
### Specify How Your Resource May Be Composed
Once a new kind of infrastructure resource is defined Crossplane must be
instructed how to reconcile that kind of infrastructure resource. This is done
by authoring a `Composition`.
A `Composition`:
* Declares one kind of composite infrastructure resource that it satisfies.
* Specifies a "base" configuration for one or more infrastructure resources.
* Specifies "patches" that overlay configuration values from an instance of the
composite infrastructure resource onto each "base".
Multiple compositions may satisfy a particular kind of composite infrastructure
resource, and the author of a composite resource (or resource requirement) may
select which composition will be used. This allows an infrastructure operator to
offer their application operators a choice between multiple opinionated classes
of infrastructure, allowing them to explicitly specify only some configuration.
An infrastructure operator may offer their application operators the choice
between an "Azure" and a "GCP" composition when creating a `MySQLInstance`, for
example, or they may offer a choice between a "production" and a "staging"
`MySQLInstance` composition. In either case the application operator may
configure any value supported by the composite infrastructure resource's schema,
with all other values being deferred to the composition.
> Note that per [Current Limitations] it is not currently possible to specify a
> default or an enforced composition for a particular kind of infrastructure
> resource, but these options will be added in a future release of Crossplane.
> In the meantime composite resource and/or resource requirement authors must
> specify a composition for their resource to use.
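In the meantime, a requirement author can select a composition by label, as in the quickstart earlier in this series. A hedged sketch (the resource name and label values here are illustrative, using the `MySQLInstance` schema defined above):

```yaml
# Illustrative sketch of a namespaced requirement that explicitly
# selects a Composition via labels.
apiVersion: example.org/v1alpha1
kind: MySQLInstanceRequirement
metadata:
  name: example-requirement
  namespace: default
spec:
  parameters:
    version: "5.7"
    storageGB: 20
    location: us-west
  # Match the labels set on the desired Composition, e.g. the
  # example-azure Composition below.
  compositionSelector:
    matchLabels:
      provider: azure
  writeConnectionSecretToRef:
    name: example-connection-details
```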
The below `Composition` satisfies the `MySQLInstance` defined in the previous
section by composing an Azure SQL server, firewall rule, and resource group:
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: Composition
metadata:
name: example-azure
labels:
purpose: example
provider: azure
spec:
# This Composition declares that it satisfies the MySQLInstance composite
# resource defined above - i.e. it patches "from" a MySQLInstance.
from:
apiVersion: example.org/v1alpha1
kind: MySQLInstance
# This Composition reconciles an instance of a MySQLInstance by patching from
# the MySQLInstance "to" new instances of the infrastructure resources below.
# These resources may be the managed resources of an infrastructure provider
# such as provider-azure, or other composite infrastructure resources.
to:
# A MySQLInstance that uses this Composition will be composed of an Azure
# ResourceGroup. The "base" for this ResourceGroup specifies the base
# configuration that may be extended or mutated by the patches below.
- base:
apiVersion: azure.crossplane.io/v1alpha3
kind: ResourceGroup
spec:
reclaimPolicy: Delete
providerRef:
name: example
# Patches copy or "overlay" the value of a field path within the composite
# resource (the MySQLInstance) to a field path within the composed resource
# (the ResourceGroup). In the below example any labels and annotations will
# be propagated from the MySQLInstance to the ResourceGroup, as will the
# location.
patches:
- fromFieldPath: "metadata.labels"
toFieldPath: "metadata.labels"
- fromFieldPath: "metadata.annotations"
toFieldPath: "metadata.annotations"
- fromFieldPath: "spec.parameters.location"
toFieldPath: "spec.location"
# Sometimes it is necessary to "transform" the value from the composite
# resource into a value suitable for the composed resource, for example an
# Azure based composition may represent geographical locations differently
# from a GCP based composition that satisfies the same composite resource.
# This can be done by providing an optional array of transforms, such as
# the below that will transform the MySQLInstance spec.parameters.location
# value "us-west" into the ResourceGroup spec.location value "West US".
transforms:
- type: map
map:
us-west: West US
us-east: East US
au-east: Australia East
# A MySQLInstance that uses this Composition will also be composed of an
# Azure MySQLServer.
- base:
apiVersion: database.azure.crossplane.io/v1beta1
kind: MySQLServer
spec:
forProvider:
# When this MySQLServer is created it must specify a ResourceGroup in
# which it will exist. The below resourceGroupNameSelector corresponds
# to the spec.forProvider.resourceGroupName field of the MySQLServer.
# It selects a ResourceGroup with a matching controller reference.
# Two resources that are part of the same composite resource will have
# matching controller references, so this MySQLServer will always
# select the ResourceGroup above. If this Composition included more
# than one ResourceGroup they could be differentiated by matchLabels.
resourceGroupNameSelector:
matchControllerRef: true
administratorLogin: notadmin
sslEnforcement: Disabled
sku:
tier: GeneralPurpose
capacity: 8
family: Gen5
storageProfile:
backupRetentionDays: 7
geoRedundantBackup: Disabled
providerRef:
name: example
writeConnectionSecretToRef:
namespace: crossplane-system
reclaimPolicy: Delete
patches:
- fromFieldPath: "metadata.labels"
toFieldPath: "metadata.labels"
- fromFieldPath: "metadata.annotations"
toFieldPath: "metadata.annotations"
- fromFieldPath: "metadata.uid"
toFieldPath: "spec.writeConnectionSecretToRef.name"
transforms:
# Transform the value from the MySQLInstance using Go string formatting.
# This can be used to prefix or suffix a string, or to convert a number
# to a string. See https://golang.org/pkg/fmt/ for more detail.
- type: string
string:
fmt: "%s-mysqlserver"
- fromFieldPath: "spec.parameters.version"
toFieldPath: "spec.forProvider.version"
- fromFieldPath: "spec.parameters.location"
toFieldPath: "spec.forProvider.location"
transforms:
- type: map
map:
us-west: West US
us-east: East US
au-east: Australia East
- fromFieldPath: "spec.parameters.storageGB"
toFieldPath: "spec.forProvider.storageProfile.storageMB"
# Transform the value from the MySQLInstance by multiplying it by 1024 to
# convert Gigabytes to Megabytes.
transforms:
- type: math
math:
multiply: 1024
# In addition to a base and patches, this composed MySQLServer declares that
# it can fulfil the connectionSecretKeys contract required by the definition
# of the MySQLInstance. This MySQLServer writes a connection secret with a
# username, password, and endpoint that may be used to connect to it. These
# connection details will also be exposed via the composite resource's
# connection secret. Exactly one composed resource must provide each secret
# key, but different composed resources may provide different keys.
connectionDetails:
- fromConnectionSecretKey: username
- fromConnectionSecretKey: password
# The name of the required MySQLInstance connection secret key can be
# supplied if it is different from the connection secret key exposed by
# the MySQLServer.
- name: hostname
fromConnectionSecretKey: endpoint
# In some cases it may be desirable to inject a fixed connection secret
# value, for example to expose fixed, non-sensitive connection details
# like standard ports that are not published to the composed resource's
# connection secret.
- name: port
value: "3306"
# A MySQLInstance that uses this Composition will also be composed of an
# Azure MySQLServerFirewallRule.
- base:
apiVersion: database.azure.crossplane.io/v1alpha3
kind: MySQLServerFirewallRule
spec:
forProvider:
resourceGroupNameSelector:
matchControllerRef: true
serverNameSelector:
matchControllerRef: true
properties:
startIpAddress: 10.10.0.0
endIpAddress: 10.10.255.254
virtualNetworkSubnetIdSelector:
name: sample-subnet
providerRef:
name: example
reclaimPolicy: Delete
patches:
- fromFieldPath: "metadata.labels"
toFieldPath: "metadata.labels"
- fromFieldPath: "metadata.annotations"
toFieldPath: "metadata.annotations"
# Some composite resources may be "dynamically provisioned" - i.e. provisioned
# on-demand to satisfy an application's requirement for infrastructure. The
# writeConnectionSecretsToNamespace and reclaimPolicy fields configure default
# values used when dynamically provisioning a composite resource; they are
# explained in more detail below.
writeConnectionSecretsToNamespace: crossplane-system
reclaimPolicy: Delete
```
Field paths reference a field within a Kubernetes object via a simple string.
API conventions describe the syntax as "standard JavaScript syntax for accessing
that field, assuming the JSON object was transformed into a JavaScript object,
without the leading dot, such as metadata.name". Array indices are specified via
square braces while object fields may be specified via a period or via square
braces. Kubernetes field paths do not support advanced features of JSON paths,
such as `@`, `$`, or `*`. For example, given the below `Pod`:
```yaml
apiVersion: v1
kind: Pod
metadata:
name: example-pod
annotations:
example.org/a: example-annotation
spec:
containers:
- name: example-container
image: example:latest
command: [example]
args: ["--debug", "--example"]
```
* `metadata.name` would contain "example-pod"
* `metadata.annotations['example.org/a']` would contain "example-annotation"
* `spec.containers[0].name` would contain "example-container"
* `spec.containers[0].args[1]` would contain "--example"
> Note that Compositions provide _intentionally_ limited functionality when
> compared to powerful templating and composition tools like Helm or Kustomize.
> This allows a Composition to be a schemafied Kubernetes-native resource that
> can be stored in and validated by the Kubernetes API server at authoring time
> rather than invocation time.
### Publish Your Infrastructure Resource
An infrastructure operator may choose to publish any defined kind of composite
infrastructure resource to their application operators. Doing so is optional -
a kind of resource that is defined but not published may still be created at the
cluster scope, but application operators may not create a namespaced requirement
for that kind of resource.
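For instance, a defined-but-unpublished `MySQLInstance` could still be created directly at the cluster scope. A hedged sketch using the schema defined above (the name and secret namespace here are illustrative assumptions):

```yaml
# Illustrative sketch of a cluster-scoped composite resource created
# directly by an infrastructure operator, without a requirement.
apiVersion: example.org/v1alpha1
kind: MySQLInstance
metadata:
  # Composite resources are cluster scoped, so no namespace is set.
  name: example-mysqlinstance
spec:
  parameters:
    version: "5.7"
    storageGB: 20
    location: us-west
  compositionSelector:
    matchLabels:
      provider: azure
  writeConnectionSecretToRef:
    # Assumed: a cluster-scoped resource names the namespace its
    # connection secret should be written to.
    name: example-mysqlinstance-connection-details
    namespace: crossplane-system
```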
Infrastructure is published by an `InfrastructurePublication`:
```yaml
apiVersion: apiextensions.crossplane.io/v1alpha1
kind: InfrastructurePublication
metadata:
# InfrastructurePublications must use the same name as the
# InfrastructureDefinition that defines the resource they publish.
name: mysqlinstances.example.org
spec:
infrastructureDefinitionRef:
name: mysqlinstances.example.org
```
`kubectl describe` can be used to confirm that a composite infrastructure
resource was successfully published. Note the `Established` condition and
events, which indicate the process was successful.
```console
$ kubectl describe infrastructurepublication mysqlinstances.example.org
Name: mysqlinstances.example.org
Namespace:
Labels: <none>
Annotations: API Version: apiextensions.crossplane.io/v1alpha1
Kind: InfrastructurePublication
Metadata:
Creation Timestamp: 2020-05-15T05:30:44Z
Finalizers:
published.apiextensions.crossplane.io
Generation: 1
Resource Version: 1418122
Self Link: /apis/apiextensions.crossplane.io/v1alpha1/infrastructurepublications/mysqlinstances.example.org
UID: 534c5151-2adc-4b4b-9248-fa30740e6b32
Spec:
Infrastructure Definition Ref:
    Name:  mysqlinstances.example.org
Status:
Conditions:
Last Transition Time: 2020-05-15T05:30:46Z
Reason: Successfully reconciled resource
Status: True
Type: Synced
Last Transition Time: 2020-05-15T05:30:46Z
Reason: Created CRD and started controller
Status: True
Type: Established
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ApplyInfrastructurePublication 7m35s apiextensions/infrastructurepublication.apiextensions.crossplane.io waiting for CustomResourceDefinition to be established
Normal GetInfrastructureDefinition 7m32s (x7 over 7m36s) apiextensions/infrastructurepublication.apiextensions.crossplane.io Got published InfrastructureDefinition
Normal RenderCustomResourceDefinition 7m32s (x7 over 7m36s) apiextensions/infrastructurepublication.apiextensions.crossplane.io Rendered CustomResourceDefinition
Normal ApplyInfrastructurePublication 7m32s (x5 over 7m34s) apiextensions/infrastructurepublication.apiextensions.crossplane.io Applied CustomResourceDefinition and (re)started requirement controller
```
### Permit Crossplane to Reconcile Your Infrastructure Resource
Typically Crossplane runs using a service account that does not have access to
reconcile arbitrary kinds of resource. A `ClusterRole` can grant Crossplane
permission to reconcile your newly defined and published resource:
```yaml
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: mysqlinstances.example.org
labels:
rbac.crossplane.io/aggregate-to-crossplane: "true"
rules:
- apiGroups:
- example.org
resources:
- mysqlinstances
- mysqlinstances/status
- mysqlinstancerequirements
- mysqlinstancerequirements/status
verbs:
- "*"
```
## Using Infrastructure
![Infrastructure Composition Provisioning]
Crossplane offers several ways for both infrastructure operators and application
operators to use the composite infrastructure they've defined and published:
1. Only infrastructure operators can create or manage infrastructure of a kind
that is not published to application operators.
1. Infrastructure operators can create infrastructure of a published kind. This
allows an application operator to create a requirement that specifically
requires the resource the infrastructure operator created.
1. Application operators can create a requirement for infrastructure of a kind
that has been published, and it will be provisioned on-demand.
Options one and two are frequently referred to as "static provisioning", while
option three is known as "dynamic provisioning".
> Note that infrastructure operator focused Crossplane concepts are cluster
> scoped - they exist outside any namespace. Crossplane assumes infrastructure
> operators will have similar RBAC permissions to cluster administrators, and
> will thus be permitted to manage cluster scoped resources. Application
> operator focused Crossplane concepts are namespaced. Crossplane assumes
> application operators will be permitted access to the namespace(s) in which
> their applications run, and not to cluster scoped resources.
### Creating and Managing Composite Infrastructure
An infrastructure operator may wish to author a composite resource of a kind
that is published - i.e. a resource kind that has an `InfrastructurePublication`
as well as an `InfrastructureDefinition` - so that an application operator may
later author a requirement for that exact resource. This pattern is useful for
infrastructure resources that may take several minutes to provision - the
infrastructure operator can keep a pool of resources available in advance in
order to ensure application requirements may be instantly satisfied.
In some cases an infrastructure operator may wish to use Crossplane to model
composite infrastructure that they do not wish to allow application operators to
provision. Consider a `VPCNetwork` composite resource that creates an AWS VPC
network with an internet gateway, route table, and several subnets. Defining
this resource as a composite allows the infrastructure operator to easily reuse
their configuration, but it does not make sense in their organisation to allow
application operators to create "supporting infrastructure" like a VPC network.
In both of the above scenarios the infrastructure operator may statically
provision a composite resource; i.e. author it directly rather than via its
corresponding requirement. The `MySQLInstance` composite infrastructure resource
defined above could be authored as follows:
```yaml
apiVersion: example.org/v1alpha1
kind: MySQLInstance
metadata:
# Composite resources are cluster scoped, so there's no need for a namespace.
name: example
spec:
# The schema of the spec.parameters object is defined by the earlier example
# of an InfrastructureDefinition. The location, storageGB, and version fields
# are patched onto the ResourceGroup, MySQLServer, and MySQLServerFirewallRule
# that this MySQLInstance composes.
parameters:
location: au-east
storageGB: 20
version: "5.7"
# Support for a compositionRef is automatically injected into the schema of
# all defined composite infrastructure resources. This allows the resource
# author to explicitly reference a Composition that this composite resource
# should use - in this case the earlier example-azure Composition. Note that
# it is also possible to select a composition by labels - see the below
# MySQLInstanceRequirement for an example of this approach.
compositionRef:
name: example-azure
# Support for a writeConnectionSecretToRef is automatically injected into the
# schema of all defined composite infrastructure resources. This allows the
# resource to write a connection secret containing any details required to
# connect to it - in this case the hostname, username, and password. Composite
# resource authors may omit this reference if they do not need or wish to
# write these details.
writeConnectionSecretToRef:
namespace: infra-secrets
name: example-mysqlinstance
# Support for a reclaimPolicy is automatically injected into the schema of all
# defined composite infrastructure resources. The reclaim policy applies only
# to published kinds of infrastructure - it controls whether the resource is
# deleted or retained when its corresponding Requirement is deleted. If an
# application authored a MySQLInstanceRequirement for this MySQLInstance then
# later deleted their MySQLInstanceRequirement this MySQLInstance and all of
# the resources it composes would be deleted. If the policy were instead set
# to 'Retain' the MySQLInstance would be retained, for example to allow an
# infrastructure operator to perform manual cleanup.
reclaimPolicy: Delete
```
Any updates to the `MySQLInstance` composite infrastructure resource will be
immediately reconciled with the resources it composes. For example if more
storage were needed an update to the `spec.parameters.storageGB` field would
immediately be propagated to the `spec.forProvider.storageProfile.storageMB`
field of the composed `MySQLServer` due to the relationship established between
these two fields by the patches configured in the `example-azure` `Composition`.
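That relationship might be expressed by a patch along these lines in the
`example-azure` `Composition` - a sketch, assuming the patch and multiplication
transform syntax referenced later in this guide:

```yaml
# A sketch of the patch, within the example-azure Composition, that relates
# the two fields. The math transform converts gigabytes to megabytes.
- fromFieldPath: "spec.parameters.storageGB"
  toFieldPath: "spec.forProvider.storageProfile.storageMB"
  transforms:
  - type: math
    math:
      multiply: 1024
```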
`kubectl describe` may be used to examine a composite infrastructure resource.
Note the `Synced` and `Ready` conditions below. The former indicates that
Crossplane is successfully reconciling the composite resource by updating the
composed resources. The latter indicates that all composed resources are also
indicating that they are in condition `Ready`, and therefore the composite
resource should be online and ready to use. More detail about the health and
configuration of the composite resource can be determined by describing each
composed resource. The kinds and names of each composed resource are exposed as
"Resource Refs" - for example `kubectl describe mysqlserver example-zrpgr` will
describe the detailed state of the composed Azure `MySQLServer`.
```console
$ kubectl describe mysqlinstance.example.org
Name: example
Namespace:
Labels: <none>
Annotations: API Version: example.org/v1alpha1
Kind: MySQLInstance
Metadata:
Creation Timestamp: 2020-05-15T06:53:16Z
Generation: 4
Resource Version: 1425809
Self Link: /apis/example.org/v1alpha1/mysqlinstances/example
UID: f654dd52-fe0e-47c8-aa9b-235c77505674
Spec:
Composition Ref:
Name: example-azure
Parameters:
Location: au-east
Storage GB: 20
Version: 5.7
Reclaim Policy: Delete
Resource Refs:
API Version: azure.crossplane.io/v1alpha3
Kind: ResourceGroup
Name: example-wspmk
UID: 4909ab46-95ef-4ba7-8f7a-e1d9ee1a6b23
API Version: database.azure.crossplane.io/v1beta1
Kind: MySQLServer
Name: example-zrpgr
UID: 3afb903e-32db-4834-a6e7-31249212dca0
API Version: database.azure.crossplane.io/v1alpha3
Kind: MySQLServerFirewallRule
Name: example-h4zjn
UID: 602c8412-7c33-4338-a3af-78166c17b1a0
Write Connection Secret To Ref:
Name: example-mysqlinstance
Namespace: infra-secrets
Status:
Conditions:
Last Transition Time: 2020-05-15T06:56:46Z
Reason: Resource is available for use
Status: True
Type: Ready
Last Transition Time: 2020-05-15T06:53:16Z
Reason: Successfully reconciled resource
Status: True
Type: Synced
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SelectComposition 10s (x7 over 3m40s) composite/mysqlinstances.example.org Successfully selected composition
Normal PublishConnectionSecret 10s (x7 over 3m40s) composite/mysqlinstances.example.org Successfully published connection details
Normal ComposeResources 10s (x7 over 3m40s) composite/mysqlinstances.example.org Successfully composed resources
```
### Creating an Infrastructure Requirement
Infrastructure requirements represent an application's requirement for a defined
kind of infrastructure, for example the above `MySQLInstance`. Requirements are
a proxy for the kind of resource they require, allowing application operators to
provision and consume infrastructure. An infrastructure requirement may require
pre-existing, statically provisioned infrastructure or it may dynamically
provision infrastructure on-demand. A requirement for a defined kind of
composite infrastructure resource is always suffixed with `Requirement`. A
requirement for a composite infrastructure resource of kind `MySQLInstance` is,
for example, of kind `MySQLInstanceRequirement`.
The below requirement explicitly requires the `MySQLInstance` authored in the
previous example:
```yaml
# The MySQLInstanceRequirement always has the same API group and version as the
# resource it requires. Its kind is always suffixed with Requirement.
apiVersion: example.org/v1alpha1
kind: MySQLInstanceRequirement
metadata:
# Infrastructure requirements are namespaced.
namespace: default
name: example
spec:
# The schema of the spec.parameters object is defined by the earlier example
# of an InfrastructureDefinition. The location, storageGB, and version fields
# are patched onto the ResourceGroup, MySQLServer, and MySQLServerFirewallRule
# composed by the required MySQLInstance.
parameters:
location: au-east
storageGB: 20
version: "5.7"
# Support for a resourceRef is automatically injected into the schema of all
# published infrastructure requirement resources. The resourceRef specifies
# the explicit MySQLInstance that is required.
resourceRef:
name: example
# Support for a writeConnectionSecretToRef is automatically injected into the
# schema of all published infrastructure requirement resources. This allows
# the resource to write a connection secret containing any details required to
# connect to it - in this case the hostname, username, and password.
writeConnectionSecretToRef:
name: example-mysqlinstance
```
A requirement may omit the `resourceRef` and instead include a `compositionRef`
(as in the previous `MySQLInstance` example) or a `compositionSelector` in order
to trigger dynamic provisioning. A requirement that does not include a reference
to an existing composite infrastructure resource will have a suitable composite
resource provisioned on demand:
```yaml
apiVersion: example.org/v1alpha1
kind: MySQLInstanceRequirement
metadata:
namespace: default
name: example
spec:
parameters:
location: au-east
storageGB: 20
version: "5.7"
# Support for a compositionSelector is automatically injected into the schema
# of all published infrastructure requirement resources. This selector selects
# the example-azure composition by its labels.
compositionSelector:
matchLabels:
purpose: example
provider: azure
writeConnectionSecretToRef:
name: example-mysqlinstance
```
> Note that compositionSelector labels can form a shared language between the
> infrastructure operators who define compositions and the application operators
> who require composite resources. Compositions could be labelled by zone, size,
> or purpose in order to allow application operators to request a class of
> composite resource by describing their needs such as "east coast, production".
Like composite resources, requirements can be examined using `kubectl describe`.
The `Synced` and `Ready` conditions have the same meaning as the `MySQLInstance`
above. The "Resource Ref" indicates the name of the composite resource that was
either explicitly required, or in the case of the below requirement dynamically
provisioned.
```console
$ kubectl describe mysqlinstancerequirement.example.org example
Name: example
Namespace: default
Labels: <none>
Annotations: API Version: example.org/v1alpha1
Kind: MySQLInstanceRequirement
Metadata:
Creation Timestamp: 2020-05-15T07:08:11Z
Finalizers:
finalizer.apiextensions.crossplane.io
Generation: 3
Resource Version: 1428420
Self Link: /apis/example.org/v1alpha1/namespaces/default/mysqlinstancerequirements/example
UID: d87e9580-9d2e-41a7-a198-a39851815840
Spec:
Composition Selector:
Match Labels:
Provider: azure
Purpose: example
Parameters:
Location: au-east
Storage GB: 20
Version: 5.7
Resource Ref:
Name: default-example-8t4tb
Write Connection Secret To Ref:
Name: example-mysqlinstance
Status:
Conditions:
Last Transition Time: 2020-05-15T07:26:49Z
Reason: Resource is available for use
Status: True
Type: Ready
Last Transition Time: 2020-05-15T07:08:11Z
Reason: Successfully reconciled resource
Status: True
Type: Synced
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal ConfigureCompositeResource 8m23s requirement/mysqlinstances.example.org Successfully configured composite resource
Normal BindCompositeResource 8m23s (x7 over 8m23s) requirement/mysqlinstances.example.org Composite resource is not yet ready
Normal BindCompositeResource 4m53s (x4 over 23m) requirement/mysqlinstances.example.org Successfully bound composite resource
Normal PropagateConnectionSecret 4m53s (x4 over 23m) requirement/mysqlinstances.example.org Successfully propagated connection details from composite resource
```
## Current Limitations
Defining and publishing composite infrastructure resources is an experimental
feature of Crossplane. At present the below functionality is planned but not yet
implemented:
* Updates to a composite infrastructure resource are immediately applied to the
resources it composes, but updates to a requirement are not yet applied to the
composite resource that was allocated to satisfy the requirement. In a future
release of Crossplane updating a requirement will update its allocated
composite infrastructure resource.
* A composite infrastructure resource (or dynamically provisioned requirement)
must specify a composition reference or selector. In a future Crossplane
release it will be possible to configure a default composition for resources
that provide no information about what composition they desire, and to
configure a single composition that will be enforced for all resources.
* Only three transforms are currently supported - string format, multiplication,
and map. Crossplane intends to limit the set of supported transforms, and will
add more as clear use cases appear.
* Compositions are mutable, and updating a composition causes all composite
resources that use that composition to be updated accordingly. A future
release of Crossplane may alter this behaviour.
* Composite resources that are retained when the requirement they were allocated
to is deleted become available for allocation to another requirement. This may
not be the desired behaviour, for example if the application that previously
required the composite infrastructure resource wrote sensitive data to it. A
future release of Crossplane may alter the reclaim policy options.
[Current Limitations]: #current-limitations
[Infrastructure Composition Concepts]: composition-concepts.png
[structural schemas]: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#specifying-a-structural-schema
[Infrastructure Composition Provisioning]: composition-provisioning.png

---
title: Managed Resources
toc: true
weight: 102
indent: true
---
# Managed Resources
## Overview
Managed resources are the Crossplane representation of cloud [provider][provider]
resources. They are primitive, low-level custom resources that can be used
directly to provision external cloud resources for an application, or as part of
an infrastructure composition.
For example, `RDSInstance` in the AWS Provider corresponds to an actual RDS
instance in AWS. There is a one-to-one relationship, and changes to a managed
resource are reflected directly on the corresponding resource in the provider.
You can browse [API Reference][api-reference] to discover all available managed resources.
## Syntax
Crossplane API conventions extend the Kubernetes API conventions for the schema
of Crossplane managed resources. Following is an example of `RDSInstance`:
```yaml
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
metadata:
name: foodb
spec:
forProvider:
dbInstanceClass: db.t2.small
masterUsername: root
allocatedStorage: 20
engine: mysql
writeConnectionSecretToRef:
name: mysql-secret
namespace: crossplane-system
providerRef:
name: mycreds
reclaimPolicy: Delete
```
In Kubernetes, the top-level `spec` field represents the user's desired state.
Crossplane adheres to that convention and adds its own conventions for how the
fields under `spec` should look.
* `writeConnectionSecretToRef`: A reference to the secret to which this managed
resource will write its connection details, so that you can mount it to your pods
in the same namespace. For `RDSInstance`, this secret contains
`endpoint`, `username`, and `password`.
* `providerRef`: A reference to the `Provider` resource that supplies the
information Crossplane needs to authenticate to the cloud provider. `Provider`
resources refer to a `Secret` and may contain other authentication details.
* `reclaimPolicy`: Enum that specifies whether the actual cloud resource should
be deleted when this managed resource is deleted in the Kubernetes API server.
Possible values are `Delete` and `Retain`.
* `forProvider`: While the rest of the fields relate to how Crossplane should
behave, the fields under `forProvider` are used solely to configure the actual
external resource. In most cases, the field names correspond to what exists
in the provider's API reference.
The object under the `forProvider` field can get large depending on the provider
API. For example, GCP `ServiceAccount` has only a few fields, while GCP
`CloudSQLInstance` has over 100 fields that you can configure.
### Versioning
Crossplane closely follows the [Kubernetes API versioning conventions][api-versioning]
for the CRDs that it deploys. In short, for `vXbeta` and `vX` versions, you can
expect that either automatic migration or instructions for manual migration will
be provided when a new version of that CRD schema is released.
### Grouping
In general, managed resources are high-fidelity resources, meaning they expose
the parameters and behaviors that are provided by the external resource API. This
applies to the grouping of resources, too. For example, `RDSInstance` appears
under the `database` API group, so its `apiVersion` and `kind` look like the
following:
```yaml
apiVersion: database.aws.crossplane.io/v1beta1
kind: RDSInstance
```
## Behavior
As a general rule, managed resource controllers try not to make any decision
that is not specified by the user in the desired state since managed resources
are the lowest level primitives that operate directly on the cloud provider APIs.
### Continuous Reconciliation
Crossplane providers continuously reconcile the managed resource to achieve the
desired state. The parameters under `spec` are considered the one and only source
of truth for the external resource. This means that if someone changed a
configuration in the UI of the provider, like AWS Console, Crossplane will change
it back to what's given under `spec`.
#### Immutable Properties
There are configuration parameters in external resources that cloud providers do not
allow to be changed. If the corresponding field in the managed resource is changed
by the user, Crossplane submits the new desired state to the provider and returns
the error, if any. For example, in AWS, you cannot change the region of
an `RDSInstance`.
Some infrastructure tools such as Terraform delete and recreate the resource
to accommodate those changes but Crossplane does not take that route. Unless
the managed resource is deleted and its `reclaimPolicy` is `Delete`, its controller
never deletes the external resource in the provider.
> Immutable fields are marked as `immutable` in the Crossplane codebase, but
> Kubernetes does not yet have an immutable field notation in CRDs.
### External Name
By default the name of the managed resource is used as the name of the external
cloud resource that will show up in your cloud console.
To specify a different external name, Crossplane has a special annotation to
represent the name of the external resource. For example, I would like to
have a `CloudSQLInstance` with an external name that is different from its
managed resource name:
```yaml
apiVersion: database.gcp.crossplane.io/v1beta1
kind: CloudSQLInstance
metadata:
name: foodb
annotations:
crossplane.io/external-name: my-special-db
spec:
...
```
When you create this managed resource, you will see that the name of the
`CloudSQLInstance` in the GCP console will be `my-special-db`.
If the annotation is not given, Crossplane will fill it in with the name of the
managed resource by default. In cases where the provider doesn't allow you to
name the resource, like AWS VPC, the controller creates the resource and sets
the external-name annotation to the name that the cloud provider chose. So, you
would see something like `vpc-28dsnh3` as the value of the
`crossplane.io/external-name` annotation of your AWS `VPC` resource even if you
added your own custom external name during creation.
### Late Initialization
For some optional fields, users rely on the default that the cloud provider
chooses for them. Since Crossplane treats the managed resource as the source of
truth, the values of those fields need to exist in the `spec` of the managed
resource. So, in each reconciliation, Crossplane fills in the value of any field
that was left empty by the user but was assigned a value by the provider. For
example, there could be two fields like `region` and `availabilityZone`, and you
might want to give only `region` and leave the availability zone to be chosen by
the cloud provider. In that case, if the provider assigns an availability zone,
Crossplane gets that value and fills in `availabilityZone`. Note that if the
field is already filled, the controller won't override its value.
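As a sketch, with illustrative field names and values, late initialization
would look like this:

```yaml
# As authored by the user - only region is set.
spec:
  forProvider:
    region: us-west-2
---
# After a reconcile, the controller fills in the zone the provider chose.
# It will never override a value the user has already set.
spec:
  forProvider:
    region: us-west-2
    availabilityZone: us-west-2a
```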
### Deletion
When a deletion request is made for a managed resource, its controller starts the
deletion process immediately. However, the managed resource is kept in the Kubernetes
API (via a finalizer) until the controller confirms the external resource in the
cloud is gone. So you can be sure that if the managed resource is deleted, then
the external cloud resource is also deleted. Any errors that happen during
deletion will be added to the `status` of the managed resource, so you can
troubleshoot any issues.
## Dependencies
In many cases, an external resource refers to another one for a specific configuration.
For example, you could want your Azure Kubernetes cluster in a specific
Virtual Network. External resources have specific fields for these relations, however,
they usually require the information to be supplied in different formats. In Azure
MySQL, you might be required to enter only the name of the Virtual Network while in
Azure Kubernetes, it could be required to enter a string in a specific format
that includes other information such as resource group name.
In Crossplane, users have three fields to refer to another resource. Here is an
example from an Azure MySQL managed resource referring to an Azure Resource Group:
```yaml
spec:
forProvider:
resourceGroupName: foo-res-group
resourceGroupNameRef:
name: resourcegroup
resourceGroupNameSelector:
matchLabels:
app: prod
```
In this example, the user provided only a set of labels to select a `ResourceGroup`
managed resource that already exists in the cluster via `resourceGroupNameSelector`.
After a specific `ResourceGroup` is selected, `resourceGroupNameRef` is filled
with the name of that `ResourceGroup` managed resource. In the last step,
Crossplane fills the actual `resourceGroupName` field in whatever format Azure
accepts. Once a dependency is resolved, the controller never changes it.
Users are able to specify any of these three fields:
- Selector to select via labels
- Reference to point to a determined managed resource
- Actual value that will be submitted to the provider
It's important to note that when a reference exists, the managed resource does
not create the external resource until the referenced object is ready. In this
example, the creation call for the Azure MySQL Server will not be made until the
referenced `ResourceGroup` has its `Ready` status condition set to true.
## Importing Existing Resources
If you have some resources that are already provisioned in the cloud provider,
you can import them as managed resources and let Crossplane manage them. What you
need to do is to enter the name of the external resource as well as the required
fields on the managed resource. For example, let's say I have a GCP Network
provisioned from GCP console and I would like to migrate it to Crossplane. Here
is the YAML that I need to create:
```yaml
apiVersion: compute.gcp.crossplane.io/v1beta1
kind: Network
metadata:
name: foo-network
  annotations:
crossplane.io/external-name: existing-network
spec:
providerRef:
name: gcp-creds
```
Crossplane will check whether a GCP Network called `existing-network` exists, and
if it does, then the optional fields under `forProvider` will be filled with the
values that are fetched from the provider.
Note that if a resource has required fields, you must fill those fields or the
creation of the managed resource will be rejected. So, in those cases, you will
need to enter the name of the resource as well as the required fields as indicated
in the [API Reference][api-reference] documentation.
## Backup and Restore
Crossplane adheres to Kubernetes conventions as much as possible, and one of the
advantages we gain is backup and restore with tools that work with native
Kubernetes types, like [Velero][velero].
If you'd like to back up and restore manually, you can simply export the
resources and save the YAML in your file system. When you reload them, as we've
seen in the import section, their `crossplane.io/external-name` annotation and
required fields are there, and those are enough to import a resource. The tool
you're using needs to store the `annotations` and `spec` fields, which most
tools, including Velero, do.
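For example, a minimal manual backup and restore might look like the following -
the resource kind shown is illustrative:

```console
# Export all GCP Network managed resources, annotations and spec included.
$ kubectl get networks.compute.gcp.crossplane.io -o yaml > networks-backup.yaml

# Later, re-import them; the external-name annotations let Crossplane adopt
# the existing cloud resources rather than create new ones.
$ kubectl apply -f networks-backup.yaml
```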
[api-versioning]: https://kubernetes.io/docs/reference/using-api/api-overview/#api-versioning
[velero]: https://velero.io/
[api-reference]: api-docs/overview.md
[provider]: provider.md

---
title: Introduction
toc: true
weight: 100
---
# Overview
Crossplane introduces multiple building blocks that enable you to provision,
publish, and consume infrastructure using the Kubernetes API. These individual
concepts work together to allow for powerful separation of concern between
different personas in an organization, meaning that each member of a team
interacts with Crossplane at an appropriate level of abstraction.
![Crossplane Concepts]
The diagram above illustrates a common workflow using most of Crossplane's
functionality.
An infrastructure operator...
1. Installs Crossplane and one or more [providers] (in this case
[provider-azure]) as [packages]. This enables provisioning of external
infrastructure from the Kubernetes cluster.
2. Creates an `InfrastructureDefinition` to define a new type of resource (in
this case a `MySQLInstance`). Crossplane creates a cluster-scoped CRD of kind
`MySQLInstance` in response.
3. Creates an `InfrastructurePublication` to make provisioning a `MySQLInstance`
possible at the namespace scope. Crossplane creates a namespace-scoped CRD of
kind `MySQLInstanceRequirement` in response.
4. Creates a `Composition` that instructs Crossplane how to render one or more
infrastructure primitives installed by providers in response to the creation
of a `MySQLInstance` or `MySQLInstanceRequirement`. In this case the
`Composition` specifies that Azure `MySQLServer` and `MySQLFirewallRule`
[managed resources] should be created.
An application developer...
1. Creates an [OAM] `Component` for their service that specifies that they wish
to be run as an OAM `ContainerizedWorkload`.
2. Creates an OAM `Component` for their MySQL database that can be satisfied by
the published `MySQLInstanceRequirement` type.
An application operator...
1. Creates an OAM `ApplicationConfiguration`, which is comprised of the two
`Component` types that were defined by the application developer, and a
`ManualScalerTrait` trait to modify the replicas in the
`ContainerizedWorkload`. In response, Crossplane translates the OAM types
into Kubernetes-native types, in this case a `Deployment` and `Service` for
the `ContainerizedWorkload` component, and a `MySQLServer` and
`MySQLFirewallRule` for the `MySQLInstanceRequirement` component.
2. Crossplane provisions the external infrastructure and makes the connection
information available to the application, allowing it to connect to and
consume the MySQL database on Azure.
The concepts used in this workflow are explained in greater detail below and in
their individual documentation.
## Packages
Packages allow Crossplane to be extended to include new functionality. This
typically looks like bundling a set of Kubernetes [CRDs] and [controllers] that
represent and manage external infrastructure (i.e. a provider), then installing
them into a cluster where Crossplane is running. Crossplane handles making sure
any new CRDs do not conflict with existing ones, as well as manages the RBAC and
security of new packages. Packages are not strictly required to be providers,
but it is the most common use-case for packages at this time.
## Providers
Providers are packages that enable Crossplane to provision infrastructure on an
external service. They bring CRDs (i.e. managed resources) that map one-to-one
to external infrastructure resources, as well as controllers to manage the
life-cycle of those resources. You can read more about providers, including how
to install and configure them, in the [providers documentation].
## Managed Resources
Managed resources are Kubernetes custom resources that represent infrastructure
primitives. Managed resources with an API version of `v1beta1` or higher support
every field that the cloud provider does for the given resource. You can find
the Managed Resources and their API specifications for each provider on
[doc.crds.dev] and learn more in the [managed resources documentation].
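As an illustration, a managed resource is declared like any other Kubernetes
object. The following sketch of an Azure `MySQLServer` uses placeholder values;
see [doc.crds.dev] for the authoritative field names:

```yaml
apiVersion: database.azure.crossplane.io/v1beta1
kind: MySQLServer
metadata:
  name: example-mysqlserver
spec:
  forProvider:
    administratorLogin: myadmin
    location: West US 2
    version: "5.7"
    sslEnforcement: Enabled
    sku:
      tier: GeneralPurpose
      capacity: 2
      family: Gen5
  # Connection details (endpoint, username, password) are written here.
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: example-mysqlserver-conn
```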
## Composition
Composition refers to the machinery that allows you to bundle managed resources
into higher-level infrastructure units, using only the Kubernetes API. New
infrastructure units are defined using the `InfrastructureDefinition`,
`InfrastructurePublication`, and `Composition` types, which result in the
creation of new CRDs in a cluster. Creating instances of these new CRDs results
in the creation of one or more managed resources. You can learn more about all
of these concepts in the [composition documentation].
## OAM
Crossplane supports application management as the Kubernetes implementation of
the [Open Application Model]. As such, Crossplane currently implements the
following OAM API types as Kubernetes custom resources.
* `WorkloadDefinition`: defines the kind of components that an application
developer can use in an application, along with the component's schema.
* Crossplane also implements the core `ContainerizedWorkload` type.
Infrastructure owners may define any resource as a workload type by
referencing it in a `WorkloadDefinition`.
* `Component`: describes functional units that may be instantiated as part of a
larger distributed application. For example, each micro-service in an
application is described as a `Component`.
* `Trait`: a discretionary runtime overlay that augments a component workload
type with additional features. It represents an opportunity for those in the
application operator role to make specific decisions about the configuration
of components, without having to involve the developer.
* Crossplane also implements the core `ManualScalerTrait` type.
* `ApplicationConfiguration`: includes one or more component instances, each
represented by a component definition that defines how an instance of a
component spec should be deployed.
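For example, registering the core `ContainerizedWorkload` type as a usable
workload looks roughly like the following sketch:

```yaml
apiVersion: core.oam.dev/v1alpha2
kind: WorkloadDefinition
metadata:
  # Conventionally named after the CRD that defines the workload's schema.
  name: containerizedworkloads.core.oam.dev
spec:
  definitionRef:
    name: containerizedworkloads.core.oam.dev
```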
For more information, take a look at the [OAM documentation].
<!-- Named Links -->
[Crossplane Concepts]: crossplane-concepts.png
[providers]: #providers
[provider-azure]: https://github.com/crossplane/provider-azure
[packages]: #packages
[managed resources]: #managed-resources
[OAM]: #oam
[CRDs]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
[controllers]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#custom-controllers
[providers documentation]: providers.md
[doc.crds.dev]: https://doc.crds.dev
[managed resources documentation]: managed-resources.md
[composition documentation]: composition.md
[Open Application Model]: https://oam.dev/
[OAM documentation]: oam.md


@ -1,129 +0,0 @@
---
title: Providers
toc: true
weight: 101
indent: true
---
# Providers
Providers extend Crossplane to enable infrastructure resource provisioning.
In order to provision a resource, a Custom Resource Definition (CRD)
needs to be registered in your Kubernetes cluster and a controller must
be watching the Custom Resources those CRDs define. Provider packages
contain many Custom Resource Definitions and their controllers.
Here is the list of current providers:
### AWS Provider
* [GitHub][provider-aws]
* [API Reference][aws-reference]
### GCP Provider
* [GitHub][provider-gcp]
* [API Reference][gcp-reference]
### Azure Provider
* [GitHub][provider-azure]
* [API Reference][azure-reference]
### Rook Provider
* [GitHub][provider-rook]
* [API Reference][rook-reference]
### Alibaba Cloud Provider
* [GitHub][provider-alibaba]
* [API Reference][alibaba-reference]
## Installing Providers
The core Crossplane controller can install provider controllers and CRDs for you
through its own provider packaging mechanism, which is triggered by the application
of a `ClusterPackageInstall` resource. For example, in order to request
installation of the `provider-gcp` package, apply the following resource to the
cluster where Crossplane is running:
```yaml
apiVersion: packages.crossplane.io/v1alpha1
kind: ClusterPackageInstall
metadata:
name: provider-gcp
namespace: crossplane-system
spec:
package: "crossplane/provider-gcp:master"
```
The `spec.package` field refers to the provider's container image. The
Crossplane Package Manager will unpack that image, register its CRDs, set up
the necessary RBAC rules, and then start the controllers.
There are a few other ways to trigger the installation of provider
packages:
* As part of the Crossplane Helm chart, by adding the following flag to your
  `helm install` command: `--set clusterPackages.gcp.deploy=true`. This
  installs the default version hard-coded into that release of the Crossplane
  Helm chart, but if you'd like to specify an exact version, you can add:
  `--set clusterPackages.gcp.version=master`.
* Using [Crossplane kubectl plugin][crossplane-cli]:
`kubectl crossplane package install --cluster -n crossplane-system 'crossplane/provider-gcp:master' provider-gcp`
You can uninstall a provider by deleting the `ClusterPackageInstall` resource
you've created.
## Configuring Providers
In order to authenticate with the external provider API, the provider
controllers need access to credentials. These could be an IAM User for AWS, a
Service Account for GCP, or a Service Principal for Azure. Every provider has a
type called `Provider` that contains information about how to authenticate to
the provider API.
An example `Provider` resource for Azure looks like the following:
```yaml
apiVersion: azure.crossplane.io/v1alpha3
kind: Provider
metadata:
name: prod-acc
spec:
credentialsSecretRef:
namespace: crossplane-system
name: azure-prod-creds
key: credentials
```
You can see that there is a reference to a key in a specific `Secret`. The
value of that key should contain the credentials that the controller will use.
The documentation of each provider should give you an idea of what that
credentials blob should look like. See the [Getting Started][getting-started]
guide for more details.
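For example, the `Secret` referenced above could be created from a manifest
like the following sketch. The keys inside the credentials blob are
placeholders; each provider documents its own expected format:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: azure-prod-creds
  namespace: crossplane-system
type: Opaque
stringData:
  # Placeholder structure; the real blob is provider-specific.
  credentials: |
    {
      "clientId": "...",
      "clientSecret": "...",
      "tenantId": "...",
      "subscriptionId": "..."
    }
```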
The following is an example usage of Azure `Provider`, referenced by a `MySQLServer`:
```yaml
apiVersion: database.azure.crossplane.io/v1beta1
kind: MySQLServer
metadata:
name: prod-sql
spec:
providerRef: prod-acc
...
```
The Azure provider controller will use that provider for this instance of `MySQLServer`.
Since every resource has its own reference to a `Provider`, you can have multiple
`Provider` resources in your cluster referenced by different resources.
<!-- Named Links -->
[provider-aws]: https://github.com/crossplane/provider-aws
[aws-reference]: https://doc.crds.dev/github.com/crossplane/provider-aws
[provider-gcp]: https://github.com/crossplane/provider-gcp
[gcp-reference]: https://doc.crds.dev/github.com/crossplane/provider-gcp
[provider-azure]: https://github.com/crossplane/provider-azure
[azure-reference]: https://doc.crds.dev/github.com/crossplane/provider-azure
[provider-rook]: https://github.com/crossplane/provider-rook
[rook-reference]: https://doc.crds.dev/github.com/crossplane/provider-rook
[provider-alibaba]: https://github.com/crossplane/provider-alibaba
[alibaba-reference]: https://doc.crds.dev/github.com/crossplane/provider-alibaba
[getting-started]: ../getting-started/install-configure.md
[crossplane-cli]: https://github.com/crossplane/crossplane-cli

Binary file not shown (before: 1.2 MiB)

Binary file not shown (before: 292 KiB)

Binary file not shown (before: 375 KiB)


@ -1,310 +0,0 @@
<?xml version="1.0" encoding="utf-8"?>
<!-- Generator: Adobe Illustrator 23.0.1, SVG Export Plug-In . SVG Version: 6.00 Build 0) -->
<svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" x="0px" y="0px"
viewBox="0 0 1312.19 279.51" style="enable-background:new 0 0 1312.19 279.51;" xml:space="preserve">
<style type="text/css">
.st0{clip-path:url(#SVGID_2_);fill:#F7D186;}
.st1{clip-path:url(#SVGID_4_);fill:#FF9234;}
.st2{clip-path:url(#SVGID_6_);enable-background:new ;}
.st3{clip-path:url(#SVGID_8_);}
.st4{clip-path:url(#SVGID_10_);}
.st5{clip-path:url(#SVGID_12_);fill:#FFCD3C;}
.st6{clip-path:url(#SVGID_14_);enable-background:new ;}
.st7{clip-path:url(#SVGID_16_);}
.st8{clip-path:url(#SVGID_18_);}
.st9{clip-path:url(#SVGID_20_);fill:#F3807B;}
.st10{clip-path:url(#SVGID_22_);enable-background:new ;}
.st11{clip-path:url(#SVGID_24_);}
.st12{clip-path:url(#SVGID_26_);}
.st13{clip-path:url(#SVGID_28_);fill:#35D0BA;}
.st14{clip-path:url(#SVGID_30_);fill:#D8AE64;}
.st15{clip-path:url(#SVGID_32_);fill:#004680;}
.st16{clip-path:url(#SVGID_34_);fill:#004680;}
.st17{clip-path:url(#SVGID_36_);fill:#004680;}
.st18{clip-path:url(#SVGID_38_);fill:#004680;}
.st19{clip-path:url(#SVGID_40_);fill:#004680;}
.st20{clip-path:url(#SVGID_42_);fill:#004680;}
.st21{clip-path:url(#SVGID_44_);fill:#004680;}
.st22{clip-path:url(#SVGID_46_);fill:#004680;}
.st23{clip-path:url(#SVGID_48_);fill:#004680;}
.st24{clip-path:url(#SVGID_50_);fill:#004680;}
</style>
<g>
<g>
<defs>
<path id="SVGID_1_" d="M115.47,94.13c-8.4,0-15.22,6.81-15.22,15.22v143.2c0,8.4,6.81,15.22,15.22,15.22s15.22-6.81,15.22-15.22
v-143.2C130.68,100.94,123.87,94.13,115.47,94.13"/>
</defs>
<clipPath id="SVGID_2_">
<use xlink:href="#SVGID_1_" style="overflow:visible;"/>
</clipPath>
<rect x="89.53" y="83.41" class="st0" width="51.87" height="195.07"/>
</g>
<g>
<defs>
<path id="SVGID_3_" d="M176.53,75.36c0.05-0.96,0.07-1.93,0.07-2.9c0-0.95-0.02-1.89-0.07-2.82
c-1.47-32.22-28.06-57.88-60.64-57.88S56.72,37.42,55.25,69.64c-0.04,0.94-0.07,1.88-0.07,2.82c0,1.04,0.03,2.07,0.08,3.09
c-0.02,0.5-0.08,1-0.08,1.51v99.64c0,19.06,15.59,34.65,34.65,34.65h52.14c19.06,0,34.65-15.59,34.65-34.65V77.07
C176.62,76.49,176.56,75.93,176.53,75.36"/>
</defs>
<clipPath id="SVGID_4_">
<use xlink:href="#SVGID_3_" style="overflow:visible;"/>
</clipPath>
<rect x="44.47" y="1.04" class="st1" width="142.87" height="221.04"/>
</g>
<g>
<defs>
<path id="SVGID_5_" d="M55.55,69.64c-0.04,0.93-0.06,1.87-0.06,2.82c0,1.04,0.02,2.07,0.08,3.09c-0.02,0.51-0.08,1-0.08,1.52
v99.64c0,19.05,15.59,34.64,34.64,34.64h52.14c19.06,0,34.65-15.59,34.65-34.64V77.07c0-0.58-0.06-1.14-0.09-1.71
c0.05-0.96,0.07-1.93,0.07-2.89c0-0.95-0.02-1.89-0.06-2.82c-1.47-32.22-28.06-57.88-60.64-57.88
C83.61,11.76,57.02,37.42,55.55,69.64z"/>
</defs>
<clipPath id="SVGID_6_">
<use xlink:href="#SVGID_5_" style="overflow:visible;"/>
</clipPath>
<g class="st2">
<g>
<defs>
<rect id="SVGID_7_" x="16.08" y="24.9" width="197.24" height="197.24"/>
</defs>
<clipPath id="SVGID_8_">
<use xlink:href="#SVGID_7_" style="overflow:visible;"/>
</clipPath>
<g class="st3">
<defs>
<rect id="SVGID_9_" x="9.23" y="92.99" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -54.1638 118.2926)" width="212.95" height="63.07"/>
</defs>
<clipPath id="SVGID_10_">
<use xlink:href="#SVGID_9_" style="overflow:visible;"/>
</clipPath>
<g class="st4">
<defs>
<rect id="SVGID_11_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_12_">
<use xlink:href="#SVGID_11_" style="overflow:visible;"/>
</clipPath>
<rect x="7.4" y="16.22" class="st5" width="216.62" height="216.62"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_13_" d="M55.55,69.64c-0.04,0.93-0.06,1.87-0.06,2.82c0,1.04,0.02,2.07,0.08,3.09c-0.02,0.51-0.08,1-0.08,1.52
v99.64c0,19.05,15.59,34.64,34.64,34.64h52.14c19.06,0,34.65-15.59,34.65-34.64V77.07c0-0.58-0.06-1.14-0.09-1.71
c0.05-0.96,0.07-1.93,0.07-2.89c0-0.95-0.02-1.89-0.06-2.82c-1.47-32.22-28.06-57.88-60.64-57.88
C83.61,11.76,57.02,37.42,55.55,69.64z"/>
</defs>
<clipPath id="SVGID_14_">
<use xlink:href="#SVGID_13_" style="overflow:visible;"/>
</clipPath>
<g class="st6">
<g>
<defs>
<rect id="SVGID_15_" x="-37.52" y="-28.7" width="207.96" height="207.96"/>
</defs>
<clipPath id="SVGID_16_">
<use xlink:href="#SVGID_15_" style="overflow:visible;"/>
</clipPath>
<g class="st7">
<defs>
<rect id="SVGID_17_" x="-40.95" y="35.1" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -33.3744 68.1028)" width="212.95" height="78.48"/>
</defs>
<clipPath id="SVGID_18_">
<use xlink:href="#SVGID_17_" style="overflow:visible;"/>
</clipPath>
<g class="st8">
<defs>
<rect id="SVGID_19_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_20_">
<use xlink:href="#SVGID_19_" style="overflow:visible;"/>
</clipPath>
<rect x="-48.24" y="-39.42" class="st9" width="227.51" height="227.51"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_21_" d="M55.55,69.64c-0.04,0.93-0.06,1.87-0.06,2.82c0,1.04,0.02,2.07,0.08,3.09c-0.02,0.51-0.08,1-0.08,1.52
v99.64c0,19.05,15.59,34.64,34.64,34.64h52.14c19.06,0,34.65-15.59,34.65-34.64V77.07c0-0.58-0.06-1.14-0.09-1.71
c0.05-0.96,0.07-1.93,0.07-2.89c0-0.95-0.02-1.89-0.06-2.82c-1.47-32.22-28.06-57.88-60.64-57.88
C83.61,11.76,57.02,37.42,55.55,69.64z"/>
</defs>
<clipPath id="SVGID_22_">
<use xlink:href="#SVGID_21_" style="overflow:visible;"/>
</clipPath>
<g class="st10">
<g>
<defs>
<rect id="SVGID_23_" x="61.1" y="69.92" width="197.24" height="197.24"/>
</defs>
<clipPath id="SVGID_24_">
<use xlink:href="#SVGID_23_" style="overflow:visible;"/>
</clipPath>
<g class="st11">
<defs>
<rect id="SVGID_25_" x="53.98" y="137.74" transform="matrix(0.7071 -0.7071 0.7071 0.7071 -72.6974 163.0359)" width="212.95" height="63.07"/>
</defs>
<clipPath id="SVGID_26_">
<use xlink:href="#SVGID_25_" style="overflow:visible;"/>
</clipPath>
<g class="st12">
<defs>
<rect id="SVGID_27_" x="54.67" y="9.89" width="124.35" height="201.53"/>
</defs>
<clipPath id="SVGID_28_">
<use xlink:href="#SVGID_27_" style="overflow:visible;"/>
</clipPath>
<rect x="52.14" y="60.96" class="st13" width="216.62" height="216.62"/>
</g>
</g>
</g>
</g>
</g>
<g>
<defs>
<path id="SVGID_29_" d="M104.38,211.52l26.4,26.39V211.3C130.78,211.3,103.72,211.52,104.38,211.52"/>
</defs>
<clipPath id="SVGID_30_">
<use xlink:href="#SVGID_29_" style="overflow:visible;"/>
</clipPath>
<rect x="93.65" y="200.58" class="st14" width="47.85" height="48.06"/>
</g>
<g>
<defs>
<path id="SVGID_31_" d="M307.52,195.1c-38.8,0-70.21-31.6-70.21-70.41c0-38.6,31.4-70.21,70.21-70.21c20.2,0,39.6,8.8,52.81,24
c4.2,5,3.8,12.2-1,16.4c-4.8,4.4-12.2,3.8-16.4-1c-9-10.2-21.8-16-35.4-16c-25.8,0-47.01,21-47.01,46.8
c0,26,21.2,47.01,47.01,47.01c13.6,0,26.4-5.8,35.4-16c4.2-4.8,11.6-5.4,16.4-1c4.8,4.2,5.2,11.4,1,16.4
C347.12,186.3,327.72,195.1,307.52,195.1"/>
</defs>
<use xlink:href="#SVGID_31_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_32_">
<use xlink:href="#SVGID_31_" style="overflow:visible;"/>
</clipPath>
<rect x="226.59" y="43.77" class="st15" width="147.35" height="162.05"/>
</g>
<g>
<defs>
<path id="SVGID_33_" d="M438.53,98.89c0,6.4-5.2,11.6-11.8,11.6c-12.8,0-22.4,10.4-22.4,24.6v48.41c0,6.4-5.2,11.6-11.6,11.6
c-6.4,0-11.6-5.2-11.6-11.6V96.49c0-6.4,5.2-11.6,11.6-11.6c5.4,0,9.8,3.6,11.2,8.6c6.8-4,14.6-6.2,22.8-6.2
C433.33,87.29,438.53,92.49,438.53,98.89"/>
</defs>
<use xlink:href="#SVGID_33_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_34_">
<use xlink:href="#SVGID_33_" style="overflow:visible;"/>
</clipPath>
<rect x="370.4" y="74.17" class="st16" width="78.84" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_35_" d="M497.53,195.7c-30.4,0-55-24.8-55-55c0-30.4,24.6-55.21,55-55.21c30.4,0,55.21,24.8,55.21,55.21
C552.74,170.9,527.94,195.7,497.53,195.7 M497.53,108.69c-17.6,0-31.8,14.4-31.8,32c0,17.4,14.2,31.8,31.8,31.8
c17.6,0,31.8-14.4,31.8-31.8C529.34,123.09,515.14,108.69,497.53,108.69"/>
</defs>
<use xlink:href="#SVGID_35_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_36_">
<use xlink:href="#SVGID_35_" style="overflow:visible;"/>
</clipPath>
<rect x="431.81" y="74.77" class="st17" width="131.65" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_37_" d="M571.94,174.9c-2.8-5.8-0.2-12.8,5.6-15.4c6-2.8,12.8-0.2,15.4,5.6c1.6,3.2,6,6.8,13.8,6.8
c10.8,0,14.6-6.6,14.6-11c0-6-1.6-7.8-17.2-11.8c-7-1.6-14.2-3.4-20.4-7.4c-8.4-5.6-13-14-13-24.4c0-8.2,3.6-16.4,9.8-22.4
c6.6-6.4,15.8-10,26.2-10c14.8,0,27.41,7.2,32.8,19c2.8,5.8,0.2,12.6-5.6,15.4c-5.8,2.8-12.8,0.2-15.4-5.6
c-1.2-2.6-5-5.6-11.8-5.6c-9.2,0-12.6,5.8-12.6,9.2c0,4,0.8,5.6,15.6,9.4c13,3.2,34.8,8.6,34.8,34.2c0,8.6-3.6,17.2-10,23.6
c-5,4.8-13.8,10.6-27.8,10.6C590.94,195.1,577.54,187.3,571.94,174.9"/>
</defs>
<use xlink:href="#SVGID_37_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_38_">
<use xlink:href="#SVGID_37_" style="overflow:visible;"/>
</clipPath>
<rect x="560.02" y="74.17" class="st18" width="95.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_39_" d="M663.75,174.9c-2.8-5.8-0.2-12.8,5.6-15.4c6-2.8,12.8-0.2,15.4,5.6c1.6,3.2,6,6.8,13.8,6.8
c10.8,0,14.6-6.6,14.6-11c0-6-1.6-7.8-17.2-11.8c-7-1.6-14.2-3.4-20.4-7.4c-8.4-5.6-13-14-13-24.4c0-8.2,3.6-16.4,9.8-22.4
c6.6-6.4,15.8-10,26.2-10c14.81,0,27.41,7.2,32.8,19c2.8,5.8,0.2,12.6-5.6,15.4c-5.8,2.8-12.8,0.2-15.4-5.6
c-1.2-2.6-5-5.6-11.8-5.6c-9.2,0-12.6,5.8-12.6,9.2c0,4,0.8,5.6,15.6,9.4c13,3.2,34.8,8.6,34.8,34.2c0,8.6-3.6,17.2-10,23.6
c-5,4.8-13.8,10.6-27.8,10.6C682.75,195.1,669.35,187.3,663.75,174.9"/>
</defs>
<use xlink:href="#SVGID_39_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_40_">
<use xlink:href="#SVGID_39_" style="overflow:visible;"/>
</clipPath>
<rect x="651.83" y="74.17" class="st19" width="95.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_41_" d="M859.17,139.9c0,14.8-5,28.4-14.4,38.61c-9.8,10.6-23.2,16.6-38,16.6c-10.6,0-20.6-3.2-29-8.8v47.2
c0,6.4-5.4,11.6-11.8,11.6c-6.4,0-11.6-5.2-11.6-11.6V96.49c0-6.4,5.2-11.6,11.6-11.6c5.4,0,10.2,3.8,11.4,9
c8.6-5.8,18.8-9,29.4-9c14.8,0,28.2,5.8,38,16.4C854.17,111.49,859.17,125.29,859.17,139.9 M835.96,139.9
c0-18.4-12.2-31.8-29.2-31.8c-16.8,0-29,13.4-29,31.8c0,18.4,12.2,31.8,29,31.8C823.77,171.7,835.96,158.3,835.96,139.9"/>
</defs>
<use xlink:href="#SVGID_41_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_42_">
<use xlink:href="#SVGID_41_" style="overflow:visible;"/>
</clipPath>
<rect x="743.64" y="74.17" class="st20" width="126.25" height="181.65"/>
</g>
<g>
<defs>
<path id="SVGID_43_" d="M889.77,195.1c-6.4,0-11.6-5.2-11.6-11.6V66.29c0-6.4,5.2-11.6,11.6-11.6c6.4,0,11.8,5.2,11.8,11.6V183.5
C901.57,189.9,896.17,195.1,889.77,195.1"/>
</defs>
<use xlink:href="#SVGID_43_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_44_">
<use xlink:href="#SVGID_43_" style="overflow:visible;"/>
</clipPath>
<rect x="867.45" y="43.97" class="st21" width="44.84" height="161.85"/>
</g>
<g>
<defs>
<path id="SVGID_45_" d="M1025.38,96.49v87.01c0,6.4-5.2,11.6-11.6,11.6c-5.6,0-10.2-3.8-11.4-9c-8.4,5.8-18.6,9-29.4,9
c-14.8,0-28.2-5.8-38.01-16.6c-9.2-10-14.4-23.8-14.4-38.4c0-14.8,5.2-28.6,14.4-38.61c9.8-10.8,23.21-16.6,38.01-16.6
c10.8,0,21,3.2,29.4,9c1.2-5.2,5.8-9,11.4-9C1020.18,84.89,1025.38,90.09,1025.38,96.49 M1002.18,140.1c0-18.6-12.4-32-29.2-32
c-17,0-29.2,13.4-29.2,32c0,18.4,12.2,31.8,29.2,31.8C989.78,171.9,1002.18,158.5,1002.18,140.1"/>
</defs>
<use xlink:href="#SVGID_45_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_46_">
<use xlink:href="#SVGID_45_" style="overflow:visible;"/>
</clipPath>
<rect x="909.85" y="74.17" class="st22" width="126.25" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_47_" d="M1136.79,132.7v50.8c0,6.4-5.2,11.6-11.8,11.6c-6.4,0-11.6-5.2-11.6-11.6v-50.8
c0-11.8-6.6-24.6-21.4-24.6c-13.4,0-23.4,10.6-23.4,24.6v0.8v0.8v49.2c0,6.4-5.2,11.6-11.6,11.6c-6.4,0-11.6-5.2-11.6-11.6v-49.4
v-1.4V96.49c0-6.4,5.2-11.6,11.6-11.6c4.8,0,8.8,2.8,10.6,6.8c7-4.4,15.4-6.8,24.4-6.8
C1117.39,84.89,1136.79,105.49,1136.79,132.7"/>
</defs>
<use xlink:href="#SVGID_47_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_48_">
<use xlink:href="#SVGID_47_" style="overflow:visible;"/>
</clipPath>
<rect x="1034.66" y="74.17" class="st23" width="112.85" height="131.65"/>
</g>
<g>
<defs>
<path id="SVGID_49_" d="M1207.2,196.1c-14.8,0-28.2-6.4-38.01-17.2c-9.4-10-14.4-23.81-14.4-38.4c0-31.61,22.2-55.21,51.4-55.21
c29.41,0,50.8,23.2,50.8,55.21c0,6.4-5.2,11.6-11.8,11.6h-65.41c4,12.2,14.4,20.8,27.4,20.8c7.83,0,14.48-1.65,19.23-6.21
c1.44-1.38,2.7-3.03,3.77-4.99c3.4-5.6,10.6-7.2,16-4c5.6,3.4,7.2,10.6,4,16C1241.2,189.9,1225.6,196.1,1207.2,196.1
M1179.59,128.7h52.61c-4-13.8-15-20.2-26-20.2C1195.4,108.49,1183.79,114.89,1179.59,128.7"/>
</defs>
<use xlink:href="#SVGID_49_" style="overflow:visible;fill-rule:evenodd;clip-rule:evenodd;fill:#004680;"/>
<clipPath id="SVGID_50_">
<use xlink:href="#SVGID_49_" style="overflow:visible;"/>
</clipPath>
<rect x="1144.07" y="74.57" class="st24" width="123.65" height="132.25"/>
</g>
</g>
</svg>

Before: 14 KiB


@ -1,26 +0,0 @@
---
title: Configure
toc: true
weight: 302
indent: true
---
# Configure Your Cloud Provider Account
In order for Crossplane to be able to manage resources in a specific cloud
provider, you will need to create an account for Crossplane to use. Use the
links below for cloud-specific instructions to create an account that can be
used throughout the guides:
* [Google Cloud Platform (GCP) Service Account]
* [Microsoft Azure Service Principal]
* [Amazon Web Services (AWS) IAM User]
Once you have configured your cloud provider account, you can get started
provisioning resources!
<!-- Named Links -->
[Google Cloud Platform (GCP) Service Account]: ../cloud-providers/gcp/gcp-provider.md
[Microsoft Azure Service Principal]: ../cloud-providers/azure/azure-provider.md
[Amazon Web Services (AWS) IAM User]: ../cloud-providers/aws/aws-provider.md


@ -1,347 +0,0 @@
---
title: Install
toc: true
weight: 301
indent: true
---
# Install Crossplane
Crossplane can be easily installed into any existing Kubernetes cluster using
the regularly published Helm chart. The Helm chart contains all the custom
resources and controllers needed to deploy and configure Crossplane.
## Pre-requisites
* [Kubernetes cluster]
* For example [Minikube], minimum version `v0.28+`
* [Helm], minimum version `v2.12.0+`.
* For Helm 2, make sure Tiller is initialized with sufficient permissions to
work on `crossplane-system` namespace.
## Installation
Helm charts for Crossplane are currently published to the `alpha` and `master`
channels. In the future, `beta` and `stable` will also be available.
### Alpha
The alpha channel is the most recent release of Crossplane that is considered
ready for testing by the community.
Install with Helm 2:
```console
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane
```
Install with Helm 3:
If your Kubernetes version is lower than 1.15 and you'd like to install
Crossplane via Helm 3, you'll need Helm v3.1.0+ that has the flag
`--disable-openapi-validation`.
```console
kubectl create namespace crossplane-system
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
# Kubernetes 1.15 and newer versions
helm install crossplane --namespace crossplane-system crossplane-alpha/crossplane
# Kubernetes 1.14 and older versions
helm install crossplane --namespace crossplane-system crossplane-alpha/crossplane --disable-openapi-validation
```
### Master
The `master` channel contains the latest commits, with all automated tests
passing. `master` is subject to instability and incompatibility, and features
may be added or removed without much prior notice. It is recommended to use one
of the more stable channels, but if you want the absolute newest Crossplane
installed, then you can use the `master` channel.
To install the Helm chart from master, you will need to pass the specific
version returned by the `search` command:
Install with Helm 2:
```console
helm repo add crossplane-master https://charts.crossplane.io/master/
helm search crossplane-master
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version <version>
```
For example:
```console
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version 0.0.0-249.637ccf9
```
Install with Helm 3:
If your Kubernetes version is lower than 1.15 and you'd like to install
Crossplane via Helm 3, you'll need Helm v3.1.0+.
```console
kubectl create namespace crossplane-system
helm repo add crossplane-master https://charts.crossplane.io/master/
helm search repo crossplane-master --devel
# Kubernetes 1.15 and newer versions
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --devel
# Kubernetes 1.14 and older versions
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --devel --disable-openapi-validation
```
## Installing Infrastructure Providers
You can add additional functionality to Crossplane's control plane by installing
`providers`. For example, each supported cloud provider has its own
corresponding Crossplane `provider` that contains all the functionality for that
particular cloud. After a cloud provider's infrastructure `provider` is
installed, you will be able to provision and manage resources within that cloud
from Crossplane.
### Installation with Helm
You can include deployment of additional infrastructure providers into your helm
installation by setting `clusterPackages.<provider-name>.deploy` to `true`.
For example, the following will install `master` version of the GCP package.
Using Helm 2:
```console
helm install --name crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --set clusterPackages.gcp.deploy=true --set clusterPackages.gcp.version=master
```
Using Helm 3:
```console
kubectl create namespace crossplane-system
helm install crossplane --namespace crossplane-system crossplane-master/crossplane --version <version> --set clusterPackages.gcp.deploy=true --set clusterPackages.gcp.version=master --devel
```
See [helm configuration parameters](#configuration) for supported packages and
parameters.
### Manual Installation
After Crossplane has been installed, it is possible to extend Crossplane's
functionality by installing Crossplane providers.
#### GCP Provider
To get started with Google Cloud Platform (GCP), create a file named
`provider-gcp.yaml` with the following content:
```yaml
apiVersion: packages.crossplane.io/v1alpha1
kind: ClusterPackageInstall
metadata:
name: provider-gcp
namespace: crossplane-system
spec:
package: "crossplane/provider-gcp:v0.10.0"
```
Then you can install the GCP provider into Crossplane in the `gcp` namespace
with the following command:
```console
kubectl apply -f provider-gcp.yaml
```
#### AWS Provider
To get started with Amazon Web Services (AWS), create a file named
`provider-aws.yaml` with the following content:
```yaml
apiVersion: packages.crossplane.io/v1alpha1
kind: ClusterPackageInstall
metadata:
name: provider-aws
namespace: crossplane-system
spec:
package: "crossplane/provider-aws:v0.10.0"
```
Then you can install the AWS provider into Crossplane in the `aws` namespace
with the following command:
```console
kubectl apply -f provider-aws.yaml
```
#### Azure Provider
To get started with Microsoft Azure, create a file named `provider-azure.yaml`
with the following content:
```yaml
apiVersion: packages.crossplane.io/v1alpha1
kind: ClusterPackageInstall
metadata:
name: provider-azure
namespace: crossplane-system
spec:
package: "crossplane/provider-azure:v0.10.0"
```
Then you can install the Azure provider into Crossplane in the `azure` namespace
with the following command:
```console
kubectl apply -f provider-azure.yaml
```
#### Rook Provider
To get started with Rook, create a file named `provider-rook.yaml` with the
following content:
```yaml
apiVersion: packages.crossplane.io/v1alpha1
kind: ClusterPackageInstall
metadata:
name: provider-rook
namespace: crossplane-system
spec:
package: "crossplane/provider-rook:v0.7.0"
```
Then you can install the Rook provider into Crossplane in the `rook` namespace
with the following command:
```console
kubectl apply -f provider-rook.yaml
```
### Uninstalling Infrastructure Providers
An infrastructure provider can be uninstalled simply by deleting its provider
resources from the cluster with a command similar to those shown below. **Note**
that this will also **delete** any resources that Crossplane has provisioned in
the cloud provider if their `ReclaimPolicy` is set to `Delete`.
After you have ensured that you are completely done with all your cloud provider
resources, you can then run one of the commands below, depending on which cloud
provider you are removing, to remove its provider from Crossplane:
#### Uninstalling GCP
```console
kubectl delete -f provider-gcp.yaml
```
#### Uninstalling AWS
```console
kubectl delete -f provider-aws.yaml
```
#### Uninstalling Azure
```console
kubectl delete -f provider-azure.yaml
```
#### Uninstalling Rook
```console
kubectl delete -f provider-rook.yaml
```
## Uninstalling the Chart
To uninstall/delete the `crossplane` deployment:
```console
helm delete --purge crossplane
```
That command removes all Kubernetes components associated with Crossplane,
including all the custom resources and controllers.
## Configuration
The following table lists the configurable parameters of the Crossplane chart
and their default values.
| Parameter | Description | Default |
| -------------------------------- | --------------------------------------------------------------- | ------------------------------------------------------ |
| `image.repository` | Image | `crossplane/crossplane` |
| `image.tag` | Image tag | `master` |
| `image.pullPolicy` | Image pull policy | `Always` |
| `imagePullSecrets` | Names of image pull secrets to use | `dockerhub` |
| `replicas` | The number of replicas to run for the Crossplane operator | `1` |
| `deploymentStrategy` | The deployment strategy for the Crossplane operator | `RollingUpdate` |
| `clusterPackages.aws.deploy` | Deploy AWS package | `false` |
| `clusterPackages.aws.version` | AWS provider version to deploy | `<latest released version>` |
| `clusterPackages.gcp.deploy` | Deploy GCP package | `false` |
| `clusterPackages.gcp.version` | GCP provider version to deploy | `<latest released version>` |
| `clusterPackages.azure.deploy` | Deploy Azure package | `false` |
| `clusterPackages.azure.version` | Azure provider version to deploy | `<latest released version>` |
| `clusterPackages.rook.deploy` | Deploy Rook package | `false` |
| `clusterPackages.rook.version` | Rook provider version to deploy | `<latest released version>` |
| `personas.deploy` | Install roles and bindings for Crossplane user personas | `true` |
| `templateStacks.enabled` | Enable experimental template stacks support | `true` |
| `templateStacks.controllerImage` | Template Stack controller image | `crossplane/templating-controller:v0.2.1` |
| `priorityClassName` | Priority class name for crossplane and package manager pods | `""` |
| `resourcesCrossplane.limits.cpu` | CPU resource limits for Crossplane | `100m` |
| `resourcesCrossplane.limits.memory` | Memory resource limits for Crossplane | `512Mi` |
| `resourcesCrossplane.requests.cpu` | CPU resource requests for Crossplane | `100m` |
| `resourcesCrossplane.requests.memory` | Memory resource requests for Crossplane | `256Mi` |
| `resourcesPackageManager.limits.cpu` | CPU resource limits for PackageManager | `100m` |
| `resourcesPackageManager.limits.memory` | Memory resource limits for PackageManager | `512Mi` |
| `resourcesPackageManager.requests.cpu` | CPU resource requests for PackageManager | `100m` |
| `resourcesPackageManager.requests.memory` | Memory resource requests for PackageManager | `256Mi` |
| `forceImagePullPolicy` | Force the named ImagePullPolicy on Package install and containers | `""` |
| `insecureAllowAllApigroups` | Enable core Kubernetes API group permissions for Packages. When enabled, Packages may declare a dependency on core Kubernetes API types. | `false` |
| `insecurePassFullDeployment` | Enable packages to pass their full deployment, including security context. When omitted, Package controller deployments will have security context removed and all containers will have `privileged` and `allowPrivilegeEscalation` set to `false`, and `runAsNonRoot` set to `true`. | `false` |
| `insecureInstallJob` | Enable package install jobs to run as root. When omitted, Package install jobs will have security context removed and all containers will have `privileged` and `allowPrivilegeEscalation` set to `false`, and `runAsNonRoot` set to `true`. | `false` |
### Command Line
You can pass settings with Helm command-line parameters. Specify each
parameter using the `--set key=value[,key=value]` argument to `helm install`.
For example, the following command will install Crossplane with an image pull
policy of `IfNotPresent`.
```console
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane --set image.pullPolicy=IfNotPresent
```
### Settings File
Alternatively, a YAML file that specifies the values for the above parameters
(`values.yaml`) can be provided while installing the chart.
```console
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane -f values.yaml
```
Here are sample settings to get you started.
```yaml
replicas: 1
deploymentStrategy: RollingUpdate
image:
repository: crossplane/crossplane
tag: master
pullPolicy: Always
imagePullSecrets:
- dockerhub
```
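The resource parameters from the table above can be set the same way. For example, this `values.yaml` fragment pins Crossplane's requests and limits explicitly to the chart defaults listed in the table:

```yaml
# Explicitly set Crossplane's resource requests and limits
# (values shown are the chart defaults from the table above).
resourcesCrossplane:
  limits:
    cpu: 100m
    memory: 512Mi
  requests:
    cpu: 100m
    memory: 256Mi
```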
<!-- Named Links -->
[Kubernetes cluster]: https://kubernetes.io/docs/setup/
[Minikube]: https://kubernetes.io/docs/tasks/tools/install-minikube/
[Helm]: https://docs.helm.sh/using_helm/
@@ -1,39 +0,0 @@
---
title: Learn More
toc: true
weight: 304
indent: true
---
# Learn More
If you have any questions, please drop us a note on [Crossplane Slack][join-crossplane-slack] or [contact us][contact-us]!
***Learn more about using Crossplane***
- [GitLab deploys into multiple clouds from kubectl using Crossplane](https://about.gitlab.com/2019/05/20/gitlab-first-deployed-kubernetes-api-to-multiple-clouds/)
- [CNCF Talks & Community Presentations](https://www.youtube.com/playlist?list=PL510POnNVaaZJj9OG6PbgsZvgYbhwJRyE)
- [Software Engineering Daily - Intro Podcast](https://softwareengineeringdaily.com/2019/01/02/crossplane-multicloud-control-plane-with-bassam-tabbara/)
- [Crossplane Architecture](https://docs.google.com/document/d/1whncqdUeU2cATGEJhHvzXWC9xdK29Er45NJeoemxebo/edit?usp=sharing)
- [Latest Design Docs](https://github.com/crossplane/crossplane/tree/master/design)
- [Roadmap](https://github.com/crossplane/crossplane/blob/master/ROADMAP.md)
***Writing Kubernetes controllers to extend Crossplane***
- [Keep the Space Shuttle Flying: Writing Robust Operators](https://www.youtube.com/watch?v=uf97lOApOv8)
- [Best practices for building Kubernetes Operators](https://cloud.google.com/blog/products/containers-kubernetes/best-practices-for-building-kubernetes-operators-and-stateful-apps)
- [Programming Kubernetes Book](https://www.oreilly.com/library/view/programming-kubernetes/9781492047094/)
- [Crossplane Reconciler Patterns](https://github.com/crossplane/crossplane/blob/master/design/design-doc-reconciler-patterns.md)
- [Contributor Guide](https://github.com/crossplane/crossplane/blob/master/CONTRIBUTING.md)
***Join the growing Crossplane community and get involved!***
- Join our [Community Slack](https://slack.crossplane.io/)!
- Submit an issue on [GitHub](https://github.com/crossplane/crossplane)
- Attend our bi-weekly [Community Meeting](https://github.com/crossplane/crossplane#community-meeting)
- Join our bi-weekly live stream: [The Binding Status](https://github.com/crossplane/tbs)
- Subscribe to our [YouTube Channel](https://www.youtube.com/channel/UC19FgzMBMqBro361HbE46Fw)
- Drop us a note on Twitter: [@crossplane_io](https://twitter.com/crossplane_io)
- Email us: [info@crossplane.io](mailto:info@crossplane.io)
<!-- Named links -->
[join-crossplane-slack]: https://slack.crossplane.io
[contact-us]: https://github.com/crossplane/crossplane#contact
@@ -1,21 +0,0 @@
---
title: Reference
toc: true
weight: 300
---
# Overview
The reference documentation includes answers to frequently asked questions, information about similar projects, and links to resources that can help you learn more about Crossplane and Kubernetes. If you have additional information that you think would be valuable for the community, please feel free to [open a pull request]() and add it.
1. [Install]
1. [Configure]
1. [Troubleshoot]
1. [Learn More]
<!-- Named Links -->
[Install]: install.md
[Configure]: configure.md
[Troubleshoot]: troubleshoot.md
[Learn More]: learn_more.md
@@ -1,262 +0,0 @@
---
title: Troubleshoot
toc: true
weight: 303
indent: true
---
# Troubleshooting
* [Using the trace command]
* [Resource Status and Conditions]
* [Crossplane Logs]
* [Pausing Crossplane]
* [Deleting a Resource Hangs]
* [Host-Aware Resource Debugging]
## Using the trace command
The [Crossplane CLI] trace command provides a holistic view of a particular
object and its related objects to ease the debugging and troubleshooting
process. It finds the relevant Crossplane resources for a given object and
provides detailed information as well as an overview indicating what could be
wrong.
Usage:
```
kubectl crossplane trace TYPE[.GROUP] NAME [-n| --namespace NAMESPACE] [--kubeconfig KUBECONFIG] [-o| --outputFormat dot]
```
Examples:
```
# Trace a KubernetesApplication
kubectl crossplane trace KubernetesApplication wordpress-app-83f04457-0b1b-4532-9691-f55cf6c0da6e -n app-project1-dev
# Trace a MySQLInstance
kubectl crossplane trace MySQLInstance wordpress-mysql-83f04457-0b1b-4532-9691-f55cf6c0da6e -n app-project1-dev
```
For more information, see [the trace command documentation].
## Resource Status and Conditions
Most Crossplane resources have a `status` section that can represent the current
state of that particular resource. Running `kubectl describe` against a
Crossplane resource will frequently give insightful information about its
condition. For example, to determine the status of a MySQLInstance resource
claim, run:
```shell
kubectl -n app-project1-dev describe mysqlinstance mysql-claim
```
This should produce output that includes:
```console
Status:
Conditions:
Last Transition Time: 2019-09-16T13:46:42Z
Reason: Managed claim is waiting for managed resource to become bindable
Status: False
Type: Ready
Last Transition Time: 2019-09-16T13:46:42Z
Reason: Successfully reconciled managed resource
Status: True
Type: Synced
```
Most Crossplane resources set exactly two condition types: `Ready` and `Synced`.
`Ready` represents the availability of the resource itself - whether it is
creating, deleting, available, unavailable, binding, etc. `Synced` represents
the success of the most recent attempt to 'reconcile' the _desired_ state of the
resource with its _actual_ state. The `Synced` condition is the first place you
should look when a Crossplane resource is not behaving as expected.
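To check just these conditions without the full `describe` output, a JSONPath query works. This is a sketch that assumes the same `mysql-claim` resource from the example above:

```shell
# Print only the status of the Synced condition for the claim above.
kubectl -n app-project1-dev get mysqlinstance mysql-claim \
  -o jsonpath='{.status.conditions[?(@.type=="Synced")].status}'
```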
## Crossplane Logs
The next place to look to get more information or investigate a failure would be
in the Crossplane pod logs, which should be running in the `crossplane-system`
namespace. To get the current Crossplane logs, run the following:
```shell
kubectl -n crossplane-system logs -lapp=crossplane
```
Remember that much of Crossplane's functionality is provided by Stacks. You can
use `kubectl logs` to view Stack logs too, though Stacks may not run in the
`crossplane-system` namespace.
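If the Crossplane pod is crash-looping, the logs of the previously terminated container are often more informative than the current ones. A sketch using the same label selector as above:

```shell
# Logs from the previously terminated container instance.
kubectl -n crossplane-system logs -lapp=crossplane --previous

# Live tailing of the most recent log lines.
kubectl -n crossplane-system logs -lapp=crossplane -f --tail=100
```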
## Pausing Crossplane
Sometimes, for example when you encounter a bug, it can be useful to pause
Crossplane to stop it from actively attempting to manage your resources. To
pause Crossplane without deleting all of its resources, scale down its
deployment:
```bash
kubectl -n crossplane-system scale --replicas=0 deployment/crossplane
```
Once you have been able to rectify the problem or smooth things out, you can
unpause Crossplane simply by scaling its deployment back up:
```bash
kubectl -n crossplane-system scale --replicas=1 deployment/crossplane
```
Remember that much of Crossplane's functionality is provided by Stacks. You can
use `kubectl scale` to pause Stack pods too, though Stacks may not run in the
`crossplane-system` namespace.
## Deleting a Resource Hangs
The resources that Crossplane manages will automatically be cleaned up so as not
to leave anything running behind. This is accomplished by using finalizers, but
in certain scenarios the finalizer can prevent the Kubernetes object from
getting deleted.
To deal with this, we essentially want to patch the object to remove its
finalizer, which will then allow it to be deleted completely. Note that this
won't necessarily delete the external resource that Crossplane was managing, so
you will want to go to your cloud provider's console and look there for any
lingering resources to clean up.
In general, a finalizer can be removed from an object with this command:
```console
kubectl patch <resource-type> <resource-name> -p '{"metadata":{"finalizers": []}}' --type=merge
```
For example, for a Workload object (`workloads.compute.crossplane.io`) named
`test-workload`, you can remove its finalizer with:
```console
kubectl patch workloads.compute.crossplane.io test-workload -p '{"metadata":{"finalizers": []}}' --type=merge
```
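Before patching, it can help to confirm which finalizers are actually blocking deletion. A sketch using the same hypothetical Workload object as above:

```shell
# List the finalizers still present on the stuck object.
kubectl get workloads.compute.crossplane.io test-workload \
  -o jsonpath='{.metadata.finalizers}'
```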
## Host-Aware Resource Debugging
Stack resources (including the Stack, service accounts, deployments, and jobs)
are usually easy to identify by name. These resource names are based on the name
used in the StackInstall or Stack resource.
### Resource Location
In a host-aware configuration, these resources may be divided between the host
and the tenant.
The host, which runs the Stack controller, does not need (or get) the CRDs used
by the Stack controller. The Stack controller connects to the tenant Kubernetes
API to watch the owned types of the Stack (which is why the CRDs are only
installed on the Tenant).
Kind | Name | Place
---- | ----- | ------
pod | crossplane | Host (ns: tenantFoo-system)
pod | stack-manager | Host (ns: tenantFoo-system)
job | (stack installjob) | Host (ns: tenantFoo-controllers)
pod | (stack controller) | Host (ns: tenantFoo-controllers)
Kind | Name | Place
---- | ----- | ------
crd | Stack, SI, CSI | Tenant
Stack | wordpress | Tenant
StackInstall | wordpress | Tenant
crd | KubernetesEngine, MysqlInstance, ... | Tenant
crd | GKEInstance, CloudSQLInstance, ... | Tenant
(rbac) | (stack controller) | Tenant
(rbac) | (workspace owner, crossplane-admin) | Tenant
(rbac) | (stack:namespace:1.2.3:admin) | Tenant
crd | WordpressInstance | Tenant
WordpressInstance | wp-instance | Tenant
KubernetesApplication | wp-instance | Tenant
Kind | Name | Place
---- | ----- | ------
pod | wp-instance (from KubernetesApplication) | New Cluster
### Name Truncation
In some cases, the full name of a Stack resource, which could be up to 253
characters long, cannot be represented in the created resources. For example,
job and deployment names may not exceed 63 characters because these names are
turned into resource label values, which impose a 63 character limit.
Stack-created resources whose names would otherwise not be permitted in the
Kubernetes API will be truncated with a unique suffix.
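A sketch of this kind of truncation in shell (the 57-character prefix and hash-based suffix here are illustrative assumptions, not the Stack Manager's exact algorithm):

```shell
# Truncate a name to fit the 63-character label-value limit by keeping
# a prefix and appending a short suffix derived from the full name.
truncate_name() {
  full="$1"
  limit=63
  if [ "${#full}" -le "$limit" ]; then
    printf '%s\n' "$full"
    return
  fi
  # 57 prefix characters + "-" + 5-character suffix = 63 characters.
  suffix=$(printf '%s' "$full" | sha1sum | cut -c1-5)
  printf '%s-%s\n' "$(printf '%s' "$full" | cut -c1-57)" "$suffix"
}

long="this-is-just-a-really-long-namespace-name-at-the-character-max.stack-with-a-really-long-resource-name"
truncate_name "$long"
```

Short names pass through unchanged; only names over the limit are shortened, which matches the behavior described above.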
When running the Stack Manager in host-aware mode, tenant stack resources
created in the host controller namespace generally reuse the Stack names:
"{tenant namespace}.{tenant name}". To satisfy the name length restrictions,
these resources may be truncated at the namespace segment, the entire name, or
both. In these cases, resource labels, owner references, and annotations
should be consulted to identify the responsible Stack.
* [Relationship Labels]
* [Owner References]
* Annotations: `tenant.crossplane.io/{singular}-name` and
`tenant.crossplane.io/{singular}-namespace` (_singular_ may be `stackinstall`,
`clusterstackinstall` or `stack`)
#### Example
Long resource names may be present on the tenant.
```console
$ name=stack-with-a-really-long-resource-name-so-long-that-it-will-be-truncated
$ ns=this-is-just-a-really-long-namespace-name-at-the-character-max
$ kubectl create ns $ns
$ kubectl crossplane stack install --namespace $ns crossplane/sample-stack-wordpress:0.1.1 $name
```
When used as host resource names, the stack namespace and stack name are
concatenated to form host names as well as label values. These resource names
and label values must be truncated to fit the 63 character limit on label
values.
```console
$ kubectl --context=crossplane-host -n tenant-controllers get job -o yaml
apiVersion: v1
items:
- apiVersion: batch/v1
kind: Job
metadata:
annotations:
tenant.crossplane.io/stackinstall-name: stack-with-a-really-long-resource-name-so-long-that-it-will-be-truncated
tenant.crossplane.io/stackinstall-namespace: this-is-just-a-really-long-namespace-name-at-the-character-max
creationTimestamp: "2020-03-20T17:06:25Z"
labels:
core.crossplane.io/parent-group: stacks.crossplane.io
core.crossplane.io/parent-kind: StackInstall
core.crossplane.io/parent-name: stack-with-a-really-long-resource-name-so-long-that-it-wi-alqdw
core.crossplane.io/parent-namespace: this-is-just-a-really-long-namespace-name-at-the-character-max
core.crossplane.io/parent-uid: 596705e4-a28e-47c9-a907-d2732f07a85e
core.crossplane.io/parent-version: v1alpha1
name: this-is-just-a-really-long-namespace-name-at-the-characte-egoav
namespace: tenant-controllers
spec:
backoffLimit: 0
completions: 1
parallelism: 1
selector:
matchLabels:
controller-uid: 8f290bf2-8c91-494a-a76b-27c2ccb9e0a8
template:
metadata:
creationTimestamp: null
labels:
controller-uid: 8f290bf2-8c91-494a-a76b-27c2ccb9e0a8
job-name: this-is-just-a-really-long-namespace-name-at-the-characte-egoav
...
```
<!-- Named Links -->
[Using the trace command]: #using-the-trace-command
[Resource Status and Conditions]: #resource-status-and-conditions
[Crossplane Logs]: #crossplane-logs
[Pausing Crossplane]: #pausing-crossplane
[Deleting a Resource Hangs]: #deleting-a-resource-hangs
[Host-Aware Resource Debugging]: #host-aware-resource-debugging
[Crossplane CLI]: https://github.com/crossplane/crossplane-cli
[the trace command documentation]: https://github.com/crossplane/crossplane-cli/tree/master/docs/trace-command.md
[Relationship Labels]: https://github.com/crossplane/crossplane/blob/master/design/one-pager-stack-relationship-labels.md
[Owner References]: https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/#owners-and-dependents
@@ -1,485 +0,0 @@
# Release Process
This document is meant to be a complete end-to-end guide for how to release new
versions of software for Crossplane and its related projects.
## tl;dr Process Overview
All the details are available in the sections below, but we'll start this guide
with a very high level sequential overview for how to run the release process.
1. **feature freeze**: Merge all completed features into master branches of all
repos to begin "feature freeze" period
1. **API docs/user guides**: Regenerate API docs and update all user guides with
current content for scenarios included in the release
1. **release crossplane-runtime**: Tag and release a new version of
crossplane-runtime using the GitHub UI.
1. **pin crossplane dependencies**: Update the go modules of core crossplane in
master to depend on the newly released version of crossplane-runtime.
1. **pre-release tag crossplane**: Run tag pipeline to tag the start of
pre-releases in master in the crossplane repo
1. **branch crossplane**: Create a new release branch using the GitHub UI for
the crossplane repo
1. **crossplane release branch prep**: In Crossplane's release branch, update
all examples, docs, and integration tests to update references and versions,
including the yet to be released versions of providers and stacks.
1. **tag**: Run the tag pipeline to tag Crossplane's release branch with an
official semver
1. **release providers**: Run the release process for each **provider** that we
maintain
1. **pin dependencies**: Update the go modules of each provider repo to
depend on the new version of crossplane and crossplane-runtime.
1. **pre-release tag**: Run tag pipeline to tag the start of pre-releases in
**master** of each provider repo
1. **branch**: Create a new release branch using the GitHub UI for the
provider repo
1. **release branch prep**: In the release branch, update all examples,
docs, and integration tests to update references and versions
1. **test**: Test builds from the release branch, fix any critical bugs that
are found
1. **tag**: Run the tag pipeline to tag the release branch with an official
semver
1. **build/publish**: Run build pipeline to publish build with official
semver
1. **release template stacks**: Run the release process for each **template
stack** that we maintain. Note that the process for template stacks is
slightly different from the stack release process.
1. **test**: Test builds from the release branch (typically `master`), fix
any critical bugs that are found
1. **version**: Update all version information in the docs, as appropriate
1. **tag**: Run the tag pipeline to tag the release branch with an official
semver
1. **build/publish**: Run the publish pipeline to publish build with
official semver
1. **build/publish**: Run build pipeline to publish Crossplane build from
release branch with official semver
1. **verify**: Verify all artifacts have been published successfully, perform
sanity testing
1. **promote**: Run promote pipelines on all repos to promote releases to
desired channel(s)
1. **release notes**: Publish well authored and complete release notes on GitHub
1. **announce**: Announce the release on Twitter, Slack, etc.
## Detailed Process
This section will walk through the release process in more fine grained and
prescriptive detail.
### Scope
This document will cover the release process for all of the repositories that
the Crossplane team maintains and publishes regular versioned artifacts from.
This set of repositories covers both core Crossplane and the set of Providers,
Stacks, and Apps that Crossplane currently maintains:
* [`crossplane`](https://github.com/crossplane/crossplane)
* [`provider-gcp`](https://github.com/crossplane/provider-gcp)
* [`provider-aws`](https://github.com/crossplane/provider-aws)
* [`provider-azure`](https://github.com/crossplane/provider-azure)
* [`provider-rook`](https://github.com/crossplane/provider-rook)
* [`stack-minimal-gcp`](https://github.com/crossplane/stack-minimal-gcp)
* [`stack-minimal-aws`](https://github.com/crossplane/stack-minimal-aws)
* [`stack-minimal-azure`](https://github.com/crossplane/stack-minimal-azure)
* [`app-wordpress`](https://github.com/crossplane/app-wordpress)
The release process for Providers is almost identical to that of core Crossplane
because they use the same [shared build
logic](https://github.com/upbound/build/). The steps in this guide will apply
to all repositories listed above unless otherwise mentioned.
### Feature Freeze
Feature freeze should be performed on all repos. In order to start the feature
freeze period, the following conditions should be met:
* All expected features should be
["complete"](https://github.com/crossplane/crossplane/blob/master/design/one-pager-definition-of-done.md)
and merged into master. This includes user guides, examples, API documentation
via [crossdocs](https://github.com/negz/crossdocs/), and test updates.
* All issues in the
[milestone](https://github.com/crossplane/crossplane/milestones) should be
closed
* Sanity testing has been performed on `master`
After these conditions are met, the feature freeze begins by creating the RC tag
and the release branch.
### Pin Dependencies
It is a best practice to release Crossplane projects with "pinned" dependencies
to specific versions of other upstream Crossplane projects. For example, after
crossplane-runtime has been released, we want to update the main Crossplane repo
to use that specific released version.
To update a dependency to a specific version, simply edit the `go.mod` file to
point to the desired version, then run `go mod tidy`.
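For example, to pin the main Crossplane repo to a released version of crossplane-runtime (the version number here is illustrative):

```shell
# Pin crossplane-runtime to a specific released version, then tidy.
go mod edit -require=github.com/crossplane/crossplane-runtime@v0.9.0
go mod tidy

# Review the pinned versions before committing.
git diff go.mod go.sum
```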
### Pre-release Tag
The next step is to create the pre-release tag for the `HEAD` commit in
`master`. This tag serves as an indication of when the release was branched
from master and is also important for generating future versions of `master`
builds since that [versioning
process](https://github.com/upbound/build/blob/master/makelib/common.mk#L182-L196)
is based on `git describe --tags`.
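A throwaway-repository demo of how `git describe --tags` derives build versions from the pre-release tag (repository contents and version numbers are illustrative):

```shell
# Create a scratch repo, tag it with a pre-release tag, then add a
# commit on top and see how `git describe --tags` reports the version.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "feature freeze"
git tag v0.10.0-rc
git -c user.email=ci@example.com -c user.name=ci \
  commit -q --allow-empty -m "next feature"
git describe --tags   # e.g. v0.10.0-rc-1-g<short-hash>
```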
> **NOTE:** The `tag` pipeline does not yet support additional (pre-release)
tags in the version number, such as `v0.5.0-rc`.
[#330](https://github.com/crossplane/crossplane/issues/330) will be resolved
when this functionality is available. In the meantime, **manually tagging and
pushing to the repo is required**. Ignore the steps below about running the
pipeline because the pipeline won't work.
To accomplish this, run the `tag` pipeline for each repo on the `master` branch.
You will be prompted to enter the `version` for the tag and the `commit` hash to
tag. It's possible to leave the `commit` field blank to default to tagging
`HEAD`.
Since this tag will essentially be the start of pre-releases working towards the
**next** version, the `version` should be the **next** release number, plus a
trailing tag to indicate it is a pre-release. The current convention is to use
`*-rc`. For example, when we are cutting the `v0.9.0` release and we are
ready for master to start working towards the **next** release of `v0.10.0`, we
would make the tag `v0.10.0-rc`.
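The convention above can be sketched as a small helper (an illustration only; the real pipeline prompts for the version):

```shell
# Compute the pre-release tag for master from the version being
# released: bump the minor version and append the "-rc" suffix.
next_rc_tag() {
  ver="${1#v}"          # strip the leading "v"
  major="${ver%%.*}"
  rest="${ver#*.}"
  minor="${rest%%.*}"
  printf 'v%s.%s.0-rc\n' "$major" "$((minor + 1))"
}

next_rc_tag v0.9.0   # prints v0.10.0-rc
```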
After the tag pipeline has succeeded, verify in the [GitHub
UI](https://github.com/crossplane/crossplane/tags) that the tag was successfully
applied to the correct commit.
### Create Release Branch
Creating the release branch can be done within the [GitHub
UI](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/creating-and-deleting-branches-within-your-repository).
Basically, you just use the branch selector drop down and type in the name of
the new release branch, e.g. `release-0.5`. Release branch names always follow
the convention of `release-[minor-semver]`.
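The naming convention can be expressed as a tiny sketch (not part of any tooling):

```shell
# Derive the release branch name from a full semver tag,
# following the release-[minor-semver] convention.
release_branch() {
  ver="${1#v}"          # strip the leading "v"
  major="${ver%%.*}"
  rest="${ver#*.}"
  minor="${rest%%.*}"
  printf 'release-%s.%s\n' "$major" "$minor"
}

release_branch v0.5.0   # prints release-0.5
```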
If this is the first ever release branch being created in a repo (uncommon), you
should also set up branch protection rules for the `release-*` pattern. You can
find existing examples in the [Crossplane repo
settings](https://github.com/crossplane/crossplane/settings/branches).
At this point, the `HEAD` commit in the release branch will be our release
candidate. The build pipeline will automatically be started due to the create
branch event, so we can start to perform testing on this build. Note that it
should be the exact same as what is currently in `master` since they are using
the same commit and have the same tag. Also note that this is not the official
release build since we have not made the official release tag yet (e.g.
`v0.5.0`).
The `master` branch can now be opened for new features since we have a safe
release branch to continue bug fixes and improvements for the release itself.
Essentially, `master` is free to now diverge from the release branch.
### Release Branch Prep
In the core Crossplane repository, we need to update the release branch docs and
examples to point to the new versions that we will be releasing soon.
* Documentation, such as [Installation
instructions](https://github.com/crossplane/crossplane/blob/release-0.9/docs/install.md#installing-infrastructure-providers),
and
[Stack](https://github.com/crossplane/crossplane/blob/release-0.9/docs/stack.md)
and
[App](https://github.com/crossplane/crossplane/blob/release-0.9/docs/app.md)
guides.
* searching for `:master` will help a lot here
* Examples, such as [`StackInstall` yaml
files](https://github.com/crossplane/crossplane/tree/release-0.9/cluster/examples/provider)
* [Helm chart
defaults](https://github.com/crossplane/crossplane/blob/release-0.9/cluster/charts/crossplane/values.yaml.tmpl),
ensure all `values.yaml.tmpl` files are updated.
* provider versions
* `templating-controller` version (if a new version is available and ready)
#### Bug Fixes in Release Branch
During our testing of the release candidate, we may find issues or bugs that we
triage and decide we want to fix before the release goes out. In order to fix a
bug in the release branch, the following process is recommended:
1. Make the bug fix into `master` first through the normal PR process
1. If the applicable code has already been removed from `master` then simply
fix the bug directly in the release branch by opening a PR directly
against the release branch
1. Backport the fix by performing a cherry-pick of the fix's commit hash
(**not** the merge commit) from `master` into the release branch. For
example, to backport a fix from master to `v0.5.0`, something like the
following should be used:
```console
git fetch --all
git checkout -b release-0.5 upstream/release-0.5
git cherry-pick -x <fix commit hash>
```
1. Open a PR with the cherry-pick commit targeting the release-branch
After all bugs have been fixed and backported to the release branch, we can move
on to tagging the final release commit.
### Tag Core Crossplane
Similar to running the `tag` pipelines for each stack, now it's time to run the
[`tag`
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fcrossplane%2Fcrossplane-tag/branches)
for core Crossplane. In fact, the [instructions](#stack-tag-pipeline) are
exactly the same:
Run the tag pipeline by clicking the Run button in the Jenkins UI in the correct
release branch's row. You will be prompted for the version you are tagging,
e.g., `v0.5.0` as well as the commit hash. The hash is optional and if you leave
it blank it will default to `HEAD` of the branch, which is what you want.
> **Note:** The first time you run a pipeline on a new branch, you won't get
> prompted for the values to input. The build will quickly fail and then you can
> run (not replay) it a second time to be prompted. This is a Jenkins bug that
> is tracked by [#41929](https://issues.jenkins-ci.org/browse/JENKINS-41929) and
> has been open for almost 3 years, so don't hold your breath.
### Draft Release Notes
We're getting close to starting the official release, so you should take this
opportunity to draft up the release notes. You can create a [new release draft
here](https://github.com/crossplane/crossplane/releases/new). Make sure you
select "This is a pre-release" and hit "Save draft" when you are ready to share
and collect feedback. Do **not** hit "Publish release" yet.
You can see and follow the template and structure from [previous
releases](https://github.com/crossplane/crossplane/releases).
### Provider Release Process
This section will walk through how to release the Providers and does not
directly apply to core Crossplane.
#### Pin Provider Dependencies
Similar to core crossplane, each provider should have its crossplane-related
dependencies pinned to the versions that we are releasing. In the **master**
branch of each provider repo, update the `crossplane` and `crossplane-runtime`
dependencies to the versions we are releasing.
Simply edit `go.mod` with the new versions, then run `go mod tidy`.
The providers also depend on `crossplane-tools`, but that currently does not
have official releases, so in practice should be using the latest from master.
#### Provider Pre-release tag
Follow the same steps that we did for core crossplane to tag the **master**
branch of each provider repo with a pre-release tag for the **next** version.
These steps can be found in the [pre-release tag section](#pre-release-tag).
#### Create Provider Release Branches
Now create a release branch for each of the provider repos using the GitHub UI.
The steps are the same as what we did to [create the release
branch](#create-release-branch) for core crossplane.
#### Provider Release Branch Prep
In the **release branch** for each provider, you should update the version tags
and metadata in:
* `integration_tests.sh` - `STACK_IMAGE`
* `ClusterStackInstall` sample and example yaml files
* `*.resource.yaml` - docs links in markdown
* Not all of these `*.resource.yaml` files have links that need to be updated,
they are infrequent and inconsistent
Searching for `:master` will be a big help here.
#### Provider Tag, Build, and Publish
Now that the Providers are all tested and their version metadata has been
updated, it's time to tag the release branch with the official version tag. You
can do this by running the `tag` pipeline on the release branch of each
Provider:
* [`provider-gcp` tag
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fprovider-gcp-pipelines%2Fprovider-gcp-tag/branches)
* [`provider-aws` tag
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fprovider-aws-pipelines%2Fprovider-aws-tag/branches/)
* [`provider-azure` tag
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fprovider-azure-pipelines%2Fprovider-azure-tag/branches/)
* [`provider-rook` tag
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fprovider-rook-pipelines%2Fprovider-rook-tag/branches/)
* Run the `tag` pipeline on the release branch
* Enter the version and commit hash (leave blank for `HEAD`)
* The first time you run on a new release branch, you won't be prompted and the
build will fail, just run (not replay) a second time
After the tag pipeline has been run and the release branch has been tagged, you
can run the normal build pipeline on the release branch. This will kick off the
official release build and upon success, all release artifacts will be
officially published.
After the release build succeeds, verify that the correctly versioned Provider
images have been pushed to Docker Hub.
### Template Stack Release Process
The Template Stacks we maintain are slightly different from the controller-based
stacks that we maintain. Their processes are similar but a little simpler. This
section will walk through how to release the Template Stacks themselves, and
does not directly apply to core Crossplane.
For Template Stacks, we do not use release branches unless we need to make a
patch release. In the future we may need a more robust branching strategy, but
for now we are not using branches because it is simpler.
Note that Template Stacks **do not** require any code changes to update their
version. A slight exception to this is for their `behavior.yaml` files, which
should have the `controllerImage` field updated if a new version of the
`templating-controller` is available and ready.
### Template Stack Tag And Publish Pipeline
Here is the list of all template stacks:
* [`stack-minimal-gcp`](https://github.com/crossplane/stack-minimal-gcp)
* [`stack-minimal-aws`](https://github.com/crossplane/stack-minimal-aws)
* [`stack-minimal-azure`](https://github.com/crossplane/stack-minimal-azure)
* [`app-wordpress`](https://github.com/crossplane/app-wordpress)
Each one should be released as part of a complete release, using the
instructions below. To read even more about the template stack release process,
see [the release section of this
document](https://github.com/crossplane/cicd/blob/master/docs/pipelines.md#how-do-i-cut-a-release).
Note that there's also the
[`templating-controller`](https://github.com/crossplane/templating-controller),
which supports template stacks. It is possible that it **may** need to be
released as well, but typically is released independently from Crossplane.
#### Tag the Template Stack
Once a template stack is tested and ready for cutting a semver release, we will
want to tag the repository with the new release version. In most cases, to get
the version, take a look at the most recent tag in the repo, and increment the
minor version. For example, if the most recent tag was `v0.2.0`, the new tag
should be `v0.3.0`.
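The "increment the minor version" convention can be sketched with a couple of shell commands. This is illustrative only, not part of the official release tooling:

```shell
# Find the most recent tag in the repo, e.g. v0.2.0.
latest="$(git describe --tags --abbrev=0)"
# Split off the major and minor components and bump the minor.
major="$(echo "${latest#v}" | cut -d. -f1)"
minor="$(echo "${latest#v}" | cut -d. -f2)"
# Print the next minor release tag, e.g. v0.3.0.
echo "v${major}.$((minor + 1)).0"
```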
Run the template stack's tag job on Jenkins against the `master` branch and
enter the new tag to use. If the current release candidate is not the head of
`master`, also enter the commit to tag.
You can find the tag pipeline for an individual stack by going to the
[crossplane org in Jenkins](https://jenkinsci.upbound.io/job/crossplane/),
opening the folder with the same name as the template stack, going to its
`tag` job group, and then to the `master` branch job under that group. For
example, here is [a link to the stack-minimal-gcp tag job for
master](https://jenkinsci.upbound.io/job/crossplane/job/stack-minimal-gcp/job/tag/job/master/).
> **Note:** The first time you run a pipeline on a new branch, you won't get
> prompted for the values to input and the build will fail. See details in the
> [tagging core crossplane section](#tag-core-crossplane).
#### Publish the Template Stack
After the tag pipeline has been run and the repository has been tagged, you can
run the `publish` job for the template stack. For example, here's a [link to the
stack-minimal-gcp publish
job](https://jenkinsci.upbound.io/job/crossplane/job/stack-minimal-gcp/job/publish/job/master/).
This will kick off the official release build, and upon success all release
artifacts will be officially published. In most cases this should also be run
from the `master` branch, or from the release branch if one was used. Provide
the tag for the current release in the tag parameter; for example, if the new
tag we created was `v0.3.0`, we would provide `v0.3.0` to the `publish` job.
#### Verify the Template Stack was Published
After the publish build succeeds, verify that the correctly versioned template
stack images have been pushed to Docker Hub.
### Template Stack Patch Releases
To do a patch release with a template stack, create a release branch from the
minor version tag on the `master` branch, if a release branch doesn't already
exist. Then, the regular tagging and publishing process for template stacks can
be followed, incrementing the patch version to get the new release tag.
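Creating that patch-release branch from the minor version tag is a one-liner. The branch and tag names below are examples, not a fixed convention:

```shell
# Start a release branch at the existing minor release tag.
git checkout -b release-0.3 v0.3.0
```

After pushing the branch, follow the normal tag and publish pipelines with the incremented patch version (e.g. `v0.3.1`).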
### Build and Release Core Crossplane
After the providers, stacks, and apps have all been released, ensure the [normal
build
pipeline](https://jenkinsci.upbound.io/blue/organizations/jenkins/crossplane%2Fcrossplane%2Fbuild/branches)
is run on the release branch for core crossplane. This will be the official
release build with an official version number and all of its release artifacts
will be published.
After the pipeline runs successfully, you should verify that all artifacts have
been published to:
* [Docker Hub](https://hub.docker.com/repository/docker/crossplane/crossplane)
* [S3 releases bucket](https://releases.crossplane.io/)
* [Helm chart repository](https://charts.crossplane.io/)
* [Docs website](https://crossplane.io/docs/latest)
### Promotion
If everything looks good with the official versioned release that we just
published, we can go ahead and run the `promote` pipeline for the core
crossplane and provider repos. This is a very quick pipeline that doesn't
rebuild anything; it simply makes metadata changes so that the published
release is also included in the channel of your choice.
Currently, we only support the `master` and `alpha` channels.
For the core crossplane and each provider repo, run the `promote` pipeline on
the release branch and enter the version you would like to promote (e.g.
`v0.5.0`) and the channel you'd like to promote it to. The first time you run
this pipeline on a new release branch, you will not be prompted for values, so
the pipeline will fail. Just run (not replay) it a second time to be prompted.
* Run `promote` pipeline for `master` channel
* Run `promote` pipeline for `alpha` channel
After the `promote` pipelines have succeeded, verify on Docker Hub and in the
Helm chart repository that the release has been promoted to the right channels.
### Publish Release Notes
Now that the release has been published and verified, you can publish the
[release notes](https://github.com/crossplane/crossplane/releases) that you
drafted earlier. After incorporating all feedback, you can now click on the
"Publish release" button.
This will send an email notification with the release notes to all watchers of
the repo.
### Announce Release
We have completed the entire release, so it's now time to announce it to the
world. Using the [@crossplane_io](https://twitter.com/crossplane_io) Twitter
account, tweet about the new release and blog. You'll see examples from the
previous releases, such as this tweet for
[v0.4](https://twitter.com/crossplane_io/status/1189307636350705664).
Post a link to this tweet on the Slack #announcements channel, then copy a link
to that and post it in the #general channel.
### Patch Releases
We also have the ability to run patch releases to update previous releases that
have already been published. These patch releases are always run from the last
release branch; we do **not** create a new release branch for a patch release.
The basic flow is **very** similar to a normal release, but with a few fewer
steps. Please refer to the details for each step in the sections above.
* Fix any bugs in `master` first and then `cherry-pick -x` to the release branch
* If `master` has already removed the relevant code then make your fix
directly in the release branch
* After all testing on the release branch looks good and any docs/examples/tests
have been updated with the new version number, run the `tag` pipeline on the
release branch with the new patch version (e.g. `v0.5.1`)
* Run the normal build pipeline on the release branch to build and publish the
release
* Publish release notes
* Run promote pipeline to promote the patch release to the `master` and `alpha`
channels
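The `cherry-pick -x` step above looks roughly like the following. The branch name and commit SHA are placeholders:

```shell
# Land the fix on master first, then bring it to the release branch.
# release-0.5 and abc1234 stand in for the real branch and commit SHA.
git checkout release-0.5
# -x appends "(cherry picked from commit ...)" to the commit message,
# which keeps the link back to the original fix on master.
git cherry-pick -x abc1234
```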


@ -1,18 +0,0 @@
apiVersion: database.alibaba.crossplane.io/v1alpha1
kind: RDSInstance
metadata:
  name: rdspostgresql
spec:
  forProvider:
    engine: PostgreSQL
    engineVersion: "9.4"
    dbInstanceClass: rds.pg.s1.small
    dbInstanceStorageInGB: 20
    securityIPList: "0.0.0.0/0"
    masterUsername: "test123"
  writeConnectionSecretToRef:
    namespace: crossplane-system
    name: alibaba-rdspostgresql-conn
  providerRef:
    name: alibaba-provider
  reclaimPolicy: Delete

Some files were not shown because too many files have changed in this diff.