docs snapshot for crossplane version `master`

This commit is contained in:
Crossplane 2019-09-18 16:09:43 +00:00
parent c6571d5b24
commit 29de398f88
3 changed files with 1151 additions and 9 deletions

---
title: Concepts
toc: true
weight: 310
---
# Table of Contents
1. [Concepts](#concepts)
2. [Feature Areas](#feature-areas)
3. [Glossary](#glossary)
# Concepts
## Control Plane
Crossplane is an open source multicloud control plane that consists of smart controllers that can work across clouds to enable workload portability, provisioning and full-lifecycle management of infrastructure across a wide range of providers, vendors, regions, and offerings.
The control plane presents a declarative management style API that covers a wide range of portable abstractions that facilitate these goals across disparate environments, clusters, regions, and clouds.
Crossplane can be thought of as a higher-order orchestrator across cloud providers.
For convenience, Crossplane can run directly on top of an existing Kubernetes cluster without requiring any changes, even though Crossplane does not necessarily schedule or run any containers on the host cluster.
## Resources and Workloads
In Crossplane, a *resource* represents an external piece of infrastructure ranging from low level services like clusters and servers, to higher level infrastructure like databases, message queues, buckets, and more.
Resources are represented as persistent objects within Crossplane, and they typically manage one or more pieces of external infrastructure within a cloud provider or cloud offering.
Resources can also represent local or in-cluster services.
For example, a container workload can include a set of objects that will be deployed to a target Kubernetes cluster.
A serverless workload could include a function that will run on a serverless managed service.
Workloads can contain requirements for where and how the workload can run, including regions, providers, affinity, cost, and others that the scheduler can use when assigning the workload.
## Resource Claims and Resource Classes
To support workload portability we expose the concept of a resource claim and a resource class.
A resource claim is a persistent object that captures the desired configuration of a resource from the perspective of a workload or application.
Its configuration is cloud-provider and cloud-offering independent, and it is free of implementation and environmental details.
A resource class is configuration that contains implementation details specific to a certain environment or deployment, and policies related to a kind of resource.
A ResourceClass acts as a template with implementation details and policy for resources that will be dynamically provisioned by the workload at deployment time.
A resource class is typically created by an admin or infrastructure owner.
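To make this concrete, here is a hedged sketch of a claim and a class (the kinds, API groups, and field names below are illustrative assumptions, not taken from this page):

```yaml
# Hypothetical resource claim: expresses the need for a MySQL database
# without naming any cloud provider or implementation.
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
  name: app-database
  namespace: app-project1-dev
spec:
  classRef:
    name: mysql-standard      # hypothetical class name
  engineVersion: "5.6"
---
# Hypothetical resource class: carries the provider-specific
# implementation details used when provisioning dynamically.
apiVersion: database.aws.crossplane.io/v1alpha2
kind: RDSInstanceClass
metadata:
  name: mysql-standard
specTemplate:
  class: db.t2.small
  engine: mysql
  size: 20
```

The claim stays portable while the class pins down the environment-specific details.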
## Dynamic and Static Provisioning
A resource can be statically or dynamically provisioned.
The newly provisioned resource is automatically bound to the resource claim.
To enable dynamic provisioning the administrator needs to create one or more resource class objects.
## Connection Secrets
Workloads reference all the resources they consume in their `resources` section.
This helps Crossplane set up connectivity between the workload and resource, and create objects that hold connection information.
For example, for a database provisioned and managed by Crossplane, a secret will be created that contains a connection string, user and password.
This secret will be propagated to the target cluster so that it can be used by the workload.
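As a hedged sketch (the key names vary by managed service and are illustrative here), such a connection secret might look like:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-database-connection   # hypothetical secret name
type: Opaque
data:
  endpoint: bXlzcWwuZXhhbXBsZS5jb20=   # base64 of "mysql.example.com"
  username: bWFzdGVydXNlcg==           # base64 of "masteruser"
  password: c2VjcmV0cGFzcw==           # base64 of "secretpass"
```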
## Secure Connectivity
To provide secure network connectivity between application deployments in a target cluster
and the managed services they use, Crossplane supports provisioning and lifecycle
management of networks, subnets, peering, and firewall rules.
## Stacks
Stacks extend Crossplane with new functionality. Crossplane provides Stacks for GCP, AWS,
and Azure that are installed with a Stack Manager that can download packages,
resolve dependencies, and execute controllers.
Stacks are designed for simplified RBAC configuration and namespace
isolation for improved security in multi-team environments. Stacks are published to a registry
where they can be downloaded, explored, and organized.
Stacks enable the community to add support for more cloud providers and managed services. Stacks support
out-of-tree extensibility so they can be released on their own schedule. A CLI can init,
build, publish, install, and uninstall Stacks from developer laptops or
in continuous delivery pipelines.
Stacks for GCP, AWS, and Azure support provisioning managed services (database, cache, buckets),
managed clusters (GKE, EKS, AKS), and secure connectivity (networks, subnets, firewall rules).
Stacks for independent cloud offerings can be installed alongside the Stacks for GCP, AWS, and Azure
to customize Crossplane with the right mix of managed services for your organization.
# Feature Areas
Crossplane has four main feature areas: Services, Stacks, Clusters and Workloads.
## Crossplane Services
Crossplane supports provisioning managed services using `kubectl`. It applies
the Kubernetes pattern for Persistent Volume (PV)
claims and classes to managed service provisioning with support for a strong
separation of concern between app teams and cluster administrators.
App teams can choose between cloud-specific and portable services including
managed databases, message queues, buckets, data pipelines, and more to define
complete applications, build once, and deploy into multiple clouds using
continuous delivery pipelines or GitOps flows.
Cluster administrators can define self-service policies and best-practice
configurations to accelerate app delivery and improve security, so app teams can
focus on delivering their app instead of cloud-specific infrastructure details.
Secure connectivity between managed services and managed Kubernetes clusters is also supported
in Crossplane such that private networking can be established declaratively using
`kubectl`.
Crossplane is designed to support the following types of managed services.
### Managed Kubernetes Services
Managed Kubernetes is currently supported for GKE, EKS, and AKS.
Kubernetes clusters are another type of resource that can be dynamically provisioned using a
generic resource claim by the application developer and an environment specific resource
class by the cluster administrator.
Future support for additional managed services.
### Database Services
Support for PostgreSQL, MySQL, and Redis.
Database managed services can be statically or dynamically provisioned by Crossplane in AWS, GCP, and Azure.
An application developer simply has to specify their general need for a database such as MySQL,
without any specific knowledge of what environment that database will run in or even what
specific type of database it will be at runtime.
The cluster administrator specifies a resource class that acts as a template with the
implementation details and policy specific to the environment that the generic MySQL resource is being deployed to.
This enables the database to be dynamically provisioned at deployment time without the
application developer needing to know any of the details, which promotes portability and reusability.
Future support for additional managed services.
### Storage Services
Support for S3, GCP buckets, and Azure Blob storage.
Future support for additional managed services.
### Networking Services
Support for networks, subnets, and firewall rules.
Future support for additional managed services.
### Load Balancing Services
Future support.
### Cloud DNS Services
Future support.
### Advanced Networking Connectivity Services
Future support.
### Big Data Services
Future support.
### Machine Learning Services
Future support.
## Crossplane Stacks
Stacks extend Crossplane with new functionality.
See [Stacks](#stacks).
## Crossplane Workloads
Crossplane includes an extensible workload scheduler that observes application
policies to select a suitable target cluster from a pool of available clusters.
The workload scheduler can be customized to consider a number of criteria including
capabilities, availability, reliability, cost, regions, and performance while
deploying workloads and their resources. Complex workloads can be modeled as a `KubernetesApplication`.
## Crossplane Clusters
Crossplane supports dynamic provisioning of managed
Kubernetes clusters from a single control plane with consistent multi-cluster
best-practice configuration and secure connectivity between target Kubernetes
clusters and the managed services provisioned for applications. Managed
Kubernetes clusters can be dynamically provisioned with a `KubernetesCluster`.
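As an illustrative sketch (the exact API group and fields are assumptions, not taken from this page), such a cluster claim might look like:

```yaml
apiVersion: compute.crossplane.io/v1alpha1
kind: KubernetesCluster
metadata:
  name: app-cluster
  namespace: app-project1-dev
spec:
  classRef:
    name: standard-cluster   # hypothetical cluster resource class
```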
# Glossary
## Kubernetes
Crossplane is built on the Kubernetes API machinery as a platform for declarative management.
We rely on common terminology from the [Kubernetes Glossary][kubernetes-glossary] where possible,
and we don't seek to reproduce that glossary here.
[kubernetes-glossary]: https://kubernetes.io/docs/reference/glossary/?all=true
However we'll summarize some commonly used concepts for convenience.
### CRD
A standard Kubernetes Custom Resource Definition (CRD), which defines a new type of resource that can be managed declaratively.
This serves as the unit of management in Crossplane.
The CRD is composed of spec and status sections and supports API-level versioning (e.g., v1alpha1).
### Controller
A standard Kubernetes custom controller, providing an active control loop that owns one or more CRDs.
It can be implemented in different ways, such as
Go code (controller-runtime), templates, functions/hooks, a new DSL, etc.
The implementation itself is versioned using semantic versioning (e.g., v1.0.4).
### Namespace
Allows logical grouping of resources in Kubernetes that can be secured with RBAC rules.
## Crossplane
### Stack
The unit of extending Crossplane with new functionality. A stack is a Controller that
owns one or more CRDs and depends on zero or more CRDs.
See [Stacks](#stacks).
### Stack Registry
A registry where Stacks can be published, downloaded, explored, and categorized.
The registry understands a Stack's custom controller and its CRDs and indexes by both, so you can look up a custom controller by CRD name and vice versa.
### Stack Package Format
The package format for Stacks that contains the Stack definition, metadata, icons, CRDs, and other Stack specific files.
### Stack Manager
The component that is responsible for installing a Stack's custom controllers and resources in Crossplane. It can download packages, resolve dependencies, install resources, and execute controllers. This component is also responsible for managing the complete lifecycle of Stacks, including upgrading them as new versions become available.
### Application Stack
App Stacks simplify operations for an app by moving app lifecycle management into a Kubernetes controller
that owns an app CRD with a handful of settings required to deploy a new app instance,
complete with the managed services it depends on.
Application Stacks depend on Infrastructure Stacks like stack-gcp, stack-aws,
and stack-azure to provide managed services via the Kubernetes API.
### Infrastructure Stack
Infrastructure Stacks like stack-gcp, stack-aws, and stack-azure extend Crossplane
to support managed service provisioning (DBaaS, cache, buckets), secure connectivity
(VPCs, subnets, peering, ACLs, secrets), and provisioning managed Kubernetes clusters
on demand to further isolate the blast radius of applications.
### Cloud Provider Stack
See [infrastructure-stack](#infrastructure-stack).
### Crossplane Instance
Crossplane is a multicloud control plane that happens to use the Kubernetes API machinery
as a platform for declarative management. A Crossplane Instance is an instance of a
Kubernetes API server with the Crossplane Stack Manager installed into it, capable of
installing cloud provider or application Stacks to build a custom control plane for one
or more environments.
### Dedicated Crossplane Instance
Crossplane instance running on a dedicated k8s API server with no Kubernetes worker nodes.
The Dedicated Crossplane Instance is separate from the target Kubernetes cluster(s) where
application deployments and pods are scheduled to run.
### Embedded Crossplane Instance
Crossplane instance running on the same Kubernetes API server as the Kubernetes target cluster
where app deployments and pods will run.
### Cloud Provider
Cloud provider such as GCP, AWS, Azure offering IaaS, cloud networking, and managed services.
### Managed Service Provider
A managed service provider, such as Elastic Cloud, mLab, or PKS, that runs on cloud provider IaaS.
### Provider
A Crossplane kind that connects Crossplane to a cloud provider or managed service provider.
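For example, a hedged sketch of an AWS `Provider` object (the exact schema varies by Stack; the secret name here is hypothetical):

```yaml
apiVersion: aws.crossplane.io/v1alpha2
kind: Provider
metadata:
  name: aws-provider
  namespace: crossplane-system
spec:
  region: eu-west-1
  credentialsSecretRef:    # Secret holding AWS credentials
    name: aws-creds        # hypothetical secret name
    key: credentials
```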
### Infrastructure
Infrastructure ranging from low-level services like clusters and servers,
to higher-level infrastructure like databases, message queues, buckets,
secure connectivity, managed Kubernetes, and more.
### Infrastructure Namespace
Crossplane supports connecting multiple cloud provider accounts from
a single control plane, so different environments (dev, staging, prod) can
use separate accounts, projects, and/or credentials.
The provider and resource classes for these environments can be kept separate
using an infrastructure namespace (gcp-infra-dev, aws-infra-dev, or azure-infra-dev)
for each environment. You can create as many as you like using whatever naming works best for your organization.
### Project Namespace
When running a shared control plane or cluster it's a common practice to
create separate project namespaces (app-project1-dev) for each app project or team so their resources
are kept separate and secure. Crossplane supports this model.
### App Project Namespace
See [project-namespace](#project-namespace).
### Dynamic Provisioning
Dynamic provisioning is when a resource claim does not find a matching resource and provisions
a new one instead. The newly provisioned resource is automatically bound to the resource claim.
To enable dynamic provisioning the administrator needs to create one or more resource class objects.
### Static Provisioning
Static provisioning is when an administrator creates the resource manually. They set the configuration required to
provision and manage the corresponding external resource within a cloud provider or cloud offering.
Once provisioned, resources are available to be bound to resource claims.
### Resource
A resource represents an external piece of infrastructure ranging from low level services like clusters and
servers, to higher level infrastructure like databases, message queues, buckets, and more.
### External Resource
An actual resource that exists outside Kubernetes, typically in the cloud.
AWS RDS and GCP Cloud Memorystore instances are external resources.
### Managed Resource
The Crossplane representation of an external resource.
The `RDSInstance` and `CloudMemorystoreInstance` Kubernetes kinds are managed
resources. A managed resource models the satisfaction of a need; i.e. the need
for a Redis Cluster is satisfied by the allocation (aka binding) of a
`CloudMemoryStoreInstance`.
### Resource Claim
The Crossplane representation of a request for the
allocation of a managed resource. Resource claims typically represent the need
for a managed resource that implements a particular protocol. `MySQLInstance`
and `RedisCluster` are examples of resource claims.
### Resource Class
The Crossplane representation of the desired configuration
of a managed resource. Resource claims reference a resource class in order to
specify how they should be satisfied by a managed resource.
### Cloud-Specific Resource Class
Cloud-specific Resource Classes capture reusable, best-practice configurations for a specific managed service.
For example, Wordpress requires a MySQL database which can be satisfied by CloudSQL, RDS, or Azure DB, so
cloud-specific resource classes would be created for CloudSQL, RDS, and Azure DB.
### Non-Portable Resource Class
See [cloud-specific resource class](#cloud-specific-resource-class).
### Portable Resource Class
Portable Resource Classes define a named class of service that can be used by portable `Resource Claims`
in the same namespace. When used in a project namespace, this enables the
project to provision portable managed services using `kubectl`.
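A minimal sketch, assuming a MySQL class of service (the kind and field names are illustrative, not an exact schema):

```yaml
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstanceClass
metadata:
  name: mysql-standard
  namespace: app-project1-dev
classRef:                      # delegate to a cloud-specific class
  apiVersion: database.aws.crossplane.io/v1alpha2
  kind: RDSInstanceClass
  name: aws-mysql-standard
  namespace: aws-infra-dev
```

Claims in `app-project1-dev` can then reference `mysql-standard` without knowing which cloud backs it.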
### Connection Secret
A Kubernetes `Secret` encoding all data required to
connect to (or consume) an external resource.
### Claimant
The Kubernetes representation of a process wishing
to connect to a managed resource, typically a `Pod` or some abstraction
thereupon such as a `Deployment` or `KubernetesApplication`.
### Consumer
See [claimant](#claimant).
### Workload
We model workloads as schedulable units of work that the user intends to run on a cloud provider.
Crossplane will support multiple types of workloads including container and serverless.
You can think of workloads as units that run your code and applications.
Every type of workload has a different kind of payload.
### Kubernetes Application
A `KubernetesApplication` is a type of workload with a `KubernetesCluster` label selector
used for scheduling, and a series of resource templates representing resources
to be deployed to the scheduled cluster; its managed resources are provisioned
and securely connected to the application.
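A skeletal example, abbreviated to a single ConfigMap template (field names are illustrative):

```yaml
apiVersion: workload.crossplane.io/v1alpha1
kind: KubernetesApplication
metadata:
  name: sample-app
spec:
  clusterSelector:           # schedule onto a matching KubernetesCluster
    matchLabels:
      cluster: demo
  resourceTemplates:
    - metadata:
        name: sample-app-config
      spec:
        template:
          apiVersion: v1
          kind: ConfigMap
          metadata:
            name: sample-config
          data:
            greeting: hello
```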
### Cluster
A Kubernetes cluster.
### Target Cluster
A Kubernetes cluster where application deployments and pods are scheduled to run.
### Managed Cluster
A Managed Kubernetes cluster from a service provider such as GKE, EKS, or AKS.
### In-Tree
In-tree means its source code lives in a core Crossplane git repository.
### Out-of-Tree
Out-of-tree means its source code lives outside of a core Crossplane git repository.
Often used to refer to Crossplane extensions, controllers or Stacks.
Out-of-tree extensibility enables the community to build, release, publish,
and install Crossplane extensions separately from the core Crossplane repos.


## B) Onboard app projects in a shared cluster
### Offer Portable Classes of Service in App Project Namespaces
[Portable Resource Classes][concept-portable-class]
define a named class of service that can be used by portable `Resource Claims`
in the same namespace. When used in a project namespace, this enables the
project to provision portable managed services using `kubectl`.

# Deploying Wordpress in Amazon Web Services (AWS)
This user guide will walk you through Wordpress application deployment using
Crossplane managed resources and the official Wordpress Docker image.
## Table of Contents
1. [Pre-requisites](#pre-requisites)
1. [Preparation](#preparation)
1. [Set Up an EKS Cluster](#set-up-an-eks-cluster)
1. [Set Up RDS Configurations](#set-up-rds-configurations)
1. [Set Up Crossplane](#set-up-crossplane)
1. [Install Wordpress](#install-wordpress)
1. [Uninstall](#uninstall)
1. [Conclusion and Next Steps](#conclusion-and-next-steps)
## Pre-requisites
These tools are required to complete this guide. They must be installed on your
local machine.
* [AWS CLI][aws-cli]
* [kubectl][install-kubectl]
* [Helm][using-helm], minimum version `v2.10.0`
* [jq][jq-docs] - command-line JSON processor `v1.5+`
## Preparation
This guide assumes that you have set up and configured the AWS CLI.
*Note: these session variables are used throughout this guide. You may use the
values below or create your own.*
### Set Up an EKS Cluster
We will create an EKS cluster, following the steps provided in the [AWS documentation][aws-create-eks].
#### Create and Configure an EKS-compatible VPC
First, we will need to create a VPC and the related network resources. This can be done by creating a compatible **CloudFormation stack**, provided in the [EKS official documentation][sample-cf-stack]. This stack consumes a few parameters, which we will provide via the following variables:
```bash
# we give an arbitrary name to the stack.
# this name has to be unique within an aws account and region
EKS_STACK_NAME=crossplane-example
# any aws region that supports eks clusters
REGION=eu-west-1
# a sample stack that can be used to create an EKS
# compatible VPC and other network resources
STACK_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-vpc-sample.yaml
```
Once all these variables are set, create the stack:
```bash
# Generate and run a CloudFormation Stack and get the VPC, Subnet, and Security Group associated with it
aws cloudformation create-stack \
--template-url="${STACK_URL}" \
--region "${REGION}" \
--stack-name "${EKS_STACK_NAME}" \
--parameters \
ParameterKey=VpcBlock,ParameterValue=192.168.0.0/16 \
ParameterKey=Subnet01Block,ParameterValue=192.168.64.0/18 \
ParameterKey=Subnet02Block,ParameterValue=192.168.128.0/18 \
ParameterKey=Subnet03Block,ParameterValue=192.168.192.0/18
```
The output of this command will look like:
>```bash
>{
> "StackId": "arn:aws:cloudformation:eu-west-1:123456789012:stack/crossplane-example-workers/de1d4770-d9f0-11e9-a293-02673541b8d0"
>}
>```
Stack creation continues in the background and could take a few minutes to complete. You can check its status by running:
```bash
aws cloudformation describe-stacks --output json --stack-name ${EKS_STACK_NAME} --region $REGION | jq -r '.Stacks[0].StackStatus'
```
Once the output of the above command is `CREATE_COMPLETE`, the stack creation is complete, and you can retrieve some properties of the created resources. These properties will later be consumed by other resources.
```bash
VPC_ID=$(aws cloudformation describe-stacks --output json --stack-name ${EKS_STACK_NAME} --region ${REGION} | jq -r '.Stacks[0].Outputs[]|select(.OutputKey=="VpcId").OutputValue')
# comma separated list of Subnet IDs
SUBNET_IDS=$(aws cloudformation describe-stacks --output json --stack-name ${EKS_STACK_NAME} --region ${REGION} | jq -r '.Stacks[0].Outputs[]|select(.OutputKey=="SubnetIds").OutputValue')
# the ID of security group that later will be used for the EKS cluster
EKS_SECURITY_GROUP=$(aws cloudformation describe-stacks --output json --stack-name ${EKS_STACK_NAME} --region ${REGION} | jq -r '.Stacks[0].Outputs[]|select(.OutputKey=="SecurityGroups").OutputValue')
```
#### Create an IAM Role for the EKS cluster
For the EKS cluster to be able to access different resources, it needs to be granted the required permissions through an **IAM Role**. In this section we create a role and attach the required policies; later we will have EKS assume this role.
```bash
EKS_ROLE_NAME=crossplane-example-eks-role
# the policy that determines what principal can assume this role
ASSUME_POLICY='{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
}'
# Create a Role that can be assumed by EKS
aws iam create-role \
--role-name "${EKS_ROLE_NAME}" \
--region "${REGION}" \
--assume-role-policy-document "${ASSUME_POLICY}"
```
The output should be the created role in JSON. Next, you'll attach the required policies to this role:
```bash
# attach the required policies
aws iam attach-role-policy --role-name "${EKS_ROLE_NAME}" --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy > /dev/null
aws iam attach-role-policy --role-name "${EKS_ROLE_NAME}" --policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy > /dev/null
```
Now let's retrieve the **ARN** of this role and store it in a variable; we will later assign it to the EKS cluster.
```bash
EKS_ROLE_ARN=$(aws iam get-role --output json --role-name "${EKS_ROLE_NAME}" | jq -r .Role.Arn)
```
#### Creating the EKS cluster
In this step we create an EKS cluster:
```bash
CLUSTER_NAME=crossplane-example-cluster
K8S_VERSION=1.14
aws eks create-cluster \
--region "${REGION}" \
--name "${CLUSTER_NAME}" \
--kubernetes-version "${K8S_VERSION}" \
--role-arn "${EKS_ROLE_ARN}" \
--resources-vpc-config subnetIds="${SUBNET_IDS}",securityGroupIds="${EKS_SECURITY_GROUP}"
```
At this point, the EKS cluster should have started provisioning, which could take up to 15 minutes. You can check the status of the cluster by running:
```bash
aws eks describe-cluster --name "${CLUSTER_NAME}" --region "${REGION}" | jq -r .cluster.status
```
Once the provisioning is completed, the above command will return `ACTIVE`.
#### Configuring `kubectl` to communicate with the EKS cluster
Once the cluster is created and is `ACTIVE`, we configure `kubectl` to target this cluster:
```bash
# this environment variable tells kubectl what config file to use
# its value is an arbitrary name, which will be used to name
# the eks cluster configuration file
export KUBECONFIG=~/.kube/eks-config
# this command will populate the eks k8s config file
aws eks update-kubeconfig \
--name "${CLUSTER_NAME}"\
--region "${REGION}"\
--kubeconfig "${KUBECONFIG}"
```
The output will look like:
>```bash
>Added new context arn:aws:eks:eu-west-1:123456789012:cluster/crossplane-example-cluster to /path/to/.kube/eks-config
>```
At this point, `kubectl` should be configured to talk to the EKS cluster. To verify this, run:
```bash
kubectl cluster-info
```
Which should produce something like:
>```bash
>Kubernetes master is running at https://12E34567898A607F40B3C2FDDF42DC5.sk1.eu-west-1.eks.amazonaws.com
>```
#### Creating Worker Nodes for the EKS cluster
The worker nodes of an EKS cluster are not managed by the cluster itself. Instead, a set of EC2 instances, along with other resources and configurations, needs to be provisioned. In this section, we will create another CloudFormation stack for the worker node configuration, which is described in the [EKS official documentation][sample-workernodes-stack].
Before creating the stack, we will need to create a [Key Pair][aws-key-pair]. This key pair will be used to log into the worker nodes. Even though we don't need to log into the worker nodes for the purpose of this guide, creating the worker stack would fail without one.
```bash
# an arbitrary name for the keypair
KEY_PAIR=crossplane-example-kp
# we give an arbitrary name to the workers stack.
# this name has to be unique within an aws account and region
WORKERS_STACK_NAME=crossplane-example-workers
# the id for the AMI image that will be used for worker nodes
# there is a specific id for each region and k8s version
NODE_IMAGE_ID="ami-0497f6feb9d494baf"
# a sample stack that can be used to launch worker nodes
WORKERS_STACK_URL=https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-02-11/amazon-eks-nodegroup.yaml
# Generate a KeyPair. The output will be the RSA private key value
# which we do not need and can ignore
aws ec2 create-key-pair --key-name "${KEY_PAIR}" --region="${REGION}" > /dev/null
# create the workers stack
aws cloudformation create-stack \
--template-url="${WORKERS_STACK_URL}" \
--region "${REGION}" \
--stack-name "${WORKERS_STACK_NAME}" \
--capabilities CAPABILITY_IAM \
--parameters \
ParameterKey=ClusterName,ParameterValue="${CLUSTER_NAME}" \
ParameterKey=ClusterControlPlaneSecurityGroup,ParameterValue="${EKS_SECURITY_GROUP}" \
ParameterKey=NodeGroupName,ParameterValue="crossplane-eks-nodes" \
ParameterKey=NodeAutoScalingGroupMinSize,ParameterValue=1 \
ParameterKey=NodeAutoScalingGroupDesiredCapacity,ParameterValue=1 \
ParameterKey=NodeInstanceType,ParameterValue="m3.medium" \
ParameterKey=NodeImageId,ParameterValue="${NODE_IMAGE_ID}" \
ParameterKey=KeyName,ParameterValue="${KEY_PAIR}" \
ParameterKey=VpcId,ParameterValue="${VPC_ID}" \
ParameterKey=Subnets,ParameterValue="${SUBNET_IDS//,/\,}"
```
The output will look like:
>```bash
>{
> "StackId": "arn:aws:cloudformation:eu-west-1:123456789012:stack/aws-serivce-example-stack 5730d720-d9d9-11e9-8662-029e8e947a9c"
>}
>```
Similar to the VPC stack, creation of the workers stack continues in the background and could take a few minutes to complete. You can check its status by running:
```bash
aws cloudformation describe-stacks --output json --stack-name ${WORKERS_STACK_NAME} --region ${REGION} | jq -r '.Stacks[0].StackStatus'
```
Once the output of the above command is `CREATE_COMPLETE`, all worker nodes are created. Now we will need to tell EKS to let worker nodes with a certain role join the cluster. First let's retrieve the worker node role from the stack we just created, and then add that role to the **aws-auth** config map:
```bash
NODE_INSTANCE_ROLE=$(aws cloudformation describe-stacks --output json --stack-name ${WORKERS_STACK_NAME} --region ${REGION} | jq -r '.Stacks[0].Outputs[]|select(.OutputKey=="NodeInstanceRole").OutputValue')
cat > aws-auth-cm.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: ${NODE_INSTANCE_ROLE}
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF
```
Then, we will apply this config map to the EKS cluster:
```bash
kubectl apply -f aws-auth-cm.yaml
```
This should print the following output:
>```bash
> configmap/aws-auth created
>```
Now, you can watch the worker nodes (in this case only a single node) join the cluster:
```bash
kubectl get nodes
```
>```bash
> ip-192-168-104-194.eu-west-1.compute.internal NotReady <none> 8s v1.14.6-eks-5047ed
>```
After a few minutes, the `NotReady` status above should become `Ready`.
Congratulations! You have successfully set up and configured your EKS cluster!
### Set Up RDS Configurations
In AWS, an RDS database instance will be provisioned to satisfy the WordPress application's `MySQLInstance` claim. In order to make the RDS instance accessible by the EKS cluster:
1. A security group should be created and assigned to the RDS instance, so that certain traffic from the EKS cluster is allowed
```bash
# an arbitrary name for the security group
RDS_SG_NAME="crossplane-example-rds-sg"
# Generate the Security Group and add MySQL ingress to it
aws ec2 create-security-group \
--vpc-id="${VPC_ID}" \
--region="${REGION}" \
--group-name="${RDS_SG_NAME}" \
--description="open mysql access for crossplane-example cluster"
# retrieve the ID for this security group
RDS_SECURITY_GROUP_ID=$(aws ec2 describe-security-groups --filter Name=group-name,Values="${RDS_SG_NAME}" --region="${REGION}" --output=text --query="SecurityGroups[0].GroupId")
```
After creating the security group, we add a rule to allow traffic on the MySQL port:
```bash
aws ec2 authorize-security-group-ingress \
--group-id="${RDS_SECURITY_GROUP_ID}" \
--protocol=tcp \
--port=3306 \
--region="${REGION}" \
--cidr=0.0.0.0/0 > /dev/null
```
1. A **DB Subnet Group** needs to be created, so that the RDS instance can be associated with subnets in different availability zones
```bash
DB_SUBNET_GROUP_NAME=crossplane-example-dbsubnetgroup
# convert subnets to a whitespace-separated list
SUBNETS_LIST="${SUBNET_IDS//,/ }"
aws rds create-db-subnet-group \
--region="${REGION}" \
--db-subnet-group-name="${DB_SUBNET_GROUP_NAME}" \
--db-subnet-group-description="crossplane-example db subnet group" \
--subnet-ids $SUBNETS_LIST > /dev/null
```
These resources will later be used to create cloud-specific MySQL resources.
### Set Up Crossplane
Using the newly provisioned cluster:
1. Install Crossplane from the alpha channel. (See the [Crossplane Installation
Guide][crossplane-install] for more information.)
```bash
helm repo add crossplane-alpha https://charts.crossplane.io/alpha
helm install --name crossplane --namespace crossplane-system crossplane-alpha/crossplane
```
2. Install the AWS stack into Crossplane. (See the [AWS stack
section][aws-stack-install] of the install guide for more information.)
```yaml
cat > stack-aws.yaml <<EOF
apiVersion: v1
kind: Namespace
metadata:
  name: aws
---
apiVersion: stacks.crossplane.io/v1alpha1
kind: StackRequest
metadata:
  name: stack-aws
  namespace: crossplane-system
spec:
  package: "crossplane/stack-aws:master"
EOF
kubectl apply -f stack-aws.yaml
```
3. Obtain AWS credentials. (See the [Cloud Provider Credentials][cloud-creds]
docs for more information.)
#### Infrastructure Namespaces
Kubernetes namespaces allow for separation of environments within your cluster.
You may choose to use namespaces to group resources by team, application, or any
other logical distinction. For this guide, we will create a namespace called
`aws-infra-dev`, which we will use to group our AWS infrastructure
components.
* Define a `Namespace` in `aws-infra-dev-namespace.yaml` and create it:
```yaml
cat > aws-infra-dev-namespace.yaml <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
name: aws-infra-dev
EOF
kubectl apply -f aws-infra-dev-namespace.yaml
```
* You should see the following output:
> namespace/aws-infra-dev created
#### AWS Provider
It is essential to make sure that your AWS user credentials are configured
in Crossplane as a provider. Please follow the steps in the [provider
guide][aws-provider-guide] for more information.
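For reference, the `Provider` object you end up with typically looks like the following sketch. The secret name `aws-provider-creds` and the region are assumptions here, and exact field names may vary with your stack-aws version, so defer to the provider guide above:

```yaml
---
apiVersion: aws.crossplane.io/v1alpha2
kind: Provider
metadata:
  name: aws-provider
  namespace: crossplane-system
spec:
  credentialsSecretRef:
    name: aws-provider-creds  # hypothetical secret containing an AWS credentials file
    key: credentials
  region: us-east-1  # should match the REGION used when provisioning EKS and RDS
```

The `name: aws-provider` matters: the resource classes below reference the provider by that name.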
#### Cloud-Specific Resource Classes
Cloud-specific resource classes are used to define a reusable configuration for
a specific managed service. Wordpress requires a MySQL database, which can be
satisfied by an [AWS RDS][aws-rds] instance.
* Define an AWS RDS `RDSInstanceClass` in `aws-mysql-standard.yaml` and
create it:
```yaml
cat > aws-mysql-standard.yaml <<EOF
---
apiVersion: database.aws.crossplane.io/v1alpha2
kind: RDSInstanceClass
metadata:
name: aws-mysql-standard
namespace: aws-infra-dev
specTemplate:
class: db.t2.small
masterUsername: masteruser
securityGroups:
- "${RDS_SECURITY_GROUP_ID}"
  subnetGroupName: "${DB_SUBNET_GROUP_NAME}"
  size: 20
  engine: mysql
  providerRef:
    name: aws-provider
    namespace: crossplane-system
  reclaimPolicy: Delete
EOF
kubectl apply -f aws-mysql-standard.yaml
```
Note that we are using the `RDS_SECURITY_GROUP_ID` and `DB_SUBNET_GROUP_NAME` variables that we configured earlier.
* You should see the following output:
> rdsinstanceclass.database.aws.crossplane.io/aws-mysql-standard created
* You can verify creation with the following command and output:
```bash
$ kubectl get rdsinstanceclass -n aws-infra-dev
NAME PROVIDER-REF RECLAIM-POLICY AGE
aws-mysql-standard aws-provider Delete 11s
```
You are free to create more AWS `RDSInstanceClass` instances to define more
potential configurations. For instance, you may create `large-aws-rds` with
field `size: 100`.
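For instance, a sketch of such a larger class, reusing the networking variables from the setup section (the `db.t2.medium` instance class is an assumption; any valid RDS instance class works):

```yaml
cat > large-aws-rds.yaml <<EOF
---
apiVersion: database.aws.crossplane.io/v1alpha2
kind: RDSInstanceClass
metadata:
  name: large-aws-rds
  namespace: aws-infra-dev
specTemplate:
  class: db.t2.medium
  masterUsername: masteruser
  securityGroups:
    - "${RDS_SECURITY_GROUP_ID}"
  subnetGroupName: "${DB_SUBNET_GROUP_NAME}"
  size: 100
  engine: mysql
  providerRef:
    name: aws-provider
    namespace: crossplane-system
  reclaimPolicy: Delete
EOF
kubectl apply -f large-aws-rds.yaml
```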
#### Application Namespaces
Earlier, we created a namespace to group our AWS infrastructure resources.
Because our application resources may be satisfied by services from any cloud
provider, we want to separate them into their own namespace. For this demo, we
will create a namespace called `app-project1-dev`, which we will use to group
our Wordpress resources.
* Define a `Namespace` in `app-project1-dev-namespace.yaml` and create it:
```yaml
cat > app-project1-dev-namespace.yaml <<EOF
---
apiVersion: v1
kind: Namespace
metadata:
name: app-project1-dev
EOF
kubectl apply -f app-project1-dev-namespace.yaml
```
* You should see the following output:
> namespace/app-project1-dev created
#### Portable Resource Classes
Portable resource classes are used to define a class of service in a single
namespace for an abstract service type. We want to define our AWS
`RDSInstanceClass` as the standard MySQL class of service in the namespace that
our Wordpress resources will live in.
* Define a `MySQLInstanceClass` in `mysql-standard.yaml` for namespace
`app-project1-dev` and create it:
```yaml
cat > mysql-standard.yaml <<EOF
---
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstanceClass
metadata:
name: mysql-standard
namespace: app-project1-dev
classRef:
kind: RDSInstanceClass
apiVersion: database.aws.crossplane.io/v1alpha2
name: aws-mysql-standard
namespace: aws-infra-dev
EOF
kubectl apply -f mysql-standard.yaml
```
* You should see the following output:
> mysqlinstanceclass.database.crossplane.io/mysql-standard created
* You can verify creation with the following command and output:
```bash
$ kubectl get mysqlinstanceclasses -n app-project1-dev
NAME AGE
mysql-standard 27s
```
Once again, you are free to create more `MySQLInstanceClass` instances in this
namespace to define more classes of service. For instance, if you created
`large-aws-rds` above, you may want to create a `MySQLInstanceClass` named
`mysql-large` that references it. You may also choose to create MySQL resource
classes for other, non-AWS providers and reference them for a class of service
in the `app-project1-dev` namespace.
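As a sketch, such a `mysql-large` portable class (assuming you created the `large-aws-rds` cloud-specific class suggested earlier) would look like:

```yaml
cat > mysql-large.yaml <<EOF
---
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstanceClass
metadata:
  name: mysql-large
  namespace: app-project1-dev
classRef:
  kind: RDSInstanceClass
  apiVersion: database.aws.crossplane.io/v1alpha2
  name: large-aws-rds
  namespace: aws-infra-dev
EOF
kubectl apply -f mysql-large.yaml
```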
You may specify *one* instance of a portable class kind as *default* in each
namespace. This means that the portable resource class instance will be applied
to claims that do not directly reference a portable class. If we wanted to make
our `mysql-standard` instance the default `MySQLInstanceClass` for namespace
`app-project1-dev`, we could do so by adding a label:
```yaml
---
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstanceClass
metadata:
name: mysql-standard
namespace: app-project1-dev
labels:
default: "true"
classRef:
kind: RDSInstanceClass
apiVersion: database.aws.crossplane.io/v1alpha2
name: aws-mysql-standard
namespace: aws-infra-dev
```
#### Resource Claims
Resource claims are used to create external resources by referencing a class of
service in the claim's namespace. When a claim is created, Crossplane uses the
referenced portable class to find a cloud-specific resource class to use as the
configuration for the external resource. We need to create a claim to
provision the RDS database that Wordpress will use.
* Define a `MySQLInstance` claim in `mysql-claim.yaml` and create it:
```yaml
cat > mysql-claim.yaml <<EOF
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
name: mysql-claim
namespace: app-project1-dev
spec:
classRef:
name: mysql-standard
writeConnectionSecretToRef:
name: wordpressmysql
engineVersion: "5.6"
EOF
kubectl apply -f mysql-claim.yaml
```
We are waiting for the `STATUS` value to become `Bound`, which indicates that
the managed resource was successfully provisioned and is ready for consumption.
You can check whether the claim is bound using the following:
```bash
$ kubectl get mysqlinstances -n app-project1-dev
NAME STATUS CLASS VERSION AGE
mysql-claim Bound mysql-standard 5.6 11m
```
If the `STATUS` is blank, we are still waiting for the claim to become bound.
You can observe resource creation progression using the following:
```bash
$ kubectl describe mysqlinstance mysql-claim -n app-project1-dev
Name: mysql-claim
Namespace: app-project1-dev
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"database.crossplane.io/v1alpha1","kind":"MySQLInstance","metadata":{"annotations":{},"name":"mysql-claim","namespace":"team..."}}
API Version: database.crossplane.io/v1alpha1
Kind: MySQLInstance
Metadata:
Creation Timestamp: 2019-09-16T13:46:42Z
Finalizers:
finalizer.resourceclaim.crossplane.io
Generation: 2
Resource Version: 4256
Self Link: /apis/database.crossplane.io/v1alpha1/namespaces/app-project1-dev/mysqlinstances/mysql-claim
UID: 6a7fe064-d888-11e9-ab90-42b6bb22213a
Spec:
Class Ref:
Name: mysql-standard
Engine Version: 5.6
Resource Ref:
API Version: database.aws.crossplane.io/v1alpha2
Kind: RDSInstance
Name: mysqlinstance-6a7fe064-d888-11e9-ab90-42b6bb22213a
Namespace: aws-infra-dev
Write Connection Secret To Ref:
Name: wordpressmysql
Status:
Conditions:
Last Transition Time: 2019-09-16T13:46:42Z
Reason: Managed claim is waiting for managed resource to become bindable
Status: False
Type: Ready
Last Transition Time: 2019-09-16T13:46:42Z
Reason: Successfully reconciled managed resource
Status: True
Type: Synced
Events: <none>
```
*Note: You must wait until the claim becomes bound before continuing with this
guide. It could take up to a few minutes.*
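Rather than polling manually, you can block until the claim reports ready; a sketch using `kubectl wait` on the claim's `Ready` condition (assumes the kubeconfig context used throughout this guide):

```bash
# Wait up to 10 minutes for the claim's Ready condition to become True.
kubectl wait mysqlinstance/mysql-claim \
  --for=condition=Ready \
  --namespace=app-project1-dev \
  --timeout=10m
```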
We referenced our portable `MySQLInstanceClass` directly in the claim above,
but if you had specified that `mysql-standard` was the default
`MySQLInstanceClass` for namespace `app-project1-dev`, you could have omitted
the claim's `classRef` and it would have been assigned automatically:
```yaml
apiVersion: database.crossplane.io/v1alpha1
kind: MySQLInstance
metadata:
name: mysql-claim
namespace: app-project1-dev
spec:
writeConnectionSecretToRef:
name: wordpressmysql
engineVersion: "5.6"
```
## Install Wordpress
Installing Wordpress requires creating a Kubernetes `Deployment` and a load
balancer `Service`. The deployment passes the `wordpressmysql` secret that we
specified in our claim above to the Wordpress container as environment
variables. The secret should have been populated with our MySQL connection
details once the claim became `Bound`.
* Check to make sure `wordpressmysql` exists and is populated:
```bash
$ kubectl describe secret wordpressmysql -n app-project1-dev
Name: wordpressmysql
Namespace: app-project1-dev
Labels: <none>
Annotations: <none>
Type: Opaque
Data
====
endpoint: 75 bytes
password: 27 bytes
username: 58 bytes
```
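The values in the secret are base64-encoded. To inspect one of them, for example the endpoint, you can decode it directly (a sketch assuming GNU `base64`):

```bash
# Print the decoded RDS endpoint stored in the secret.
kubectl get secret wordpressmysql -n app-project1-dev \
  -o jsonpath='{.data.endpoint}' | base64 --decode; echo
```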
* Define the `Deployment` and `Service` in `wordpress-app.yaml` and create it:
```yaml
cat > wordpress-app.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
namespace: app-project1-dev
name: wordpress
labels:
app: wordpress
spec:
selector:
matchLabels:
app: wordpress
template:
metadata:
labels:
app: wordpress
spec:
containers:
- name: wordpress
image: wordpress:4.6.1-apache
env:
- name: WORDPRESS_DB_HOST
valueFrom:
secretKeyRef:
name: wordpressmysql
key: endpoint
- name: WORDPRESS_DB_USER
valueFrom:
secretKeyRef:
name: wordpressmysql
key: username
- name: WORDPRESS_DB_PASSWORD
valueFrom:
secretKeyRef:
name: wordpressmysql
key: password
ports:
- containerPort: 80
name: wordpress
---
apiVersion: v1
kind: Service
metadata:
namespace: app-project1-dev
name: wordpress
labels:
app: wordpress
spec:
ports:
- port: 80
selector:
app: wordpress
type: LoadBalancer
EOF
kubectl apply -f wordpress-app.yaml
```
* You can verify creation with the following command and output:
```bash
$ kubectl get -f wordpress-app.yaml
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/wordpress 1/1 1 1 11m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/wordpress LoadBalancer 10.0.128.30 52.168.69.6 80:32587/TCP 11m
```
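You can also read the external address directly; a sketch using `jsonpath` (on EKS the load balancer typically exposes a hostname rather than an IP, so the template below prints whichever field is set):

```bash
# Print the load balancer hostname (or IP) assigned to the wordpress service.
kubectl get service wordpress -n app-project1-dev \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{.status.loadBalancer.ingress[0].ip}'; echo
```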
If the `EXTERNAL-IP` field of the `LoadBalancer` is `<pending>`, wait until it
becomes available, then navigate to the address. You should see the following:
![alt wordpress](wordpress-start.png)
## Uninstall
### Wordpress
All Wordpress components that we installed can be deleted with one command:
```bash
kubectl delete -f wordpress-app.yaml
```
### Crossplane Configuration
To delete all created resources, but leave Crossplane and the AWS stack
running, execute the following commands:
```bash
kubectl delete -f mysql-claim.yaml
kubectl delete -f mysql-standard.yaml
kubectl delete -f aws-mysql-standard.yaml
kubectl delete -f app-project1-dev-namespace.yaml
kubectl delete -f aws-infra-dev-namespace.yaml
```
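The security group and DB subnet group that we created with the AWS CLI are not managed by Crossplane, so Crossplane will not clean them up. Once the RDS instance is gone, they can be removed with a sketch like the following (assumes the shell variables from the setup section are still set):

```bash
# Delete the DB subnet group, then the MySQL security group.
aws rds delete-db-subnet-group \
  --region="${REGION}" \
  --db-subnet-group-name="${DB_SUBNET_GROUP_NAME}"
aws ec2 delete-security-group \
  --region="${REGION}" \
  --group-id="${RDS_SECURITY_GROUP_ID}"
```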
## Conclusion and Next Steps
In this guide we:
* Set up an EKS Cluster using the AWS CLI
* Configured RDS to communicate with EKS
* Installed Crossplane from the alpha channel
* Installed the AWS stack
* Created an infrastructure (`aws-infra-dev`) and application
  (`app-project1-dev`) namespace
* Set up an AWS `Provider` with our account
* Created an `RDSInstanceClass` in the `aws-infra-dev` namespace using the RDS
  configuration from earlier
* Created a `MySQLInstanceClass` that specified the `RDSInstanceClass` as
`mysql-standard` in the `app-project1-dev` namespace
* Created a `MySQLInstance` claim in the `app-project1-dev` namespace that
  referenced `mysql-standard`
* Created a `Deployment` and `Service` to run Wordpress on our EKS Cluster and
assign an external IP address to it
If you would like to try out a similar workflow using a different cloud
provider, take a look at the other [services guides][services]. If you would
like to learn more about stacks, check out the [stacks guide][stacks].
<!-- Named links -->
[aws-create-eks]: https://docs.aws.amazon.com/eks/latest/userguide/create-cluster.html
[sample-cf-stack]: https://docs.aws.amazon.com/eks/latest/userguide/create-public-private-vpc.html
[sample-workernodes-stack]: https://docs.aws.amazon.com/eks/latest/userguide/launch-workers.html
[aws-cli]: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
[aws-key-pair]: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html
[install-kubectl]: https://kubernetes.io/docs/tasks/tools/install-kubectl/
[using-helm]: https://docs.helm.sh/using_helm/
[jq-docs]: https://stedolan.github.io/jq/
[crossplane-install]: ../install-crossplane.md#alpha
[aws-stack-install]: ../install-crossplane.md#aws-stack
[cloud-creds]: ../cloud-providers.md
[aws-provider-guide]: ../cloud-providers/aws/aws-provider.md
[aws-rds]: https://aws.amazon.com/rds/
[services]: ../services-guide.md
[stacks]: ../stacks-guide.md