mirror of https://github.com/crossplane/docs.git
docs snapshot for crossplane version `master`
---
toc: true
weight: 330
indent: true
---
# Adding Your Cloud Providers

For Crossplane to manage resources across all your clouds, you need to add your cloud provider credentials to Crossplane.
Use the links below for specific instructions to add each of the following cloud providers:
* [Google Cloud Platform (GCP)](cloud-providers/gcp/gcp-provider.md)
* [Microsoft Azure](cloud-providers/azure/azure-provider.md)
* [Amazon Web Services (AWS)](cloud-providers/aws/aws-provider.md)
## Examining Cloud Provider Configuration

When Crossplane is installed, you can list all of the available providers.

```console
$ kubectl api-resources | grep providers.*crossplane | awk '{print $2}'
aws.crossplane.io
azure.crossplane.io
gcp.crossplane.io
```

After credentials have been put in place for some of the cloud providers, you can list those configurations.

```console
$ kubectl -n crossplane-system get providers.gcp.crossplane.io
NAME           PROJECT-ID                 AGE
gcp-provider   crossplane-example-10412   22h

$ kubectl -n crossplane-system get providers.aws.crossplane.io
NAME           REGION      AGE
aws-provider   eu-west-1   22h

$ kubectl -n crossplane-system get providers.azure.crossplane.io
NAME             AGE
azure-provider   22h
```
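The three listings above can also be driven by one small loop. This is a sketch, assuming the `crossplane-system` namespace used throughout these docs; errors for providers that are not configured are suppressed so the loop always completes:

```shell
# Query each Crossplane cloud API group in turn.
clouds="gcp aws azure"
for cloud in $clouds; do
  echo "--- providers.${cloud}.crossplane.io ---"
  kubectl -n crossplane-system get "providers.${cloud}.crossplane.io" 2>/dev/null || true
done
```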

In this guide, we will walk through the steps necessary to configure your AWS account.

If you have already installed and configured the [`aws` command line tool](https://aws.amazon.com/cli/), you can simply find your AWS credentials file in `~/.aws/credentials`.
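The examples below pass this credentials file around as a single-line base64 string. A minimal sketch of the round trip, using an inline made-up sample rather than a real credentials file:

```shell
# Encode a sample credentials file the way the examples encode
# ~/.aws/credentials, stripping newlines so the result is one line.
sample='[default]
aws_access_key_id = AKIAEXAMPLEKEY
aws_secret_access_key = examplesecret'
encoded=$(printf '%s' "$sample" | base64 | tr -d '\n')
echo "$encoded"

# Decoding restores the original text exactly.
printf '%s' "$encoded" | base64 -d
```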

#### Using `aws-credentials.sh`

In the `cluster/examples` directory you will find a helper script, `aws-credentials.sh`. This script will use the `aws` CLI to create the necessary AWS components for the Crossplane examples. Running the final output of this command will configure your Crossplane AWS provider, RDS, and EKS resource classes.

```console
$ ./cluster/examples/aws-credentials.sh
Waiting for 'CREATE_COMPLETE' from Cloudformation Stack crossplane-example-stack-25077.......................
#
# Run the following for the variables that are used throughout the AWS example projects
#
export BASE64ENCODED_AWS_PROVIDER_CREDS=$(base64 ~/.aws/credentials | tr -d "\n")
export EKS_WORKER_KEY_NAME=crossplane-example-25077
export EKS_ROLE_ARN=arn:aws:iam::987654321234:role/crossplane-example-role-25077
export REGION=eu-west-1
export EKS_VPC=vpc-085444e4ce26b55e8
export EKS_SUBNETS=subnet-08ad61800a39c537a,subnet-0d05d23815bed79be,subnet-07adcb08485e186fc
export EKS_SECURITY_GROUP=sg-09aaba94fe7050cf8
export RDS_SUBNET_GROUP_NAME=crossplane-example-db-subnet-group-25077
export RDS_SECURITY_GROUP=sg-0b586dbd763fb35ad

#
# For example, to use this environment as an AWS Crossplane provider:
#
sed -e "s|BASE64ENCODED_AWS_PROVIDER_CREDS|$(base64 ~/.aws/credentials | tr -d "\n")|g" \
    -e "s|EKS_WORKER_KEY_NAME|$EKS_WORKER_KEY_NAME|g" \
    -e "s|EKS_ROLE_ARN|$EKS_ROLE_ARN|g" \
    -e "s|REGION|$REGION|g" \
    -e "s|EKS_VPC|$EKS_VPC|g" \
    -e "s|EKS_SUBNETS|$EKS_SUBNETS|g" \
    -e "s|EKS_SECURITY_GROUP|$EKS_SECURITY_GROUP|g" \
    -e "s|RDS_SUBNET_GROUP_NAME|$RDS_SUBNET_GROUP_NAME|g" \
    -e "s|RDS_SECURITY_GROUP|$RDS_SECURITY_GROUP|g" \
    cluster/examples/workloads/kubernetes/wordpress-aws/provider.yaml | kubectl apply -f -

# Clean up after this script by deleting everything it created:
# ./cluster/examples/aws-credentials.sh delete 25077
```

After running `aws-credentials.sh`, a series of `export` commands will be shown. Copy and paste the `export` commands that are provided. These variable names will be referenced throughout the Crossplane examples, generally with a `sed` command.
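Before running any of those `sed` commands, it can help to verify that each variable is actually set in your current shell. A minimal sketch (the list mirrors the variable names printed by `aws-credentials.sh`):

```shell
# Report which of the expected environment variables are empty or unset.
vars="EKS_WORKER_KEY_NAME EKS_ROLE_ARN REGION EKS_VPC EKS_SUBNETS EKS_SECURITY_GROUP RDS_SUBNET_GROUP_NAME RDS_SECURITY_GROUP"
missing=""
for v in $vars; do
  eval "val=\${$v:-}"   # indirect lookup, safe even if the variable is unset
  if [ -z "$val" ]; then
    missing="$missing $v"
  fi
done
if [ -n "$missing" ]; then
  echo "Not set:$missing"
else
  echo "All provider variables are set."
fi
```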

You will also see a `sed` command. This command will configure the AWS Crossplane provider using the environment that was created by the `aws-credentials.sh` script.
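The substitution pattern itself is easy to try in isolation. A minimal sketch using a hypothetical one-line template rather than the real `provider.yaml`:

```shell
# Replace a placeholder token, exactly as the larger command rewrites
# tokens like REGION inside provider.yaml.
REGION=eu-west-1                      # example value
tmpl='region: REGION'                 # hypothetical one-line template
printf '%s\n' "$tmpl" | sed -e "s|REGION|$REGION|g"
# prints: region: eu-west-1
```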

When you are done with the examples and have deleted all of the project's AWS artifacts, you can use the delete command provided by the `aws-credentials.sh` script to remove the CloudFormation stack, VPC, and other artifacts created by the script.

*Note* The AWS artifacts should be removed first using Kubernetes commands (`kubectl delete ...`) as each example will explain.

### Option 2: AWS Console in Web Browser

If you do not have the `aws` tool installed, you can alternatively log into the [AWS console](https://aws.amazon.com/console/) and export the credentials.

You should also have an AWS credentials file at `~/.aws/credentials` already on your local filesystem.

This section covers tasks performed by the cluster or cloud administrator. These include:

* Importing AWS provider credentials
* Defining resource classes for cluster and database resources
* Creating all EKS pre-requisite artifacts
* Creating a target EKS cluster (using dynamic provisioning with the cluster resource class)

> Note: All artifacts created by the administrator are stored/hosted in the `crossplane-system` namespace, which has
> restricted access, i.e. `Application Owner(s)` should not have access to them.

A number of artifacts and configurations need to be set up within the AWS console.
We anticipate that AWS will make improvements on this user experience in the near future.

#### Create a named keypair

1. Use an existing EC2 key pair or create a new key pair by following [these steps](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html)
1. Export the key pair name as the `EKS_WORKER_KEY_NAME` environment variable

```console
export EKS_WORKER_KEY_NAME=replace-with-key-name
```
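Optionally, you can confirm the key pair actually exists before EKS worker nodes are provisioned. This is a sketch that guards on the `aws` CLI being available; the key pair name shown is a placeholder:

```shell
# Look up the named key pair; an unknown name produces an error message.
EKS_WORKER_KEY_NAME=replace-with-key-name
if command -v aws >/dev/null 2>&1; then
  aws ec2 describe-key-pairs --key-names "$EKS_WORKER_KEY_NAME" \
    || echo "key pair not found: $EKS_WORKER_KEY_NAME"
else
  echo "aws CLI not found; skipping key pair check"
fi
```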
1. Choose Next: Review.
1. For the Role name, enter a unique name for your role, such as eksServiceRole, then choose Create role.
1. Set the `EKS_ROLE_ARN` environment variable to your role's ARN

```console
export EKS_ROLE_ARN=replace-with-full-role-arn
```
> * US West (Oregon) (us-west-2)
> * US East (N. Virginia) (us-east-1)
> * EU (Ireland) (eu-west-1)

1. Set the `REGION` environment variable to your region

```console
export REGION=replace-with-region
```
1. Choose Create stack.
1. For Choose a template, select Specify an Amazon S3 template URL.
1. Paste the following URL into the text area and choose Next.

```text
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-11-07/amazon-eks-vpc-sample.yaml
```
1. On the Specify Details page, fill out the parameters accordingly, and choose Next.

```console
* Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can call it eks-vpc.
* VpcBlock: Choose a CIDR range for your VPC. You may leave the default value.
* Subnet01Block: Choose a CIDR range for subnet 1. You may leave the default value.
* Subnet02Block: Choose a CIDR range for subnet 2. You may leave the default value.
* Subnet03Block: Choose a CIDR range for subnet 3. You may leave the default value.
```

1. (Optional) On the Options page, tag your stack resources and choose Next.
1. On the Review page, choose Create.
1. When your stack is created, select it in the console and choose Outputs.
1. Using values from the outputs, export the following environment variables.

```console
export EKS_VPC=replace-with-eks-vpcId
export EKS_SUBNETS=replace-with-eks-subnetIds01,replace-with-eks-subnetIds02,replace-with-eks-subnetIds03
```
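Note that `EKS_SUBNETS` holds a single comma-separated value. A sketch of splitting it back into individual subnet IDs (the IDs below are made-up placeholders):

```shell
# Split the comma-separated subnet list into one ID per line.
EKS_SUBNETS=subnet-aaa111,subnet-bbb222,subnet-ccc333   # placeholder IDs
(
  IFS=','   # split on commas inside this subshell only
  for subnet in $EKS_SUBNETS; do
    echo "subnet: $subnet"
  done
)
```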

#### Create an RDS subnet group

1. Navigate to the AWS console in the same region as the EKS cluster
1. Navigate to the `RDS` service
1. Navigate to `Subnet groups` in the left hand pane
1. Click `Add all subnets related to this VPC`
1. Click Create
1. Export the DB subnet group name

```console
export RDS_SUBNET_GROUP_NAME=replace-with-DBSubnetgroup-name
```

This is for **EXAMPLE PURPOSES ONLY** and is **NOT RECOMMENDED** for production.

1. Give it a description
1. Select the same VPC as the EKS cluster.
1. On the Inbound Rules tab, choose Edit.
   * For Type, choose `MYSQL/Aurora`
   * For Port Range, type `3306`
   * For Source, choose `Anywhere` from drop down or type: `0.0.0.0/0`
1. Choose Add another rule if you need to add more IP addresses or different port ranges.
1. Click: Create
1. Export the security group id

```console
export RDS_SECURITY_GROUP=replace-with-security-group-id
```

### Deploy all Workload Resources

Now deploy all the workload resources, including the RDS database and EKS cluster, with the following commands:

Create provider:

```console
sed -e "s|BASE64ENCODED_AWS_PROVIDER_CREDS|`base64 ~/.aws/credentials|tr -d '\n'`|g;s|EKS_WORKER_KEY_NAME|$EKS_WORKER_KEY_NAME|g;s|EKS_ROLE_ARN|$EKS_ROLE_ARN|g;s|REGION|$REGION|g;s|EKS_VPC|$EKS_VPC|g;s|EKS_SUBNETS|$EKS_SUBNETS|g;s|EKS_SECURITY_GROUP|$EKS_SECURITY_GROUP|g;s|RDS_SUBNET_GROUP_NAME|$RDS_SUBNET_GROUP_NAME|g;s|RDS_SECURITY_GROUP|$RDS_SECURITY_GROUP|g" cluster/examples/workloads/kubernetes/wordpress-aws/provider.yaml | kubectl create -f -
```

Create cluster:

```console
kubectl create -f cluster/examples/workloads/kubernetes/wordpress-aws/cluster.yaml
```

It will take a while (~15 minutes) for the EKS cluster to be deployed and become available.
You can keep an eye on its status with the following command:

```console
kubectl -n crossplane-system get ekscluster
```

Once the cluster is done provisioning, you should see output similar to the following
> Note: the `STATE` field is `ACTIVE` and the `ENDPOINT` field has a value:

```console
NAME                                       STATUS   STATE    CLUSTER-NAME   ENDPOINT                                                                  CLUSTER-CLASS      LOCATION   RECLAIM-POLICY   AGE
eks-825c1234-9697-11e9-8b05-080027550c17   Bound    ACTIVE                  https://6A7981620931E720CE162F751C158A78.yl4.eu-west-1.eks.amazonaws.com   standard-cluster              Delete           51m
```

## Application Developer Tasks

This section covers tasks performed by an application developer. These include:

* Defining a Workload in terms of Resources and Payload (Deployment/Service) which will be deployed into the target Kubernetes Cluster
* Defining the resource's dependency requirements, in this case a `MySQL` database

Now that the EKS cluster is ready, let's begin deploying the workload as the application developer:

```console
kubectl create -f cluster/examples/workloads/kubernetes/wordpress-aws/app.yaml
```

This will also take a while to complete, since the MySQL database needs to be deployed before the WordPress pod can consume it.
You can follow along with the MySQL database deployment with the following:

```console
kubectl -n crossplane-system get rdsinstance
```

Once the `STATE` column is `available` as seen below, the WordPress pod should be able to connect to it:

```console
NAME                                         STATUS   STATE       CLASS            VERSION   AGE
mysql-3f902b48-974f-11e9-8b05-080027550c17   Bound    available   standard-mysql   5.7       15m
```

As an administrator, we can examine the cluster directly.

```console
$ CLUSTER=eks-$(kubectl get kubernetesclusters.compute.crossplane.io -n complex -o=jsonpath='{.items[0].spec.resourceName.uid}')
$ KUBECONFIG=/tmp/$CLUSTER aws eks update-kubeconfig --name=$CLUSTER --region=$REGION
$ KUBECONFIG=/tmp/$CLUSTER kubectl get all -lapp=wordpress -A
NAMESPACE   NAME                             READY   STATUS    RESTARTS   AGE
wordpress   pod/wordpress-8545774bcf-8xj8j   1/1     Running   0          13m

NAMESPACE   NAME                TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
wordpress   service/wordpress   LoadBalancer   10.100.201.94   a4631fbfa974f11e9932a060b5ad3abc-1542130681.eu-west-1.elb.amazonaws.com   80:31832/TCP   13m

NAMESPACE   NAME                        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
wordpress   deployment.apps/wordpress   1         1         1            1           13m

NAMESPACE   NAME                                   DESIRED   CURRENT   READY   AGE
wordpress   replicaset.apps/wordpress-8545774bcf   1         1         1       13m

$ rm /tmp/$CLUSTER
```

Continuing as the application developer, we can watch the WordPress pod come online and see a public IP address get assigned to it:

```console
kubectl -n complex get kubernetesapplication,kubernetesapplicationresources
```

When a public IP address has been assigned, you'll see output similar to the following:

```console
NAME                                                              CLUSTER                  STATUS               DESIRED   SUBMITTED
kubernetesapplication.workload.crossplane.io/wordpress-demo       wordpress-demo-cluster   PartiallySubmitted   3         2

NAME                                                                              TEMPLATE-KIND   TEMPLATE-NAME   CLUSTER                  STATUS
kubernetesapplicationresource.workload.crossplane.io/wordpress-demo-deployment    Deployment      wordpress       wordpress-demo-cluster   Submitted
kubernetesapplicationresource.workload.crossplane.io/wordpress-demo-namespace     Namespace       wordpress       wordpress-demo-cluster   Submitted
kubernetesapplicationresource.workload.crossplane.io/wordpress-demo-service       Service         wordpress       wordpress-demo-cluster   Failed
```

*Note* A `Failed` status on the Service may be attributable to issues [#428](https://github.com/crossplaneio/crossplane/issues/428) and [#504](https://github.com/crossplaneio/crossplane/issues/504). The service should be running despite this status.

Once WordPress is running and has a public IP address through its service, we can get the URL with the following command:

```console
echo "http://$(kubectl get kubernetesapplicationresource.workload.crossplane.io/wordpress-demo-service -n complex -o jsonpath='{.status.remote.loadBalancer.ingress[0].hostname}')"
```
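The command above simply wraps the service's load balancer hostname in an `http://` URL. A sketch with the sample hostname from the administrator output earlier (substitute your own):

```shell
# Build the WordPress URL from a load balancer hostname.
lb_host="a4631fbfa974f11e9932a060b5ad3abc-1542130681.eu-west-1.elb.amazonaws.com"
url="http://${lb_host}"
echo "$url"
```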

Paste that URL into your browser and you should see WordPress running and ready for you to walk through its setup experience. You may need to wait a few minutes for this to become accessible via the AWS load balancer.
## Connecting to your EKSCluster (optional)

Requires:

* awscli
* aws-iam-authenticator

Please see the [install instructions](https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html), section `Install and Configure kubectl for Amazon EKS`.

When the EKSCluster is up and running, you can update your kubeconfig with:

```console
aws eks update-kubeconfig --name <replace-me-eks-cluster-name>
```

The node pool is created after the master is up, so expect to wait a few more minutes; eventually, you can see that nodes have joined with:

```console
kubectl config use-context <context-from-last-command>
kubectl get nodes
```

First delete the workload, which will delete WordPress and the MySQL database:

```console
kubectl delete -f cluster/examples/workloads/kubernetes/wordpress-aws/app.yaml
```

Then delete the EKS cluster:

```console
kubectl delete -f cluster/examples/workloads/kubernetes/wordpress-aws/cluster.yaml
```

Finally, delete the provider credentials:

```console
kubectl delete -f cluster/examples/workloads/kubernetes/wordpress-aws/provider.yaml
```

> Note: There may still be an ELB that was not properly cleaned up, and you will need to delete it manually.