# Getting Started with kops on Scaleway
> **WARNING**: Scaleway support on kOps is currently in **alpha**, which means that it is in the early stages of development and subject to change. Please use it with caution.
## Features
- Create, update and delete clusters
- Create, edit and delete instance groups
- Migrate from single to multi-master
## Coming soon
- Terraform support
- Private network
## Next features to implement

- `kops rolling-update`
- Autoscaler support
- BareMetal servers
## Requirements
- kops version >= 1.26 installed
- kubectl installed
- Scaleway credentials: you will need at least an access key, a secret key and a project ID.
- S3 bucket and its credentials: the bucket's credentials may differ from the ones used for provisioning the resources needed by the cluster. If you use a Scaleway bucket, you will need to prefix the bucket's name with `scw://` in the `KOPS_STATE_STORE` environment variable (a bucket-creation sketch follows this list). For more information about buckets, see the Scaleway Object Storage documentation.
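Since Scaleway Object Storage is S3-compatible, one way to create the state-store bucket is with the AWS CLI pointed at the Scaleway endpoint. This is a minimal sketch, assuming the AWS CLI is installed; the bucket name is illustrative:

```bash
# Assumption: AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY hold the bucket's credentials
# The bucket name "my-kops-state-store" is illustrative
aws s3 mb s3://my-kops-state-store --region fr-par --endpoint-url https://s3.fr-par.scw.cloud
```

You would then point kops at it with `KOPS_STATE_STORE=scw://my-kops-state-store`, as described below.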
### Optional
- SSH key: creating a cluster can be done without an SSH key, but one is required to update the cluster. `id_rsa` and `id_ed25519` keys are supported (a key-generation example follows this list).
- Domain name: if you want to host your cluster on your own domain, you will have to register it with Scaleway.
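If you do not have a supported key yet, one can be generated with `ssh-keygen`. A minimal sketch, assuming the default `~/.ssh/id_ed25519` path:

```bash
# Generate an ed25519 key pair at ~/.ssh/id_ed25519 (the comment string is illustrative)
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519 -C "kops-scaleway"
```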
## Environment Variables
It is important to set the following environment variables:
```bash
export SCW_ACCESS_KEY="my-access-key"
export SCW_SECRET_KEY="my-secret-key"
export SCW_DEFAULT_PROJECT_ID="my-project-id"
export SCW_DEFAULT_REGION="fr-par"
export SCW_DEFAULT_ZONE="fr-par-1"
# Configure the bucket name used to store the kops state
export KOPS_STATE_STORE=scw://<bucket-name> # where <bucket-name> is the name of the bucket you set earlier
# Scaleway Object Storage is S3-compatible, so we just override some S3 configurations to talk to our bucket
export S3_REGION=fr-par # or another Scaleway region providing Object Storage
export S3_ENDPOINT=s3.$S3_REGION.scw.cloud # define the provider endpoint
export S3_ACCESS_KEY_ID="my-access-key" # the S3 API access key for your bucket
export S3_SECRET_ACCESS_KEY="my-secret-key" # the S3 API secret key for your bucket
# This is required while Scaleway support is in alpha, since it is feature-gated
export KOPS_FEATURE_FLAGS="Scaleway"
```
> **Important**: Until the next release of protokube, you will have to export the following environment variable to make sure that you pull the version from master and not from the latest release, which contains an error.

```bash
export KOPS_BASE_URL="$(curl -s https://storage.googleapis.com/kops-ci/bin/latest-ci-updown-green.txt)"
```
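Before running kops, it can be worth checking that the state store is reachable with these credentials. A hypothetical sanity check, assuming the AWS CLI is installed and the `S3_*` variables above are exported:

```bash
# Strip the scw:// prefix from KOPS_STATE_STORE and list the bucket
# through the S3-compatible endpoint
AWS_ACCESS_KEY_ID=$S3_ACCESS_KEY_ID \
AWS_SECRET_ACCESS_KEY=$S3_SECRET_ACCESS_KEY \
aws s3 ls "s3://${KOPS_STATE_STORE#scw://}" --endpoint-url "https://$S3_ENDPOINT"
```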
## Creating a Single Master Cluster
Note that for now you can only create a kops cluster in a single availability zone (fr-par-1, fr-par-2, fr-par-3, nl-ams-1, nl-ams-2, pl-waw-1, pl-waw-2).
```bash
# The default cluster uses Ubuntu images on DEV1-M machines with Cilium as the Container Network Interface

# This creates a cluster with gossip DNS in zone fr-par-1
kops create cluster --cloud=scaleway --name=mycluster.k8s.local --zones=fr-par-1 --yes

# This creates a cluster with no DNS in zone nl-ams-2
kops create cluster --cloud=scaleway --name=my.cluster --zones=nl-ams-2 --yes

# This creates a cluster with Scaleway DNS (on a domain name that you own and have registered with Scaleway) in zone pl-waw-1
kops create cluster --cloud=scaleway --name=mycluster.mydomain.com --zones=pl-waw-1 --yes
```
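Once creation finishes, a typical way to confirm that the cluster is healthy and to get credentials for `kubectl` looks like the sketch below, using the cluster name chosen at creation time:

```bash
# Wait up to 10 minutes for the cluster to pass validation
kops validate cluster mycluster.k8s.local --wait 10m

# Write an admin kubeconfig, then check that the nodes are Ready
kops export kubeconfig mycluster.k8s.local --admin
kubectl get nodes
```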
## Editing your cluster
```bash
# Update a cluster
kops update cluster mycluster.k8s.local --yes

# Delete a cluster
kops delete cluster mycluster.k8s.local --yes
```
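Instance groups can be managed in the same way. The sketch below uses an illustrative group name; since `kops rolling-update` is not implemented for Scaleway yet (see the list above), changes are applied with `kops update` only:

```bash
# List the cluster's instance groups
kops get instancegroups --name mycluster.k8s.local

# Edit one of them (e.g. machine type or node count), then apply the change
kops edit instancegroup nodes-fr-par-1 --name mycluster.k8s.local
kops update cluster mycluster.k8s.local --yes
```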
## Next steps
Now that you have a working kops cluster, read through the recommendations for production setups guide to learn more about how to configure kops for production workloads.