cli-utils/examples/alphaTestExamples/pruneAndDelete.md

Demo: Lifecycle directives

This demo shows how lifecycle directives can be used to change the behavior of prune and delete for specific resources.

First define a place to work:

DEMO_HOME=$(mktemp -d)

Alternatively, use

DEMO_HOME=~/hello

Establish the base

BASE=$DEMO_HOME/base
mkdir -p $BASE
OUTPUT=$DEMO_HOME/output
mkdir -p $OUTPUT

function expectedOutputLine() {
  test 1 == \
  $(grep "$@" $OUTPUT/status | wc -l); \
  echo $?
}
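
A later step checks the destroy output with expectedNotFound, which is not defined in this file. A minimal definition, assumed here by analogy with expectedOutputLine (it asserts that zero lines in $OUTPUT/status match the pattern), would be:

function expectedNotFound() {
  # expect no line in the captured output to match the given pattern
  test 0 == \
  $(grep "$@" $OUTPUT/status | wc -l); \
  echo $?
}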

In this example we will just use three ConfigMap resources for simplicity, but of course any type of resource can be used.

  • the first ConfigMap resource does not have any annotations;
  • the second ConfigMap resource has the cli-utils.sigs.k8s.io/on-remove annotation with the value of keep;
  • the third ConfigMap resource has the client.lifecycle.config.k8s.io/deletion annotation with the value of detach.

These two annotations tell the kapply tool that a resource should not be deleted, even if it would otherwise be pruned or deleted with the destroy command.

cat <<EOF >$BASE/configMap1.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: firstmap
data:
  artist: Ornette Coleman
  album: The shape of jazz to come
EOF

This ConfigMap includes the cli-utils.sigs.k8s.io/on-remove annotation:

cat <<EOF >$BASE/configMap2.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: secondmap
  annotations:
    cli-utils.sigs.k8s.io/on-remove: keep
data:
  artist: Husker Du
  album: New Day Rising
EOF

This ConfigMap includes the client.lifecycle.config.k8s.io/deletion annotation:

cat <<EOF >$BASE/configMap3.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: thirdmap
  annotations:
    client.lifecycle.config.k8s.io/deletion: detach
data:
  artist: Husker Du
  album: New Day Rising
EOF
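
Before applying, you can optionally confirm that the lifecycle annotations are present in the manifests; this quick grep is not part of the original test flow:

# show the annotation lines from the two annotated manifests
grep -A 1 "annotations:" $BASE/configMap2.yaml $BASE/configMap3.yaml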

Run end-to-end tests

The following requires installation of kind.

Delete any existing kind cluster and create a new one. By default the name of the cluster is "kind".

kind delete cluster
kind create cluster

Use the kapply init command to generate the inventory template. This contains the namespace and inventory id used by apply to create inventory objects.

kapply init $BASE > $OUTPUT/status
expectedOutputLine "namespace: default is used for inventory object"

Apply the three resources to the cluster.

kapply apply $BASE --reconcile-timeout=1m > $OUTPUT/status

Use the preview command to show what would happen if we run destroy. It should report that secondmap and thirdmap will not be deleted, even with the destroy command.

kapply preview --destroy $BASE > $OUTPUT/status

expectedOutputLine "configmap/firstmap deleted (preview)"

expectedOutputLine "configmap/secondmap delete skipped (preview)"

expectedOutputLine "configmap/thirdmap delete skipped (preview)"

Run the destroy command and verify that the resource without a lifecycle annotation (firstmap) has been deleted, while the annotated resources (secondmap and thirdmap) remain in the cluster.

kapply destroy $BASE > $OUTPUT/status

expectedOutputLine "configmap/firstmap deleted"

expectedOutputLine "configmap/secondmap delete skipped"

expectedOutputLine "configmap/thirdmap delete skipped"

expectedOutputLine "1 resource(s) deleted, 2 skipped"
expectedNotFound "resource(s) pruned"

kubectl get cm --no-headers | awk '{print $1}' > $OUTPUT/status
expectedOutputLine "secondmap"

kubectl get cm --no-headers | awk '{print $1}' > $OUTPUT/status
expectedOutputLine "thirdmap"

Apply the resources back to the cluster so we can demonstrate the lifecycle directive with pruning. Because secondmap and thirdmap were left behind by the destroy, --inventory-policy=adopt is used so kapply can take ownership of them again.

kapply apply $BASE --inventory-policy=adopt --reconcile-timeout=1m > $OUTPUT/status

Delete the manifests for secondmap and thirdmap.

rm $BASE/configMap2.yaml

rm $BASE/configMap3.yaml

Run preview to see that while secondmap and thirdmap would normally be pruned, they will instead be skipped due to the lifecycle directive.

kapply preview $BASE > $OUTPUT/status

expectedOutputLine "configmap/secondmap prune skipped (preview)"

expectedOutputLine "configmap/thirdmap prune skipped (preview)"

Run apply and verify that secondmap and thirdmap are still in the cluster.

kapply apply $BASE > $OUTPUT/status

expectedOutputLine "configmap/secondmap prune skipped"

expectedOutputLine "configmap/thirdmap prune skipped"

kubectl get cm --no-headers | awk '{print $1}' > $OUTPUT/status
expectedOutputLine "secondmap"

kubectl get cm --no-headers | awk '{print $1}' > $OUTPUT/status
expectedOutputLine "thirdmap"

kind delete cluster;