# v0.10 to v1.0

Follow the [Regular Upgrading Process](./README.md).

## Upgrading Notable Changes

### Introduced `karmada-aggregated-apiserver` component

In releases before v1.0.0, we used a CRD to extend the [Cluster API](https://github.com/karmada-io/karmada/tree/24f586062e0cd7c9d8e6911e52ce399106f489aa/pkg/apis/cluster); starting with v1.0.0, we use [API Aggregation](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/) (AA) to extend it.

Based on this change, perform the following operations during the upgrade:

#### Step 1: Stop `karmada-apiserver`

You can stop `karmada-apiserver` by scaling its replicas to `0`.

#### Step 2: Remove Cluster CRD from ETCD

Remove the `Cluster CRD` from ETCD directly by running the following command.

```
etcdctl --cert="/etc/kubernetes/pki/etcd/karmada.crt" \
--key="/etc/kubernetes/pki/etcd/karmada.key" \
--cacert="/etc/kubernetes/pki/etcd/server-ca.crt" \
del /registry/apiextensions.k8s.io/customresourcedefinitions/clusters.cluster.karmada.io
```

> Note: This command only removes the `CRD` resource; all the `CR`s (Cluster objects) are left unchanged.
> That is why we don't remove the CRD through `karmada-apiserver`.

#### Step 3: Prepare the certificate for the `karmada-aggregated-apiserver`

To avoid [CA Reusage and Conflicts](https://kubernetes.io/docs/tasks/extend-kubernetes/configure-aggregation-layer/#ca-reusage-and-conflicts), create a CA signer and sign a certificate to enable the aggregation layer.

Update the `karmada-cert-secret` secret in the `karmada-system` namespace:

```diff
apiVersion: v1
kind: Secret
metadata:
  name: karmada-cert-secret
  namespace: karmada-system
type: Opaque
data:
  ...
+ front-proxy-ca.crt: |
+   {{front_proxy_ca_crt}}
+ front-proxy-client.crt: |
+   {{front_proxy_client_crt}}
+ front-proxy-client.key: |
+   {{front_proxy_client_key}}
```

Then update the `karmada-apiserver` deployment's container command:

```diff
- - --proxy-client-cert-file=/etc/kubernetes/pki/karmada.crt
- - --proxy-client-key-file=/etc/kubernetes/pki/karmada.key
+ - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
+ - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
- - --requestheader-client-ca-file=/etc/kubernetes/pki/server-ca.crt
+ - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
```

After the update, restore the replicas of the `karmada-apiserver` instances.

#### Step 4: Deploy `karmada-aggregated-apiserver`

Deploy a `karmada-aggregated-apiserver` instance to your `host cluster` with the following manifests:
```yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  selector:
    matchLabels:
      app: karmada-aggregated-apiserver
      apiserver: "true"
  replicas: 1
  template:
    metadata:
      labels:
        app: karmada-aggregated-apiserver
        apiserver: "true"
    spec:
      automountServiceAccountToken: false
      containers:
        - name: karmada-aggregated-apiserver
          image: swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver:v1.0.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: k8s-certs
              mountPath: /etc/kubernetes/pki
              readOnly: true
            - name: kubeconfig
              subPath: kubeconfig
              mountPath: /etc/kubeconfig
          command:
            - /bin/karmada-aggregated-apiserver
            - --kubeconfig=/etc/kubeconfig
            - --authentication-kubeconfig=/etc/kubeconfig
            - --authorization-kubeconfig=/etc/kubeconfig
            - --karmada-config=/etc/kubeconfig
            - --etcd-servers=https://etcd-client.karmada-system.svc.cluster.local:2379
            - --etcd-cafile=/etc/kubernetes/pki/server-ca.crt
            - --etcd-certfile=/etc/kubernetes/pki/karmada.crt
            - --etcd-keyfile=/etc/kubernetes/pki/karmada.key
            - --tls-cert-file=/etc/kubernetes/pki/karmada.crt
            - --tls-private-key-file=/etc/kubernetes/pki/karmada.key
            - --audit-log-path=-
            - --feature-gates=APIPriorityAndFairness=false
            - --audit-log-maxage=0
            - --audit-log-maxbackup=0
          resources:
            requests:
              cpu: 100m
      volumes:
        - name: k8s-certs
          secret:
            secretName: karmada-cert-secret
        - name: kubeconfig
          secret:
            secretName: kubeconfig
---
apiVersion: v1
kind: Service
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  ports:
    - port: 443
      protocol: TCP
      targetPort: 443
  selector:
    app: karmada-aggregated-apiserver
```
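For reference, here is a minimal sketch of applying these manifests and waiting for the new component to become ready. It assumes the manifests above are saved as `karmada-aggregated-apiserver.yaml` and that your kubeconfig context for the host cluster is named `host-cluster`; adjust both to your environment.

```bash
# Apply the Deployment and Service to the host cluster
# ("host-cluster" is a placeholder context name).
kubectl --context host-cluster apply -f karmada-aggregated-apiserver.yaml

# Wait for the aggregated apiserver to roll out before continuing.
kubectl --context host-cluster -n karmada-system rollout status deployment/karmada-aggregated-apiserver

# Confirm the pod is Running.
kubectl --context host-cluster -n karmada-system get pods -l app=karmada-aggregated-apiserver
```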
Then, deploy the `APIService` to `karmada-apiserver` with the following manifests.
```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.cluster.karmada.io
  labels:
    app: karmada-aggregated-apiserver
    apiserver: "true"
spec:
  insecureSkipTLSVerify: true
  group: cluster.karmada.io
  groupPriorityMinimum: 2000
  service:
    name: karmada-aggregated-apiserver
    namespace: karmada-system
  version: v1alpha1
  versionPriority: 10
---
apiVersion: v1
kind: Service
metadata:
  name: karmada-aggregated-apiserver
  namespace: karmada-system
spec:
  type: ExternalName
  externalName: karmada-aggregated-apiserver.karmada-system.svc.cluster.local
```
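As a quick sanity check (a sketch, assuming your kubeconfig context for `karmada-apiserver` is named `karmada-apiserver`), you can confirm the `APIService` reports `Available` before moving on:

```bash
# Run against karmada-apiserver; the context name is an assumption.
# The AVAILABLE column should show True once the aggregated apiserver is reachable.
kubectl --context karmada-apiserver get apiservice v1alpha1.cluster.karmada.io
```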
#### Step 5: Check clusters status

If everything goes well, you can see all your clusters just as they were before the upgrade.

```bash
kubectl get clusters
```

### `karmada-agent` requires an extra `impersonate` verb

In order to proxy users' requests, the `karmada-agent` now requires an extra `impersonate` verb. Please check the `ClusterRole` configuration or apply the following manifest.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: karmada-agent
rules:
  - apiGroups: ['*']
    resources: ['*']
    verbs: ['*']
  - nonResourceURLs: ['*']
    verbs: ["get"]
```

### MCS feature now supports `Kubernetes v1.21+`

Since `discovery.k8s.io/v1beta1` of `EndpointSlice` has been deprecated in favor of `discovery.k8s.io/v1` in [Kubernetes v1.21](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md), Karmada adopted this change in release v1.0.0. The [MCS](https://github.com/karmada-io/karmada/blob/master/docs/multi-cluster-service.md) feature now requires member clusters running v1.21 or later.
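If you rely on MCS, one way to confirm a member cluster meets this requirement is to check its server version and that it serves `discovery.k8s.io/v1`. This is only a sketch; `member1` is a placeholder kubeconfig context for one of your member clusters.

```bash
# Check the member cluster's server version (must be v1.21 or later for MCS).
kubectl --context member1 version

# EndpointSlice must be served from discovery.k8s.io/v1, not only v1beta1.
kubectl --context member1 api-versions | grep '^discovery.k8s.io/v1$'
```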