You might have stumbled upon this already, or the similar case of resizing a PVC.
This process can get tricky because:
- You cannot modify a PVC’s StorageClass on the fly.
- PVs hold important data and you must prevent any data loss.
- If you’re using a StatefulSet with a dynamic volume claim template, you want to keep your PVC name, not create a new one to attach.
- You will have some downtime while applying the changes.
Before we start, back up all your important data, either at the application level (sqldump, Elasticsearch snapshots, etc.) or by snapshotting the PVs through your cloud provider. Have the restore process in mind before starting (we won’t get into that now). A backup without a restore strategy is pointless.
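If your cluster runs the CSI snapshot controller, snapshotting a PV can be a one-object affair. A minimal sketch; the `csi-snapclass` class name is an assumption, list yours with `kubectl get volumesnapshotclass`:

```yaml
# Sketch: requires the CSI external-snapshotter CRDs and controller.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: elasticsearch-data-1-backup
spec:
  volumeSnapshotClassName: csi-snapclass   # assumption: adjust to your cluster
  source:
    persistentVolumeClaimName: elasticsearch-data-1
```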
In our example, we have:
- StatefulSet named `elasticsearch-data`, with `replicas=3`
- Pods named `elasticsearch-data-0`, `elasticsearch-data-1`, `elasticsearch-data-2` (StatefulSet ordinals start at 0)
- PVCs named `elasticsearch-data-0`, `elasticsearch-data-1`, `elasticsearch-data-2`, and their respective PVs.
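You can confirm what you are about to touch (names, current StorageClass, capacity) before changing anything:

```bash
# Sketch: list the PVCs and the PVs they are bound to.
kubectl get pvc | grep elasticsearch-data
kubectl get pv | grep elasticsearch-data
```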
Let’s get to the steps required to change the StorageClass of a StatefulSet PVC:
1. Scale Down the StatefulSet:
   - First, scale down your StatefulSet to 0 replicas. This ensures that no pods are actively using the PVCs during the migration process.
     ```bash
     kubectl scale --replicas=0 statefulset elasticsearch-data
     ```
   - Wait for all pods to be shut down (see the sketch below).
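   One way to block until the pods are actually gone, rather than polling by hand (a sketch; pod names follow our example and the timeout is arbitrary):
     ```bash
     # Wait for each StatefulSet pod to be deleted before touching the PVCs.
     # Recent kubectl versions return immediately if the pod is already gone.
     for i in 0 1 2; do
       kubectl wait --for=delete pod/elasticsearch-data-$i --timeout=300s
     done
     ```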
2. Create a Temporary PVC:
   - Create a new temporary PVC with a different name (e.g. `elasticsearch-data-1-temp`). This PVC will be used as an intermediate storage location. Here is an example PVC yaml; adapt the names, labels, etc. to your liking. (If you copy an existing PVC from `kubectl get -o yaml`, drop the server-managed parts first: the `status` stanza, the `kubernetes.io/pvc-protection` finalizer, and the deprecated `volume.beta.kubernetes.io/*` annotations, which are superseded by `spec.storageClassName`.)
     ```yaml
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       labels:
         component: elasticsearch
         role: data
       name: elasticsearch-data-1-temp
     spec:
       storageClassName: faster
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 14Gi
     ```
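   Before copying anything, check that the temporary PVC binds (with a `WaitForFirstConsumer` StorageClass it will stay `Pending` until the copy pod in the next step mounts it, which is fine):
     ```bash
     kubectl get pvc elasticsearch-data-1-temp
     ```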
3. Copy Data from Existing PVC to Temporary PVC:
   - Deploy a temporary Pod with the old PVC and the new temp PVC mounted:
     ```yaml
     ---
     apiVersion: v1
     kind: Pod
     metadata:
       name: debian-temp-pod
     spec:
       containers:
         - name: debian-container
           image: debian:latest
           command: ["sleep", "infinity"]
           volumeMounts:
             - name: old-pvc
               mountPath: /mnt/old-pvc
             - name: new-pvc
               mountPath: /mnt/new-pvc
       volumes:
         - name: old-pvc
           persistentVolumeClaim:
             claimName: elasticsearch-data-1
         - name: new-pvc
           persistentVolumeClaim:
             claimName: elasticsearch-data-1-temp
     ```
   - and then…
   - Access the pod with `kubectl exec -it debian-temp-pod -- bash`
   - Install rsync with `apt-get update && apt-get install -y rsync`
   - Use the `rsync` command to copy data from the old volume to the temporary volume. The trailing slashes matter: they copy the directory contents instead of nesting an `old-pvc` directory inside the target.
     ```bash
     rsync -a --info=progress2 /mnt/old-pvc/ /mnt/new-pvc/
     ```
   - tip: Open a new terminal and exec into the debian pod. Monitor the progress with `while true; do du -sh /mnt/new-pvc/; sleep 10; done`
   - Ensure that all data is successfully copied before proceeding (using `ls` or `du`; see the sketch after this list).
   - Delete the debian pod with `kubectl delete pod debian-temp-pod`
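   A slightly stronger check than eyeballing `du` output, run before deleting the pod (a sketch; totals can differ slightly because of filesystem internals such as `lost+found`):
     ```bash
     # Compare total apparent size and file counts on both sides.
     kubectl exec debian-temp-pod -- du -sb /mnt/old-pvc /mnt/new-pvc
     kubectl exec debian-temp-pod -- bash -c \
       'echo "old: $(find /mnt/old-pvc -type f | wc -l) files, new: $(find /mnt/new-pvc -type f | wc -l) files"'
     ```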
4. Delete the Old PVC:
   - Delete the original PVC (the one with the old storage class). This frees up the name. Note that with the common `Delete` reclaim policy, the bound PV and its underlying disk are removed too, which is why the temporary copy above is essential.
     ```bash
     kubectl delete pvc elasticsearch-data-1
     ```
   - Ensure both PVC and PV were deleted:
     ```bash
     kubectl get pvc elasticsearch-data-1
     kubectl get pv | grep elasticsearch-data-1
     ```
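   Worth checking before you run the delete above: the PV's reclaim policy decides whether the backing disk is destroyed along with it (a sketch):
     ```bash
     # "Delete" removes the backing disk with the PV; "Retain" keeps it around.
     kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy,CLAIM:.spec.claimRef.name
     ```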
5. Create a New PVC with the Same Name:
   - Create a new PVC with the same name as the original (e.g., `elasticsearch-data-1`), as in the sketch below.
   - Make sure to use the desired storage class for the new PVC.
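   A minimal manifest for the replacement PVC, assuming the target class is the `faster` one from our example (reuse the labels and size from your original PVC):
     ```yaml
     # Sketch: same name and size as the original PVC, new storage class.
     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       labels:
         component: elasticsearch
         role: data
       name: elasticsearch-data-1
     spec:
       storageClassName: faster
       accessModes:
         - ReadWriteOnce
       resources:
         requests:
           storage: 14Gi
     ```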
6. Copy Data from Temporary PVC to New PVC:
   - Deploy another temporary Pod (similar to step 3).
   - Mount both the temporary PVC and the new PVC in the Pod.
   - Use `rsync` again to copy data from the temporary volume back to the new volume. If you reuse the pod manifest from step 3, the new PVC now sits at `/mnt/old-pvc` and the temp PVC at `/mnt/new-pvc`, so the direction is simply reversed:
     ```bash
     rsync -a --info=progress2 /mnt/new-pvc/ /mnt/old-pvc/
     ```
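   Once the copy is verified, clean up the helpers (assuming you reused the `debian-temp-pod` name; you may prefer to keep the temp PVC until the whole migration is confirmed):
     ```bash
     kubectl delete pod debian-temp-pod
     kubectl delete pvc elasticsearch-data-1-temp
     ```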
7. At this stage, you can test that the change worked by scaling up the StatefulSet, checking your cluster, and scaling back down. If you feel brave, skip this step.
8. Repeat steps 2-7 for all replicas.
9. Scale Up the StatefulSet:
   - Finally, scale up your StatefulSet to the desired number of replicas.
     ```bash
     kubectl scale --replicas=3 statefulset elasticsearch-data
     ```
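   To confirm the cluster is healthy again (a sketch):
     ```bash
     # Watch the pods come back, then confirm each PVC carries the new class.
     kubectl rollout status statefulset elasticsearch-data
     kubectl get pvc | grep elasticsearch-data
     ```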
But wait! Now our StatefulSet’s `volumeClaimTemplates` section is out of sync with the new PVCs! This causes multiple issues, since Kubernetes does not allow this modification on the fly (most StatefulSet fields, including `volumeClaimTemplates`, are immutable). We have to re-create the StatefulSet from scratch:
1) Save your current StatefulSet configuration to a yaml file (skip if you already have the StatefulSet yaml somewhere, or if you use helm files for upgrades):
```bash
kubectl get statefulset elasticsearch-data -o yaml > elasticsearch-data-statefulset.yaml
```
2) Change `storageClassName` in the `volumeClaimTemplates` section of the StatefulSet configuration saved in your yaml file (see the snippet below). If the file came from `kubectl get`, also strip server-managed fields such as `status` and `metadata.resourceVersion`, or the re-create may fail.
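The relevant part of the spec would look roughly like this (a sketch following this article's simplified naming; in a real StatefulSet the PVC names are derived from both the template name and the pod name):
```yaml
volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data
    spec:
      storageClassName: faster   # the new class; previously the old one
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 14Gi
```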
3) Delete the StatefulSet without removing pods using:
```bash
kubectl delete statefulset elasticsearch-data --cascade=orphan
```
4) Recreate the StatefulSet with the changed StorageClass:
```bash
kubectl apply -f elasticsearch-data-statefulset.yaml
```
By following these steps, you’ll achieve the goal of changing the storage class for your PVCs while keeping the same PVC names and preserving your data. Remember to test this process in a non-production environment first to ensure everything works as expected. Good luck!