How to expand PVC and PV ‐ Kubernetes
Expanding a volume is supported only if the CSI driver allows volume expansion. The EBS CSI driver supports it, but some storage classes, such as gp2, require this capability to be enabled explicitly.
First, edit the gp2 storage class:

```sh
kubectl edit storageclass gp2
```

Add (or modify) the `allowVolumeExpansion: true` property:
```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  ...
  name: gp2
  resourceVersion: "29138"
  uid: cedf08cf-f91e-404f-b79a-349701265b19
parameters:
  fsType: ext4
  type: gp2
provisioner: kubernetes.io/aws-ebs
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```
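Alternatively, instead of opening an editor, the same change can be applied non-interactively with `kubectl patch`, for example:

```sh
kubectl patch storageclass gp2 -p '{"allowVolumeExpansion": true}'
```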
List the storage class and notice that `ALLOWVOLUMEEXPANSION` is now `true`:
```
kubectl get storageclass gp2
NAME            PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)   kubernetes.io/aws-ebs   Delete          WaitForFirstConsumer   true                   145m
```
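If the cluster has several storage classes, a quick way to check the flag on all of them at once is a custom-columns listing:

```sh
kubectl get storageclass -o custom-columns=NAME:.metadata.name,ALLOWVOLUMEEXPANSION:.allowVolumeExpansion
```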
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/step3-increase-size-of-data-volume.html
Identify the pod volumes whose size will be increased.
Example: we will increase the volumes of pod `minio-tenant-1-pool-0-0`.
```
kubectl get pods -n <tenant namespace>
NAME                      READY   STATUS    RESTARTS   AGE
minio-tenant-1-pool-0-0   2/2     Running   0          27m
minio-tenant-1-pool-0-1   2/2     Running   0          58m
minio-tenant-1-pool-0-2   2/2     Running   0          58m
```
Get the PVCs of the pod:
```
kubectl get pod -n <tenant namespace> minio-tenant-1-pool-0-0 -ojson | yq ".spec.volumes.[] | select(.persistentVolumeClaim != null) | .persistentVolumeClaim.claimName"
data0-minio-tenant-1-pool-0-0
data1-minio-tenant-1-pool-0-0
```
Get the volume name (PV) backing the PVC:
```
kubectl get pvc data0-minio-tenant-1-pool-0-0 -n minio-tenant-1 -ojson | yq .spec.volumeName
pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58
```
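The same value can be read without yq by using a jsonpath output:

```sh
kubectl get pvc data0-minio-tenant-1-pool-0-0 -n minio-tenant-1 -o jsonpath='{.spec.volumeName}'
```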
(Optional) Identify the EBS volume in the EC2 console and note its current size (171 GiB).
Edit the PVC and increase the desired size by modifying the `.spec.resources.requests.storage` field. In this example, notice how the `storage` field is now set to `300Gi`:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    volume.beta.kubernetes.io/storage-provisioner: ebs.csi.aws.com
    volume.kubernetes.io/selected-node: ip-192-168-171-82.ec2.internal
    volume.kubernetes.io/storage-provisioner: ebs.csi.aws.com
  creationTimestamp: "2023-10-27T00:04:39Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    v1.min.io/console: minio-tenant-1-console
    v1.min.io/pool: pool-0
    v1.min.io/tenant: minio-tenant-1
  name: data0-minio-tenant-1-pool-0-0
  namespace: minio-tenant-1
  resourceVersion: "19501"
  uid: 236fb796-7d8a-49f9-ab74-e2eacf554c58
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "300Gi"
  storageClassName: gp2
  volumeMode: Filesystem
  volumeName: pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 171Gi
  phase: Bound
```
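As with the storage class, the PVC can also be updated without an editor, for example with `kubectl patch`:

```sh
kubectl patch pvc data0-minio-tenant-1-pool-0-0 -n minio-tenant-1 -p '{"spec":{"resources":{"requests":{"storage":"300Gi"}}}}'
```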
If you list the PVC you will notice the size has not changed (yet):
```
kubectl get pvc data0-minio-tenant-1-pool-0-0 -n minio-tenant-1
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data0-minio-tenant-1-pool-0-0   Bound    pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58   171Gi      RWO            gp2            57m
```
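Optionally, you can keep watching the PVC until the capacity updates; the `-w` flag streams changes as they happen:

```sh
kubectl get pvc data0-minio-tenant-1-pool-0-0 -n minio-tenant-1 -w
```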
Describe the PVC. In the events you will notice the EBS volume is in the OPTIMIZING state, which means it is already being resized:
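The events below come from describing the PVC, the same command shown again at the end of this guide:

```sh
kubectl describe pvc data0-minio-tenant-1-pool-0-0 -n minio-tenant-1
```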
```
Normal   Resizing            118s (x8 over 2m32s)  external-resizer ebs.csi.aws.com  External resizer is resizing volume pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58
Warning  VolumeResizeFailed  116s (x7 over 2m20s)  external-resizer ebs.csi.aws.com  resize volume "pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58" by resizer "ebs.csi.aws.com" failed: rpc error: code = Internal desc = Could not resize volume "vol-03be92ce20e489e48": rpc error: code = Internal desc = Could not modify volume "vol-03be92ce20e489e48": volume "vol-03be92ce20e489e48" in OPTIMIZING state, cannot currently modify
```
If you look at the EBS volume in the EC2 console, you will notice it is already 300 GiB and in the `optimizing` state.

Wait for the volume to return to the `In-use` state before continuing.
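If you prefer not to use the console, the modification progress can also be checked with the AWS CLI (assuming it is configured with permissions to read EC2 volume state):

```sh
aws ec2 describe-volumes-modifications --volume-ids vol-03be92ce20e489e48
```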
You will notice the PVC has been resized when listing it: the `CAPACITY` column now shows `300Gi`:
```
kubectl get pvc data0-minio-tenant-1-pool-0-0 -n minio-tenant-1
NAME                            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data0-minio-tenant-1-pool-0-0   Bound    pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58   300Gi      RWO            gp2            64m
```
Finally, the volume is available with its new size. The EBS CSI driver expands both the volume and the filesystem; you can verify this in the events of the PVC:
```
kubectl describe pvc data0-minio-tenant-1-pool-0-0 -n minio-tenant-1
...
Events:
  Type     Reason                      Age                    From                               Message
  ----     ------                      ----                   ----                               -------
  Normal   ExternalExpanding           13m                    volume_expand                      CSI migration enabled for kubernetes.io/aws-ebs; waiting for external resizer to expand the pvc
  Warning  VolumeResizeFailed          13m                    external-resizer ebs.csi.aws.com   resize volume "pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58" by resizer "ebs.csi.aws.com" failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
  Warning  VolumeResizeFailed          10m (x9 over 12m)      external-resizer ebs.csi.aws.com   resize volume "pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58" by resizer "ebs.csi.aws.com" failed: rpc error: code = Internal desc = Could not resize volume "vol-03be92ce20e489e48": rpc error: code = Internal desc = Could not modify volume "vol-03be92ce20e489e48": volume "vol-03be92ce20e489e48" in OPTIMIZING state, cannot currently modify
  Normal   Resizing                    6m55s (x11 over 13m)   external-resizer ebs.csi.aws.com   External resizer is resizing volume pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58
  Normal   FileSystemResizeRequired    6m52s                  external-resizer ebs.csi.aws.com   Require file system resize of volume on node
  Normal   FileSystemResizeSuccessful  5m45s                  kubelet                            MountVolume.NodeExpandVolume succeeded for volume "pvc-236fb796-7d8a-49f9-ab74-e2eacf554c58" ip-192-168-171-82.ec2.internal
```
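As an optional sanity check, you can also confirm the new size from inside the pod. This is only a sketch; it assumes the tenant drives are visible in the `df` output of the minio container:

```sh
kubectl exec -n minio-tenant-1 minio-tenant-1-pool-0-0 -c minio -- df -h
```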
It is unlikely that EBS will require the following steps, since the EBS CSI driver takes care of growing the filesystem. However, other CSI drivers that allow volume expansion might not grow the filesystem themselves; this section covers that hypothetical scenario.
To grow the partition and filesystem we need the corresponding utilities; the following instructions install them inside the MinIO container.
Get the container ID from the pod description:
```
kubectl get pods/minio-tenant-1-pool-0-0 -n minio-tenant-1 -ojson | yq '.status.containerStatuses[] | select(.name == "minio") | .containerID'
containerd://6dd2eb22f485fc0463528acb1a409a0b211e344c412908c8db527519f2da9e54
```
Get a shell on the host machine that is running the MinIO container. Once on the host, list the Kubernetes containers:

```sh
ctr -n k8s.io c ls
```
Run a shell as root inside the container:

```sh
runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 <container id> sh
```
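For example, using the container ID obtained earlier (note the `containerd://` prefix is dropped):

```sh
runc --root /run/containerd/runc/k8s.io/ exec -t -u 0 6dd2eb22f485fc0463528acb1a409a0b211e344c412908c8db527519f2da9e54 sh
```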
Create a CentOS repo file to install the utilities:
```sh
cat <<EOF >> /etc/yum.repos.d/centos.repo
[base-8]
name=CentOS 8 - x86_64- Base
baseurl=http://mirrors.kernel.org/centos/8-stream/BaseOS/x86_64/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Official

[appstream-8]
name=CentOS 8 - x86_64- AppStream
baseurl=http://mirrors.kernel.org/centos/8-stream/AppStream/x86_64/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Official
EOF
```
Download the GPG key:

```sh
curl -L https://www.centos.org/keys/RPM-GPG-KEY-CentOS-Official -o /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-Official
```
Update the repository metadata and packages:

```sh
microdnf update
```
Install the utilities:

```sh
microdnf install cloud-utils-growpart xfsprogs --nodocs
```
List the block devices (we are already root inside the container, so sudo is not needed):

```
lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1       259:0    0  30G  0 disk /data
nvme0n1       259:1    0  16G  0 disk
└─nvme0n1p1   259:2    0   8G  0 part /
└─nvme0n1p128 259:3    0   1M  0 part
```
(Optional) If the drive is partitioned, expand the partition:

```
growpart /dev/nvme0n1 1
# see the new partition size
lsblk
NAME          MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1       259:0    0  30G  0 disk /data
nvme0n1       259:1    0  16G  0 disk
└─nvme0n1p1   259:2    0  16G  0 part /
└─nvme0n1p128 259:3    0   1M  0 part
```
Show the filesystems to check whether they are already extended to the new size; in this example they still report the old size, so the filesystem itself needs to be grown:

```
df -hT
Filesystem     Type Size Used Avail Use% Mounted on
/dev/nvme0n1p1 xfs  8.0G 1.6G  6.5G  20% /
/dev/nvme1n1   xfs  8.0G  33M  8.0G   1% /data
...
```
Grow the XFS filesystem:

```sh
xfs_growfs -d /
```
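Note that the gp2 storage class in this guide declares `fsType: ext4`. If your volume holds an ext4 filesystem instead of XFS, the equivalent grow command is `resize2fs` (from the e2fsprogs package); adjust the device to match your layout:

```sh
resize2fs /dev/nvme0n1p1
```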