Persistent Storage for K8s Deployment with CephFS
This tutorial uses Ceph's admin key, so it is not suitable for production setups.
Assuming that your Ceph monitors are on these IPs and Ports:
10.1.1.1:6789
10.1.1.2:6789
10.1.1.3:6789
10.1.1.4:6789
Method 1: CephFS Volume Mount
This method is only for single-replica deployments, or for deployments whose containers all need to access the same volume. Note that applications should not write to the same files concurrently, because of file locking.
Obtain the Ceph admin key from a Ceph node. Copy only the key value.
ceph auth get client.admin
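The output looks something like the following; copy only the value after key =:
[client.admin]
    key = <THE_COPIED_KEY>
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"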
Store your CephFS key as a Kubernetes secret
kubectl create secret generic cephfs-pass -n YOUR_NAME_SPACE --from-literal=key=<THE_COPIED_KEY>
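You can verify that the secret exists:
kubectl get secret cephfs-pass -n YOUR_NAME_SPACE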
Mount the CephFS root directory first and create a directory for your deployment's persistent storage (the host needs the ceph kernel module and the mount.ceph helper from ceph-common)
mkdir /mnt/temp_cephfs
mount -t ceph 10.1.1.1:6789:/ /mnt/temp_cephfs -o name=admin,secret=<THE_COPIED_KEY>
cd /mnt/temp_cephfs
mkdir -p directory/in/cephfs_root
Optional: Unmount
cd /mnt
umount /mnt/temp_cephfs
Here is an example of what your k8s deployment would look like:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - image: mariadb:latest
        name: mariadb
        env:
          # ...
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - name: mariadb-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mariadb-persistent-storage
        cephfs:
          monitors:
          - 10.1.1.1:6789
          - 10.1.1.2:6789
          - 10.1.1.3:6789
          - 10.1.1.4:6789
          user: admin
          secretRef:
            name: cephfs-pass
          readOnly: false
          path: "/directory/in/cephfs_root/"
Method 2: CephFS Automatic Persistent Volume Provisioning
This method is applicable to all types of deployments. However, the previous method is recommended if its conditions are fulfilled.
Obtain the Ceph admin key from a Ceph node. Copy only the key value.
ceph auth get client.admin
Store your CephFS key as a Kubernetes secret
kubectl create secret generic cephfs-pass -n YOUR_NAME_SPACE --from-literal=key=<THE_COPIED_KEY>
Mount the CephFS root directory first and create a directory for your CephFS-managed persistent volumes
mkdir /mnt/temp_cephfs
mount -t ceph 10.1.1.1:6789:/ /mnt/temp_cephfs -o name=admin,secret=<THE_COPIED_KEY>
cd /mnt/temp_cephfs
mkdir -p directory/in/cephfs_root
Optional: Unmount
cd /mnt
umount /mnt/temp_cephfs
Deploy the cephfs-provisioner
Create these files:
01-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: admin-cephfs-pass
  namespace: kube-system
type: Opaque
data:
  key: <THE_COPIED_KEY_BASE64_ENCODED>
<THE_COPIED_KEY_BASE64_ENCODED> can be obtained from the following command (the -n flag prevents a trailing newline from being encoded):
echo -n "<THE_COPIED_KEY>" | base64
02-deployCephfsProvisioner.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cephfs-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "delete"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cephfs-provisioner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cephfs-provisioner
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create", "get", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cephfs-provisioner
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cephfs-provisioner
subjects:
- kind: ServiceAccount
  name: cephfs-provisioner
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cephfs-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cephfs-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: cephfs-provisioner
    spec:
      containers:
      - name: cephfs-provisioner
        image: "quay.io/external_storage/cephfs-provisioner:v2.0.0-k8s1.11"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/cephfs
        command:
        - "/usr/local/bin/cephfs-provisioner"
        args:
        - "-id=cephfs-provisioner-1"
      serviceAccountName: cephfs-provisioner
03-storageClassCephfs.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs-pv
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.1.1.1:6789,10.1.1.2:6789,10.1.1.3:6789,10.1.1.4:6789
  adminId: admin
  adminSecretName: admin-cephfs-pass
  adminSecretNamespace: "kube-system"
  claimRoot: /directory/in/cephfs_root
reclaimPolicy: Retain
You may create multiple StorageClasses to organize your CephFS persistent volumes.
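For example, a second class that provisions its volumes under a different directory could look like this (the class name and claimRoot below are illustrative; remember to create the corresponding directory in the CephFS root first, as above):
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: cephfs-pv-databases
provisioner: ceph.com/cephfs
parameters:
  monitors: 10.1.1.1:6789,10.1.1.2:6789,10.1.1.3:6789,10.1.1.4:6789
  adminId: admin
  adminSecretName: admin-cephfs-pass
  adminSecretNamespace: "kube-system"
  claimRoot: /databases/in/cephfs_root
reclaimPolicy: Retain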
Apply the YAML files
kubectl apply -f 01-secrets.yaml
kubectl apply -f 02-deployCephfsProvisioner.yaml
kubectl apply -f 03-storageClassCephfs.yaml
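Check that the provisioner pod is running before creating any claims:
kubectl get pods -n kube-system -l app=cephfs-provisioner
kubectl logs -n kube-system deploy/cephfs-provisioner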
Here is an example of what your k8s deployment would look like. Note that volumeClaimTemplates are only valid in StatefulSets, so a Deployment needs a separate PersistentVolumeClaim against the new StorageClass:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mariadb-persistent-storage
spec:
  storageClassName: cephfs-pv
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: "20Gi"
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mariadb
  labels:
    app: mariadb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mariadb
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mariadb
    spec:
      containers:
      - image: mariadb:latest
        name: mariadb
        env:
          # ...
        ports:
        - containerPort: 3306
          name: mariadb
        volumeMounts:
        - name: mariadb-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mariadb-persistent-storage
        persistentVolumeClaim:
          claimName: mariadb-persistent-storage