We continue to work with the Kubernetes cluster prepared in the article U1.33 Ubuntu Quick Start (QS): Kubernetes on premises and Docker and Kubespray.
Our Hyper-V service runs on MS Windows Server 2016.
New IP addresses of the cluster:
192.168.100.8 u2004d01.cluster.local u2004d01
192.168.100.10 u2004d02.cluster.local u2004d02
192.168.100.13 u2004d03.cluster.local u2004d03
We continue to work with the Ceph cluster prepared in the article U1.39 Ubuntu Quick Start (QS): Ceph cluster.
IP addresses of the cluster:
192.168.100.15 u2004m01
192.168.100.18 u2004m03
192.168.100.16 u2004m02
On u2004m01, run the command
yury@u2004m01:~$ sudo ceph osd pool create kubepool 64 64
pool 'kubepool' created
On u2004m01, run the commands
yury@u2004m01:~$ sudo rbd pool init kubepool
yury@u2004m01:~$ sudo ceph osd pool ls
device_health_metrics
kubepool
On u2004m01, run the command
yury@u2004m01:~$ sudo ceph auth get-or-create client.kubeuser mon 'profile rbd' osd 'profile rbd pool=kubepool' mgr 'profile rbd pool=kubepool'
[client.kubeuser]
key = AQCP67hhUd+tFBAAtbQk8+K+BKlg/R/In/m8tg==
On u2004m01, run the command
yury@u2004m01:~$ sudo ceph mon dump
dumped monmap epoch 5
epoch 5
fsid f143dbb0-5839-11ec-a64a-09fdbae816c9
last_changed 2021-12-12T13:22:31.976843+0000
created 2021-12-08T15:19:06.685047+0000
min_mon_release 15 (octopus)
0: [v2:192.168.100.15:3300/0,v1:192.168.100.15:6789/0] mon.u2004m01
1: [v2:192.168.100.18:3300/0,v1:192.168.100.18:6789/0] mon.u2004m03
2: [v2:192.168.100.16:3300/0,v1:192.168.100.16:6789/0] mon.u2004m02
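The fsid and the v1 monitor endpoints in this dump are exactly the values needed later for clusterID and the monitors list in csi-config-map.yaml. As a sketch, they can be pulled out with standard text tools (the file name monmap.txt is arbitrary; here it is populated with the dump shown above):

```shell
# Assumption: the `ceph mon dump` output was saved to a file, e.g. with
#   sudo ceph mon dump > monmap.txt
cat > monmap.txt <<'EOF'
epoch 5
fsid f143dbb0-5839-11ec-a64a-09fdbae816c9
0: [v2:192.168.100.15:3300/0,v1:192.168.100.15:6789/0] mon.u2004m01
1: [v2:192.168.100.18:3300/0,v1:192.168.100.18:6789/0] mon.u2004m03
2: [v2:192.168.100.16:3300/0,v1:192.168.100.16:6789/0] mon.u2004m02
EOF

# clusterID for csi-config-map.yaml
awk '/^fsid/ {print $2}' monmap.txt

# monitor addresses (v1 endpoints) for the "monitors" list
grep -o 'v1:[0-9.]*:6789' monmap.txt | sed 's/^v1://'
```

The first command prints the fsid, the second one address per monitor, matching the three entries used in the ConfigMap below.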
Prepare the Kubernetes cluster
Step 1: Create a working directory
On u2004d01, run the commands
yury@u2004d01:~$ mkdir ~/Documents/cephrdb
yury@u2004d01:~$ ls -l ~/Documents/
total 4
drwxrwxr-x 2 yury yury 4096 Dec 14 22:16 cephrdb
Step 2: Create csi-config-map.yaml file
On u2004d01, run the command
nano ~/Documents/cephrdb/csi-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    [
      {
        "clusterID": "f143dbb0-5839-11ec-a64a-09fdbae816c9",
        "monitors": [
          "192.168.100.15:6789",
          "192.168.100.18:6789",
          "192.168.100.16:6789"
        ]
      }
    ]
metadata:
  name: ceph-csi-config
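The value of config.json must be valid JSON, or the CSI driver will fail to parse it. A quick local sanity check (a sketch; the temporary file name is arbitrary, and jq would work equally well):

```shell
# Write the same JSON fragment to a temporary file and validate it with
# python3's built-in JSON parser.
cat > /tmp/ceph-csi-config.json <<'EOF'
[
  {
    "clusterID": "f143dbb0-5839-11ec-a64a-09fdbae816c9",
    "monitors": [
      "192.168.100.15:6789",
      "192.168.100.18:6789",
      "192.168.100.16:6789"
    ]
  }
]
EOF
python3 -m json.tool /tmp/ceph-csi-config.json > /dev/null && echo "config.json is valid"
```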
Step 3: Apply csi-config-map.yaml file
On u2004d01, run the command
sudo kubectl apply -f ~/Documents/cephrdb/csi-config-map.yaml
Step 4: Create csi-kms-config-map.yaml file
On u2004d01, run the command
nano ~/Documents/cephrdb/csi-kms-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  config.json: |-
    {}
metadata:
  name: ceph-csi-encryption-kms-config
Step 5: Apply csi-kms-config-map.yaml file
On u2004d01, run the command
sudo kubectl apply -f ~/Documents/cephrdb/csi-kms-config-map.yaml
Step 6: Create ceph-config-map.yaml file
On u2004d01, run the command
nano ~/Documents/cephrdb/ceph-config-map.yaml
---
apiVersion: v1
kind: ConfigMap
data:
  ceph.conf: |
    [global]
    auth_cluster_required = cephx
    auth_service_required = cephx
    auth_client_required = cephx
  # keyring is a required key and its value should be empty
  keyring: |
metadata:
  name: ceph-config
Step 7: Apply ceph-config-map.yaml file
On u2004d01, run the command
sudo kubectl apply -f ~/Documents/cephrdb/ceph-config-map.yaml
Step 8: Create csi-rbd-secret.yaml file
On u2004d01, run the command
nano ~/Documents/cephrdb/csi-rbd-secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
stringData:
  userID: kubeuser
  userKey: AQCP67hhUd+tFBAAtbQk8+K+BKlg/R/In/m8tg==
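Note that stringData accepts the user ID and key as plain text; on creation Kubernetes base64-encodes the values and stores them under data. The equivalent encoded form can be produced locally, which is handy when comparing against `kubectl get secret csi-rbd-secret -o yaml` output:

```shell
# stringData is a write-only convenience field: Kubernetes stores the
# base64-encoded value under .data of the Secret.
echo -n 'kubeuser' | base64
# a3ViZXVzZXI=
# The same applies to userKey (its encoded form is omitted here).
```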
Step 9: Apply csi-rbd-secret.yaml file
On u2004d01, run the command
sudo kubectl apply -f ~/Documents/cephrdb/csi-rbd-secret.yaml
Step 10: Configure the ceph-csi plugins
On u2004d01, run the commands
sudo kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
sudo kubectl apply -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin-provisioner.yaml
sudo kubectl apply -f csi-rbdplugin-provisioner.yaml
wget https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-rbdplugin.yaml
sudo kubectl apply -f csi-rbdplugin.yaml
Step 11: Create a StorageClass. Create csi-rbd-sc.yaml file
On u2004d01, run the command
nano ~/Documents/cephrdb/csi-rbd-sc.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: f143dbb0-5839-11ec-a64a-09fdbae816c9
  pool: kubepool
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: default
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: default
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
Step 12: Apply csi-rbd-sc.yaml file
On u2004d01, run the command
yury@u2004d01:~$ sudo kubectl apply -f ~/Documents/cephrdb/csi-rbd-sc.yaml
storageclass.storage.k8s.io/csi-rbd-sc created
Step 13: Mark csi-rbd-sc as default
On u2004d01, run the command
yury@u2004d01:~$ sudo kubectl get storageClass --all-namespaces -o wide
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-rbd-sc rbd.csi.ceph.com Delete Immediate true 73s
On u2004d01, run the commands
yury@u2004d01:~$ sudo kubectl patch storageclass csi-rbd-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/csi-rbd-sc patched
yury@u2004d01:~$ sudo kubectl get storageClass -o wide
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
csi-rbd-sc (default) rbd.csi.ceph.com Delete Immediate true 4m44s
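With the default StorageClass in place, dynamic provisioning can be exercised with a minimal PersistentVolumeClaim before installing anything real. A sketch (the claim name rbd-test-pvc and the 1Gi size are arbitrary):

```shell
# Generate a minimal PVC manifest against the csi-rbd-sc StorageClass.
mkdir -p ~/Documents/cephrdb
cat > ~/Documents/cephrdb/rbd-test-pvc.yaml <<'EOF'
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
EOF
# On the cluster, apply it and check that the claim becomes Bound:
#   sudo kubectl apply -f ~/Documents/cephrdb/rbd-test-pvc.yaml
#   sudo kubectl get pvc rbd-test-pvc
# A matching RBD image should then appear on the Ceph side (on u2004m01):
#   sudo rbd ls kubepool
```

If you try this, delete the claim (`sudo kubectl delete -f ~/Documents/cephrdb/rbd-test-pvc.yaml`) before the cleanup steps below, so its RBD image is released from kubepool.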
Step 14: Install Portainer (as a test)
On u2004d01, run the commands
yury@u2004d01:~$ sudo kubectl apply -n portainer -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
namespace/portainer created
serviceaccount/portainer-sa-clusteradmin created
persistentvolumeclaim/portainer created
clusterrolebinding.rbac.authorization.k8s.io/portainer created
service/portainer created
deployment.apps/portainer created
yury@u2004d01:~$ sudo kubectl get pods -o wide -n portainer
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
portainer-dcd599f8f-fzvst 0/1 Running 0 72s 10.233.73.2 u2004d03 <none> <none>
Cleaning up the Kubernetes cluster
On u2004d01, run the commands
sudo kubectl delete -f https://raw.githubusercontent.com/portainer/k8s/master/deploy/manifests/portainer/portainer.yaml
sudo kubectl delete -f ~/Documents/cephrdb/csi-rbd-sc.yaml
sudo kubectl delete -f csi-rbdplugin.yaml
sudo kubectl delete -f csi-rbdplugin-provisioner.yaml
sudo kubectl delete -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-nodeplugin-rbac.yaml
sudo kubectl delete -f https://raw.githubusercontent.com/ceph/ceph-csi/master/deploy/rbd/kubernetes/csi-provisioner-rbac.yaml
sudo kubectl delete -f ~/Documents/cephrdb/csi-rbd-secret.yaml
sudo kubectl delete -f ~/Documents/cephrdb/ceph-config-map.yaml
sudo kubectl delete -f ~/Documents/cephrdb/csi-kms-config-map.yaml
sudo kubectl delete -f ~/Documents/cephrdb/csi-config-map.yaml
Cleaning up the Ceph cluster
On u2004m01, run the commands
sudo ceph auth del client.kubeuser
sudo ceph config set mon mon_allow_pool_delete true
sudo ceph osd pool rm kubepool kubepool --yes-i-really-really-mean-it
sudo ceph config set mon mon_allow_pool_delete false
yury@u2004m01:~$ sudo ceph osd pool ls
device_health_metrics