U1.39 Ubuntu Quick Start (QS): Ceph cluster - chempkovsky/CS2WPF-and-CS2XAMARIN GitHub Wiki

Content

Reading

Before we start

openssh for u2004m01 u2004m02 u2004m03

  • for each machine run the command
sudo apt install openssh-server

Prepare u2004m02 u2004m03

  • Step 1:
    • run the commands to set the password for the root user
sudo -i
passwd
  • Step 2:

    • modify the file with sudo nano /etc/ssh/sshd_config
      • set: PermitRootLogin yes (a non-interactive alternative is sketched after this list)
  • Step 3:

    • run the command
sudo service ssh restart
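  • Note: a non-interactive sketch of the Step 2 edit (an assumption based on the default /etc/ssh/sshd_config layout; the original walkthrough uses nano):
# replace the PermitRootLogin line (commented or not) with "PermitRootLogin yes"
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config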

Install cephadm

  • read the article DISTRIBUTION-SPECIFIC INSTALLATIONS
  • Note: cephadm resembles kubeadm for Kubernetes. We think its authors took the ideas and some kubeadm code and created cephadm. Perhaps this is a good strategy, since they are now independent of kubeadm (e.g., independent of existing and future kubeadm bugs).

for u2004m01 run the commands

  • run the command
sudo apt install -y cephadm

Bootstrap

for u2004m01 run the command

sudo cephadm bootstrap --mon-ip 192.168.100.15
Click to show the tail of the response
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Wrote config to /etc/ceph/ceph.conf
...
Wrote public SSH key to to /etc/ceph/ceph.pub
...


Ceph Dashboard is now available at:

             URL: https://u2004m01:8443/
            User: admin
        Password: pzrdnvjv0b

You can access the Ceph CLI with:

        sudo /usr/sbin/cephadm shell --fsid f143dbb0-5839-11ec-a64a-09fdbae816c9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Please consider enabling telemetry to help improve Ceph:

        ceph telemetry on

For more information see:

        https://docs.ceph.com/docs/master/mgr/telemetry/

Bootstrap complete.
sudo /usr/sbin/cephadm shell --fsid f143dbb0-5839-11ec-a64a-09fdbae816c9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
  • To check that Docker is ready
    • run the command
sudo docker version
  • ceph version
yury@u2004m01:~$ sudo /usr/sbin/cephadm shell --fsid f143dbb0-5839-11ec-a64a-09fdbae816c9 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
root@u2004m01:/# ceph -v
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
  • Follow the link in the browser
https://u2004m01:8443/
  • access CEPH CLI
    • run the command
sudo cephadm shell
  • to exit CEPH CLI
    • run the command
exit
  • make sure the Ceph CLI environment is not the same as the virtual machine (host) environment
    • we compare the /etc/ceph folders in both
sudo cephadm shell
ls -l /etc/ceph
exit
ls -l /etc/ceph
Click to show the response
yury@u2004m01:~$ sudo cephadm shell
[sudo] password for yury:
Inferring fsid 0ef70e70-57fe-11ec-9596-efa9ed3232d3
Inferring config /var/lib/ceph/0ef70e70-57fe-11ec-9596-efa9ed3232d3/mon.ubuntuceph/config
Using recent ceph image ceph/ceph@sha256:056637972a107df4096f10951e4216b21fcd8ae0b9fb4552e628d35df3f61139
root@u2004m01:/# ls -l /etc/ceph
total 12
-rw------- 1 ceph ceph 177 Dec  8 08:17 ceph.conf
-rw------- 1 root root  63 Dec  8 08:11 ceph.keyring
-rw-r--r-- 1 root root  92 May 26  2021 rbdmap
root@u2004m01:/# exit
exit
yury@u2004m01:~$ ls -l /etc/ceph
total 12
-rw------- 1 root root  63 сне  8 11:11 ceph.client.admin.keyring
-rw-r--r-- 1 root root 177 сне  8 11:11 ceph.conf
-rw-r--r-- 1 root root 595 сне  8 11:12 ceph.pub
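  • Note: as a convenience (not used in the steps above), cephadm shell can also run a single command without opening an interactive shell, for example:
sudo cephadm shell -- ceph -s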

ADDING HOSTS

for u2004m01 run the command

 ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.100.16
Click to show the response
yury@u2004m01:~$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.100.16
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
root@192.168.100.16's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.100.16'"
and check to make sure that only the key(s) you wanted were added.

for u2004m01 run the command

ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.100.18
Click to show the response
yury@u2004m01:~$ ssh-copy-id -f -i /etc/ceph/ceph.pub root@192.168.100.18
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/ceph/ceph.pub"
root@192.168.100.18's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.100.18'"
and check to make sure that only the key(s) you wanted were added.

for u2004m01 run the commands

sudo cephadm shell
ceph orch host add u2004m02 192.168.100.16
ceph orch host add u2004m03 192.168.100.18
Click to show the response
yury@u2004m01:~$ sudo cephadm shell
[sudo] password for yury:
Inferring fsid 87c24c82-568e-11ec-884d-95f515e01213
Inferring config /var/lib/ceph/87c24c82-568e-11ec-884d-95f515e01213/mon.u2004m01/config
Using recent ceph image ceph/ceph@sha256:056637972a107df4096f10951e4216b21fcd8ae0b9fb4552e628d35df3f61139
root@u2004m01:/# ceph orch host add u2004m02 192.168.100.16
Added host 'u2004m02'
root@u2004m01:/# ceph orch host add u2004m03 192.168.100.18
Added host 'u2004m03'
root@u2004m01:/#
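  • Note: as an optional check (not part of the original steps), the hosts registered with the orchestrator can be listed with:
sudo cephadm shell -- ceph orch host ls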

SERVICE STATUS and DAEMON STATUS

for u2004m01 run the command

sudo cephadm shell
ceph orch ls
ceph orch ps
Click to show the response
root@u2004m01:/# ceph orch ls
NAME           RUNNING  REFRESHED  AGE  PLACEMENT  IMAGE NAME                            IMAGE ID
alertmanager       1/1  3m ago     4h   count:1    docker.io/prom/alertmanager:v0.20.0   0881eb8f169f
crash              3/3  3m ago     4h   *          docker.io/ceph/ceph:v15               2cf504fded39
grafana            1/1  3m ago     4h   count:1    docker.io/ceph/ceph-grafana:6.7.4     557c83e11646
mgr                2/2  3m ago     4h   count:2    docker.io/ceph/ceph:v15               2cf504fded39
mon                3/5  3m ago     4h   count:5    docker.io/ceph/ceph:v15               2cf504fded39
node-exporter      3/3  3m ago     4h   *          docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf
prometheus         1/1  3m ago     4h   count:1    docker.io/prom/prometheus:v2.18.1     de242295e225

root@u2004m01:/# ceph orch ps
NAME                    HOST      STATUS         REFRESHED  AGE  VERSION  IMAGE NAME                            IMAGE ID      CONTAINER ID
alertmanager.u2004m01   u2004m01  running (2h)   8m ago     4h   0.20.0   docker.io/prom/alertmanager:v0.20.0   0881eb8f169f  adff3b35796c
crash.u2004m01          u2004m01  running (4h)   8m ago     4h   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  23edd41af0b0
crash.u2004m02          u2004m02  running (2h)   8m ago     2h   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  bd452fb1db64
crash.u2004m03          u2004m03  running (30m)  8m ago     30m  15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  681021d6ae47
grafana.u2004m01        u2004m01  running (4h)   8m ago     4h   6.7.4    docker.io/ceph/ceph-grafana:6.7.4     557c83e11646  458ad26558f3
mgr.u2004m01.yjirlb     u2004m01  running (4h)   8m ago     4h   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  04903b7a501f
mgr.u2004m02.vipcfi     u2004m02  running (2h)   8m ago     2h   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  a573b7a6daee
mon.u2004m01            u2004m01  running (4h)   8m ago     4h   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  5dc7da075590
mon.u2004m02            u2004m02  running (29m)  8m ago     29m  15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  91432efcc520
mon.u2004m03            u2004m03  running (30m)  8m ago     30m  15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  7eaede328091
node-exporter.u2004m01  u2004m01  running (4h)   8m ago     4h   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  a8d644979406
node-exporter.u2004m02  u2004m02  running (2h)   8m ago     2h   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  2471086f384f
node-exporter.u2004m03  u2004m03  running (29m)  8m ago     29m  0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  7dbf884b3e61
prometheus.u2004m01     u2004m01  running (29m)  8m ago     4h   2.18.1   docker.io/prom/prometheus:v2.18.1     de242295e225  8c0334da927d

OSD SERVICE

for u2004m01 run the command

sudo cephadm shell
ceph orch device ls --wide
Click to show the response
root@u2004m01:/# ceph orch device ls --wide
Hostname  Path      Type  Transport  RPM      Vendor  Model  Serial  Size   Health   Ident  Fault  Available  Reject Reasons
u2004m01  /dev/fd0  hdd   Unknown    Unknown  N/A     N/A            4096   Unknown  N/A    N/A    No         locked, Insufficient space (<5GB)
u2004m02  /dev/fd0  hdd   Unknown    Unknown  N/A     N/A            4096   Unknown  N/A    N/A    No         locked, Insufficient space (<5GB)
u2004m03  /dev/fd0  hdd   Unknown    Unknown  N/A     N/A            4096   Unknown  N/A    N/A    No         Insufficient space (<5GB), locked

Increasing space for u2004m01 u2004m02 u2004m03

Get block devices

  • run the command for u2004m01 (u2004m02, u2004m03)
lsblk 
Click to show the response
yury@u2004m01:~$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0       2:0    1     4K  0 disk
...
sda       8:0    0    20G  0 disk
├─sda1    8:1    0  19,9G  0 part /
├─sda14   8:14   0     4M  0 part
└─sda15   8:15   0   106M  0 part /boot/efi
sr0      11:0    1  1024M  0 rom

DEPLOY OSD with a file as storage

For educational purposes only, as a hands-on lab.

for u2004m01 run the commands

  • Step 1:
    • create a 5.5 GB file, /var/local/cephloopdevfile
sudo dd if=/dev/zero of=/var/local/cephloopdevfile bs=1000k count=5500
  • Step 2:
    • create a loop device
      • in our case it returns /dev/loop14
sudo losetup -f /var/local/cephloopdevfile --show
sudo losetup -a
Click to show the response
yury@u2004m01:/$ sudo losetup -f /var/local/cephloopdevfile --show
/dev/loop14
yury@u2004m01:/$ sudo losetup -a
/dev/loop1: [2049]:804079 (/var/lib/snapd/snaps/core18_1705.snap)
/dev/loop8: [2049]:782801 (/var/lib/snapd/snaps/gtk-common-themes_1519.snap)
/dev/loop6: [2049]:786445 (/var/lib/snapd/snaps/gnome-3-38-2004_87.snap)
/dev/loop13: [2049]:790963 (/var/lib/snapd/snaps/core20_1270.snap)
/dev/loop4: [2049]:804078 (/var/lib/snapd/snaps/gnome-3-34-1804_24.snap)
/dev/loop11: [2049]:786952 (/var/lib/snapd/snaps/snapd_14066.snap)
/dev/loop2: [2049]:787005 (/var/lib/snapd/snaps/core18_2253.snap)
/dev/loop0: [2049]:790011 (/var/lib/snapd/snaps/bare_5.snap)
/dev/loop9: [2049]:804082 (/var/lib/snapd/snaps/snap-store_433.snap)
/dev/loop7: [2049]:804081 (/var/lib/snapd/snaps/gtk-common-themes_1506.snap)
/dev/loop14: [2049]:796538 (/var/local/cephloopdevfile)
/dev/loop5: [2049]:780683 (/var/lib/snapd/snaps/gnome-3-34-1804_77.snap)
/dev/loop12: [2049]:804080 (/var/lib/snapd/snaps/snapd_7264.snap)
/dev/loop3: [2049]:789770 (/var/lib/snapd/snaps/core20_1242.snap)
/dev/loop10: [2049]:787441 (/var/lib/snapd/snaps/snap-store_558.snap)

  • Note: to delete the loop device, run the sudo losetup -d /dev/loop14 command

  • Step 3:

    • create physical volume (in our case the name of the loop device is /dev/loop14)
sudo pvcreate /dev/loop14
sudo pvs
sudo pvdisplay  /dev/loop14
Click to show the response
yury@u2004m01:/$ sudo pvcreate /dev/loop14
  Physical volume "/dev/loop14" successfully created.
  • Note: to remove the physical volume, run the sudo pvremove /dev/loop14 command

  • Step 4:

    • create Volume Group (in our case the name of the loop device is /dev/loop14)
sudo vgcreate -s 5500M cephloopgroup /dev/loop14
sudo vgs
sudo vgdisplay cephloopgroup

or

sudo vgcreate cephloopgroup /dev/loop14
sudo vgs
sudo vgdisplay cephloopgroup
Click to show the response
yury@u2004m01:/$ sudo vgcreate -s 5500M cephloopgroup /dev/loop14
  Volume group "cephloopgroup" successfully created
  • Note: to remove the volume group, run the sudo vgremove cephloopgroup command

  • Step 5:

    • create Logical Volume (in our case the name of the loop device is /dev/loop14)
sudo lvcreate -n cephloopvol -L 5.24G cephloopgroup
sudo lvs
sudo lvdisplay  cephloopgroup/cephloopvol
Click to show the response
yury@u2004m01:/$ sudo lvcreate -n cephloopvol -L 5.24G cephloopgroup
  Logical volume "cephloopvol" created.
yury@u2004m01:/$ sudo lvs
  LV                                             VG                                        Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  cephloopvol                                    cephloopgroup                             -wi-ao----   5,24g
yury@u2004m01:~$ sudo lvdisplay  cephloopgroup/cephloopvol
  --- Logical volume ---
  LV Path                /dev/cephloopgroup/cephloopvol
  LV Name                cephloopvol
  VG Name                cephloopgroup
  LV UUID                s0V0bu-w4nD-lLSQ-UMQX-BD4M-zXPo-YKVbMd
  LV Write Access        read/write
  LV Creation host, time ubuntuceph, 2021-12-08 14:58:24 +0300
  LV Status              available
  # open                 24
  LV Size                5,24 GiB
  Current LE             1342
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
  • Step 6:
    • with sudo lvdisplay cephloopgroup/cephloopvol command we get the path
/dev/cephloopgroup/cephloopvol
  • Note: to remove the logical volume, run the sudo lvremove cephloopgroup/cephloopvol command

  • Step 7:

    • Create an OSD from a specific device on a specific host
sudo cephadm shell
ceph orch daemon add osd u2004m01:/dev/cephloopgroup/cephloopvol

repeat Step 1 through Step 7 for u2004m02 and u2004m03 (a consolidated sketch is shown below)
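  • Note: a consolidated sketch of Step 1 - Step 7 for one machine (an assumption: the loop device name may differ from /dev/loop14, so the value returned by losetup is captured in a variable):
# create the backing file and attach it to a free loop device
sudo dd if=/dev/zero of=/var/local/cephloopdevfile bs=1000k count=5500
LOOPDEV=$(sudo losetup -f /var/local/cephloopdevfile --show)    # e.g. /dev/loop14
# build the LVM stack on top of the loop device
sudo pvcreate "$LOOPDEV"
sudo vgcreate cephloopgroup "$LOOPDEV"
sudo lvcreate -n cephloopvol -L 5.24G cephloopgroup
# create the OSD from the logical volume on the current host
sudo cephadm shell -- ceph orch daemon add osd "$(hostname):/dev/cephloopgroup/cephloopvol"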


  • Note: to remove the created LVM structures:
sudo cephadm shell


ceph osd out osd.0
ceph osd down osd.0
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd destroy 0 --yes-i-really-mean-it
ceph osd destroy 0 --force
ceph osd rm 0
ceph osd down osd.0; ceph osd rm 0
exit

sudo lvremove cephloopgroup/cephloopvol
sudo vgremove cephloopgroup
sudo pvremove /dev/loop14
sudo losetup -d /dev/loop14
sudo rm /var/local/cephloopdevfile

DEPLOY OSD with a folder as storage

Note: this approach does not work; the errors are shown below.

for u2004m01 run the commands

  • Create the new OSD
sudo cephadm shell
ceph osd create
ceph osd create
ceph osd out 1
exit
Click to show the response
yury@u2004m01:~$ sudo cephadm shell
[sudo] password for yury:
Inferring fsid 87c24c82-568e-11ec-884d-95f515e01213
Inferring config /var/lib/ceph/87c24c82-568e-11ec-884d-95f515e01213/mon.u2004m01/config
Using recent ceph image ceph/ceph@sha256:056637972a107df4096f10951e4216b21fcd8ae0b9fb4552e628d35df3f61139
root@u2004m01:/# ceph osd create
0
root@u2004m01:/# ceph osd create
1
root@u2004m01:/# ceph osd out 1
osd.1 is already out.
  • Note:
    • according to article Red Hat Training: OSD Bootstrapping
      • they recommend passing a UUID generated with uuidgen, so that an extra OSD is never created
      • we ran the ceph osd create command with the same UUID three times, but only one OSD ID was created
Click to show details
root@u2004m01:/# uuidgen
97e4354a-de54-4715-8bc1-9bd511b759a0
root@u2004m01:/# ceph osd create 97e4354a-de54-4715-8bc1-9bd511b759a0
2
root@u2004m01:/# ceph osd create 97e4354a-de54-4715-8bc1-9bd511b759a0
2
root@u2004m01:/# ceph osd create 97e4354a-de54-4715-8bc1-9bd511b759a0
2
  • Create the default data directory for the new OSD.
sudo cephadm shell
mkdir /var/lib/ceph/osd/ceph-0
  • Initialize the OSD data directory.
sudo cephadm shell
ceph-osd -i 0 --mkfs --mkkey
Click to show the response
root@u2004m01:/# ceph-osd -i 0 --mkfs --mkkey
2021-12-07T17:18:27.349+0000 7f3aee66af00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2021-12-07T17:18:27.349+0000 7f3aee66af00 -1 AuthRegistry(0x5582bd576940) no keyring found at /var/lib/ceph/osd/ceph-0/keyring, disabling cephx
2021-12-07T17:18:27.349+0000 7f3aee66af00 -1 auth: unable to find a keyring on /var/lib/ceph/osd/ceph-0/keyring: (2) No such file or directory
2021-12-07T17:18:27.349+0000 7f3aee66af00 -1 AuthRegistry(0x7ffee8ea9840) no keyring found at /var/lib/ceph/osd/ceph-0/keyring, disabling cephx
failed to fetch mon config (--no-mon-config to skip)
  • Register the OSD authentication key.
    • this will throw an error because the ceph-osd -i 0 --mkfs --mkkey command did not create the keyring file
ceph auth add osd.0 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd/ceph-0/keyring

DEPLOY OSD with a disk as storage

Note:

  • A storage device is considered available if all of the following are true:
    • The device must have no partitions.
    • The device must not have any LVM state.
    • The device must not be mounted.
    • The device must not contain a file system.
    • The device must not contain a Ceph BlueStore OSD.
    • The device must be larger than 5 GB.

Step 1

  • stop u2004m01, u2004m02, u2004m03

Step 2

  • add a vhdx file to u2004m01
    • with Hyper-V
      • add a second fixed-size virtual disk (10 GB)

Step 3

  • start u2004m01, u2004m02, u2004m03

Step 4

  • for each machine u2004m01, u2004m02, u2004m03
    • run the commands to determine the name of the added disk
      • in our case, /dev/sdb is the name of the added vhdx disk
sudo lsblk
sudo fdisk -l
Click to show the response
NAME              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0               2:0    1     4K  0 disk
loop0             7:0    0     4K  1 loop /snap/bare/5
...
sda               8:0    0    20G  0 disk
├─sda1            8:1    0  19,9G  0 part /
├─sda14           8:14   0     4M  0 part
└─sda15           8:15   0   106M  0 part /boot/efi
sdb               8:16   0    10G  0 disk
sr0               11:0   1  1024M  0 rom

...
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Disk model: Virtual Disk
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
...

Step 5

  • for the machine u2004m01
    • run the commands
sudo cephadm shell
ceph orch daemon add osd u2004m01:/dev/sdb

repeat Step 1 through Step 5 for u2004m02 and u2004m03 (adjusting the host name in the ceph orch daemon add osd command)
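  • Note: a hedged alternative (not used in this walkthrough): cephadm can create OSDs on every available device of every host with a single command, which also makes the OSD service managed by the orchestrator:
sudo cephadm shell -- ceph orch apply osd --all-available-devices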


troubleshooting (hints)

  • after restarting all the VMs, the OSDs created on the /var/local/cephloopdevfile file are in the DOWN state

  • the command below does not show cephloopgroup/cephloopvol

 sudo lvs
  • the command below does not show /dev/loop14
sudo losetup -a
  • let's not dig deeper, since creating an OSD on an operating system disk is not a good idea. (It was created solely for testing purposes.)
    • read the article REMOVING OSDS (MANUAL)
    • we just remove osd.0, osd.1, osd.2
      • ceph osd out 0 is not required
      • ceph osd out 1 is not required
      • ceph osd out 2 is not required
      • ceph -w is not required
      • for u2004m01 run the sudo systemctl stop ceph-osd@0 command (it will not work because the daemon runs inside a container)
      • for u2004m02 run the sudo systemctl stop ceph-osd@1 command (it will not work because the daemon runs inside a container)
      • for u2004m03 run the sudo systemctl stop ceph-osd@2 command (it will not work because the daemon runs inside a container)
      • for u2004m01 run the commands:
        • Note: the sequence ceph osd down osd.X; ceph osd rm X is the one that works
        • Note: the ceph osd purge {--yes-i-really-mean-it} command can be used instead of the three commands [osd destroy, osd rm, osd crush remove]
sudo cephadm shell

ceph osd out osd.0
ceph osd down osd.0
ceph osd crush remove osd.0
ceph auth del osd.0
ceph osd destroy 0 --force
ceph osd down osd.0; ceph osd rm 0

ceph osd out osd.1
ceph osd down osd.1
ceph osd crush remove osd.1
ceph auth del osd.1
ceph osd destroy 1 --force
ceph osd down osd.1; ceph osd rm 1

ceph osd out osd.2
ceph osd down osd.2
ceph osd crush remove osd.2
ceph auth del osd.2
ceph osd destroy 2 --force
ceph osd down osd.2; ceph osd rm 2

ceph orch daemon rm osd.0 --force
ceph orch daemon rm osd.1 --force
ceph orch daemon rm osd.2 --force

exit
  • Step 1
    • for u2004m01 run the commands:
yury@u2004m01:~$ sudo losetup -f /var/local/cephloopdevfile --show
/dev/loop14
  • Step 2
    • Now for u2004m01 as usual:
yury@u2004m01:~$ sudo lvremove cephloopgroup/cephloopvol
Do you really want to remove and DISCARD active logical volume cephloopgroup/cephloopvol? [y/n]: y
  Logical volume "cephloopvol" successfully removed
yury@u2004m01:~$ sudo vgremove cephloopgroup
  Volume group "cephloopgroup" successfully removed
yury@u2004m01:~$ sudo pvremove /dev/loop14
  Labels on physical volume "/dev/loop14" successfully wiped.
yury@u2004m01:~$ sudo losetup -d /dev/loop14
yury@u2004m01:~$ sudo rm /var/local/cephloopdevfile

Repeat Step 1 and Step 2 for u2004m02 and u2004m03

Restart cluster

  • Step 1
    • for u2004m01
sudo cephadm shell
ceph osd set noout
ceph osd set norebalance
  • Step 2
    • for u2004m01, u2004m02, u2004m03
sudo poweroff
  • Step 3

    • start virtual machines
  • Step 4

    • for u2004m01
sudo cephadm shell
ceph osd unset noout
ceph osd unset norebalance
ceph status

Services and Daemons

  • all services
    • for u2004m01 run the commands
sudo cephadm shell
ceph orch ls
  • all daemons
    • for u2004m01 run the commands
sudo cephadm shell
ceph orch ps

DEPLOY OSD with ceph-volume

  • read the article CEPH-VOLUME
  • on page 84 of the book Mastering Ceph they wrote
    • To create a BlueStore OSD using ceph-volume, you run the following command, separating the devices for the data and RocksDB storage. You can separate the DB and WAL parts of RocksDB if you so wish...
ceph-volume create --bluestore /dev/sda --block.wal /dev/sdb --block.db /dev/sdc (--dmcrypt)
  • We are going to remove osd.5 and create it again with the ceph-volume command, but without the --bluestore flag.
    • first of all, we cannot use the sudo systemctl stop ceph-osd@5 command: it will not work because the daemons run in containers managed by cephadm (a hedged alternative is sketched below).
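  • Note: a hedged alternative for this containerized deployment: the daemon can be stopped through the orchestrator instead of systemctl, for example:
sudo cephadm shell -- ceph orch daemon stop osd.5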

Step 1: delete osd.5

  • Step 1.1
    • get the status
sudo cephadm shell
ceph status
Click to show the response
root@u2004m01:/# ceph status
  cluster:
    id:     f143dbb0-5839-11ec-a64a-09fdbae816c9
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum u2004m01,u2004m02,u2004m03 (age 3m)
    mgr: u2004m01.jpcrnj(active, since 4m), standbys: u2004m02.nsqsny
    osd: 3 osds: 3 up (since 3m), 3 in (since 2d)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 27 GiB / 30 GiB avail
    pgs:     1 active+clean

  • Step 1.2
    • get daemons
sudo cephadm shell
ceph orch ps
Click to show the response
root@u2004m01:/# ceph orch ps
NAME                    HOST      STATUS         REFRESHED  AGE  VERSION  IMAGE NAME                            IMAGE ID      CONTAINER ID
alertmanager.u2004m01   u2004m01  running (33m)  27s ago    2d   0.20.0   docker.io/prom/alertmanager:v0.20.0   0881eb8f169f  2a6f4fef9fef
crash.u2004m01          u2004m01  running (33m)  27s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  0a81cca8e78e
crash.u2004m02          u2004m02  running (30m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  35b7a9cd890c
crash.u2004m03          u2004m03  running (33m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  3b933b3d80c3
grafana.u2004m01        u2004m01  running (33m)  27s ago    2d   6.7.4    docker.io/ceph/ceph-grafana:6.7.4     557c83e11646  16f93d09542b
mgr.u2004m01.jpcrnj     u2004m01  running (33m)  27s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  658d13bee4e7
mgr.u2004m02.nsqsny     u2004m02  running (30m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  1746e84a9660
mon.u2004m01            u2004m01  running (33m)  27s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  f3f41f3105e0
mon.u2004m02            u2004m02  running (30m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  93027592a64a
mon.u2004m03            u2004m03  running (33m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  fe70b82bbcfe
node-exporter.u2004m01  u2004m01  running (33m)  27s ago    2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  d80727f74056
node-exporter.u2004m02  u2004m02  running (30m)  29s ago    2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  a0231f03a0ee
node-exporter.u2004m03  u2004m03  running (33m)  29s ago    2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  5efd3037177a
osd.3                   u2004m01  running (33m)  27s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  8cf7a090f665
osd.4                   u2004m02  running (29m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  aa15c4cbad90
osd.5                   u2004m03  running (33m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  4d458c43b7c1
prometheus.u2004m01     u2004m01  running (33m)  27s ago    2d   2.18.1   docker.io/prom/prometheus:v2.18.1     de242295e225  78a1a1e02897
root@u2004m01:/# ceph osd out 5
marked out osd.5.
  • Step 1.4
    • status monitoring
ceph osd tree
Click to show the response
root@u2004m01:/# ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME          STATUS  REWEIGHT  PRI-AFF
-1         0.02939  root default
-3         0.00980      host u2004m01
 3    hdd  0.00980          osd.3          up   1.00000  1.00000
-5         0.00980      host u2004m02
 4    hdd  0.00980          osd.4          up   1.00000  1.00000
-7         0.00980      host u2004m03
 5    hdd  0.00980          osd.5          up         0  1.00000
root@u2004m01:/# ceph osd down osd.5;ceph osd purge 5 --yes-i-really-mean-it
marked down osd.5.
purged osd.5
  • Step 1.6
root@u2004m01:/# ceph osd stat
2 osds: 2 up (since 3m), 2 in (since 15m); epoch: e146
root@u2004m01:/# ceph osd safe-to-destroy 5
OSD(s) 5 are safe to destroy without reducing data durability.
  • Step 1.7
    • get daemons
ceph orch ps
Click to show the response
root@u2004m01:/# ceph orch ps
NAME                    HOST      STATUS         REFRESHED  AGE  VERSION  IMAGE NAME                            IMAGE ID      CONTAINER ID
alertmanager.u2004m01   u2004m01  running (33m)  27s ago    2d   0.20.0   docker.io/prom/alertmanager:v0.20.0   0881eb8f169f  2a6f4fef9fef
crash.u2004m01          u2004m01  running (33m)  27s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  0a81cca8e78e
crash.u2004m02          u2004m02  running (30m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  35b7a9cd890c
crash.u2004m03          u2004m03  running (33m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  3b933b3d80c3
grafana.u2004m01        u2004m01  running (33m)  27s ago    2d   6.7.4    docker.io/ceph/ceph-grafana:6.7.4     557c83e11646  16f93d09542b
mgr.u2004m01.jpcrnj     u2004m01  running (33m)  27s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  658d13bee4e7
mgr.u2004m02.nsqsny     u2004m02  running (30m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  1746e84a9660
mon.u2004m01            u2004m01  running (33m)  27s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  f3f41f3105e0
mon.u2004m02            u2004m02  running (30m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  93027592a64a
mon.u2004m03            u2004m03  running (33m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  fe70b82bbcfe
node-exporter.u2004m01  u2004m01  running (33m)  27s ago    2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  d80727f74056
node-exporter.u2004m02  u2004m02  running (30m)  29s ago    2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  a0231f03a0ee
node-exporter.u2004m03  u2004m03  running (33m)  29s ago    2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  5efd3037177a
osd.3                   u2004m01  running (33m)  27s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  8cf7a090f665
osd.4                   u2004m02  running (29m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  aa15c4cbad90
osd.5                   u2004m03  running (33m)  29s ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  4d458c43b7c1
prometheus.u2004m01     u2004m01  running (33m)  27s ago    2d   2.18.1   docker.io/prom/prometheus:v2.18.1     de242295e225  78a1a1e02897
  • Step 1.8
    • remove daemon
root@u2004m01:/# ceph orch daemon rm osd.5 --force
Removed osd.5 from host 'u2004m03'
  • Step 1.9
    • get daemons
ceph orch ps
Click to show the response
root@u2004m01:/# ceph orch ps
NAME                    HOST      STATUS         REFRESHED  AGE  VERSION  IMAGE NAME                            IMAGE ID      CONTAINER ID
alertmanager.u2004m01   u2004m01  running (98m)  14m ago    2d   0.20.0   docker.io/prom/alertmanager:v0.20.0   0881eb8f169f  2a6f4fef9fef
crash.u2004m01          u2004m01  running (98m)  14m ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  0a81cca8e78e
crash.u2004m02          u2004m02  running (94m)  14m ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  35b7a9cd890c
crash.u2004m03          u2004m03  running (98m)  9m ago     2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  3b933b3d80c3
grafana.u2004m01        u2004m01  running (98m)  14m ago    2d   6.7.4    docker.io/ceph/ceph-grafana:6.7.4     557c83e11646  16f93d09542b
mgr.u2004m01.jpcrnj     u2004m01  running (98m)  14m ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  658d13bee4e7
mgr.u2004m02.nsqsny     u2004m02  running (94m)  14m ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  1746e84a9660
mon.u2004m01            u2004m01  running (98m)  14m ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  f3f41f3105e0
mon.u2004m02            u2004m02  running (94m)  14m ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  93027592a64a
mon.u2004m03            u2004m03  running (98m)  9m ago     2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  fe70b82bbcfe
node-exporter.u2004m01  u2004m01  running (98m)  14m ago    2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  d80727f74056
node-exporter.u2004m02  u2004m02  running (94m)  14m ago    2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  a0231f03a0ee
node-exporter.u2004m03  u2004m03  running (98m)  9m ago     2d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  5efd3037177a
osd.3                   u2004m01  running (97m)  14m ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  8cf7a090f665
osd.4                   u2004m02  running (94m)  14m ago    2d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  aa15c4cbad90
prometheus.u2004m01     u2004m01  running (98m)  14m ago    2d   2.18.1   docker.io/prom/prometheus:v2.18.1     de242295e225  78a1a1e02897
  • Step 1.10
    • get status
ceph -s
Click to show the response
root@u2004m01:/# ceph -s
  cluster:
    id:     f143dbb0-5839-11ec-a64a-09fdbae816c9
    health: HEALTH_WARN
            Degraded data redundancy: 1 pg undersized
            OSD count 2 < osd_pool_default_size 3

  services:
    mon: 3 daemons, quorum u2004m01,u2004m02,u2004m03 (age 87m)
    mgr: u2004m01.jpcrnj(active, since 88m), standbys: u2004m02.nsqsny
    osd: 2 osds: 2 up (since 15m), 2 in (since 27m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   2.0 GiB used, 18 GiB / 20 GiB avail
    pgs:     1 active+undersized

Step 2: create the OSD

  • Step 2.1: detect the device name
    • for u2004m03 run the command
      • Note: sdb is what we were looking for
 lsblk
Click to show the response
yury@u2004m03:~$ lsblk
NAME                                                                                                  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0                                                                                                     2:0    1     4K  0 disk
loop0                                                                                                   7:0    0    55M  1 loop /snap/core18/1705
loop1                                                                                                   7:1    0     4K  1 loop /snap/bare/5
loop2                                                                                                   7:2    0  55,5M  1 loop /snap/core18/2253
loop3                                                                                                   7:3    0  61,9M  1 loop /snap/core20/1242
loop4                                                                                                   7:4    0  61,9M  1 loop /snap/core20/1270
loop5                                                                                                   7:5    0 240,8M  1 loop /snap/gnome-3-34-1804/24
loop6                                                                                                   7:6    0   219M  1 loop /snap/gnome-3-34-1804/77
loop7                                                                                                   7:7    0 247,9M  1 loop /snap/gnome-3-38-2004/87
loop8                                                                                                   7:8    0  62,1M  1 loop /snap/gtk-common-themes/1506
loop9                                                                                                   7:9    0  65,2M  1 loop /snap/gtk-common-themes/1519
loop10                                                                                                  7:10   0  42,2M  1 loop /snap/snapd/14066
loop11                                                                                                  7:11   0  54,2M  1 loop /snap/snap-store/558
loop12                                                                                                  7:12   0  49,8M  1 loop /snap/snap-store/433
loop14                                                                                                  7:14   0  43,3M  1 loop /snap/snapd/14295
sda                                                                                                     8:0    0    20G  0 disk
├─sda1                                                                                                  8:1    0  19,9G  0 part /
├─sda14                                                                                                 8:14   0     4M  0 part
└─sda15                                                                                                 8:15   0   106M  0 part /boot/efi
sdb                                                                                                     8:16   0    10G  0 disk
└─ceph--f19ca9f2--1d65--4174--a5c8--0c609166b01f-osd--block--4a21739e--d647--471d--86b6--7474aa8f1eb3 253:0    0    10G  0 lvm
sr0                                                                                                    11:0    1  1024M  0 rom
  • Step 2.2: prepare the device to be reused
    • for u2004m01 run the command
sudo cephadm shell
ceph orch device zap u2004m03 /dev/sdb --force
Click to show the response
root@u2004m01:/# ceph orch device zap u2004m03 /dev/sdb --force
/usr/bin/docker: stderr --> Zapping: /dev/sdb
/usr/bin/docker: stderr --> Zapping lvm member /dev/sdb. lv_path is /dev/ceph-f19ca9f2-1d65-4174-a5c8-0c609166b01f/osd-block-4a21739e-d647-471d-86b6-7474aa8f1eb3
/usr/bin/docker: stderr Running command: /usr/bin/dd if=/dev/zero of=/dev/ceph-f19ca9f2-1d65-4174-a5c8-0c609166b01f/osd-block-4a21739e-d647-471d-86b6-7474aa8f1eb3 bs=1M count=10 conv=fsync
/usr/bin/docker: stderr  stderr: 10+0 records in
/usr/bin/docker: stderr 10+0 records out
/usr/bin/docker: stderr 10485760 bytes (10 MB, 10 MiB) copied, 0.166168 s, 63.1 MB/s
/usr/bin/docker: stderr --> Only 1 LV left in VG, will proceed to destroy volume group ceph-f19ca9f2-1d65-4174-a5c8-0c609166b01f
/usr/bin/docker: stderr Running command: /usr/sbin/vgremove -v -f ceph-f19ca9f2-1d65-4174-a5c8-0c609166b01f
/usr/bin/docker: stderr  stderr:
/usr/bin/docker: stderr  stderr: Removing ceph--f19ca9f2--1d65--4174--a5c8--0c609166b01f-osd--block--4a21739e--d647--471d--86b6--7474aa8f1eb3 (253:0)
/usr/bin/docker: stderr  stderr: Archiving volume group "ceph-f19ca9f2-1d65-4174-a5c8-0c609166b01f" metadata (seqno 5).
/usr/bin/docker: stderr  stderr: Releasing logical volume "osd-block-4a21739e-d647-471d-86b6-7474aa8f1eb3"
/usr/bin/docker: stderr  stderr: Creating volume group backup "/etc/lvm/backup/ceph-f19ca9f2-1d65-4174-a5c8-0c609166b01f" (seqno 6).
/usr/bin/docker: stderr  stdout: Logical volume "osd-block-4a21739e-d647-471d-86b6-7474aa8f1eb3" successfully removed
/usr/bin/docker: stderr  stderr: Removing physical volume "/dev/sdb" from volume group "ceph-f19ca9f2-1d65-4174-a5c8-0c609166b01f"
/usr/bin/docker: stderr  stdout: Volume group "ceph-f19ca9f2-1d65-4174-a5c8-0c609166b01f" successfully removed
/usr/bin/docker: stderr Running command: /usr/bin/dd if=/dev/zero of=/dev/sdb bs=1M count=10 conv=fsync
/usr/bin/docker: stderr  stderr: 10+0 records in
/usr/bin/docker: stderr 10+0 records out
/usr/bin/docker: stderr  stderr: 10485760 bytes (10 MB, 10 MiB) copied, 0.134957 s, 77.7 MB/s
/usr/bin/docker: stderr --> Zapping successful for: <Raw Device: /dev/sdb>
  • Step 2.3: take a look at "sdb" again
    • for u2004m03 run the command
 lsblk
Click to show the response
yury@u2004m03:~$  lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
fd0       2:0    1     4K  0 disk
loop0     7:0    0    55M  1 loop /snap/core18/1705
loop1     7:1    0     4K  1 loop /snap/bare/5
loop2     7:2    0  55,5M  1 loop /snap/core18/2253
loop3     7:3    0  61,9M  1 loop /snap/core20/1242
loop4     7:4    0  61,9M  1 loop /snap/core20/1270
loop5     7:5    0 240,8M  1 loop /snap/gnome-3-34-1804/24
loop6     7:6    0   219M  1 loop /snap/gnome-3-34-1804/77
loop7     7:7    0 247,9M  1 loop /snap/gnome-3-38-2004/87
loop8     7:8    0  62,1M  1 loop /snap/gtk-common-themes/1506
loop9     7:9    0  65,2M  1 loop /snap/gtk-common-themes/1519
loop10    7:10   0  42,2M  1 loop /snap/snapd/14066
loop11    7:11   0  54,2M  1 loop /snap/snap-store/558
loop12    7:12   0  49,8M  1 loop /snap/snap-store/433
loop14    7:14   0  43,3M  1 loop /snap/snapd/14295
sda       8:0    0    20G  0 disk
├─sda1    8:1    0  19,9G  0 part /
├─sda14   8:14   0     4M  0 part
└─sda15   8:15   0   106M  0 part /boot/efi
sdb       8:16   0    10G  0 disk
sr0      11:0    1  1024M  0 rom
  • Step 2.4: trying to create the OSD
    • for u2004m01 run the commands
      • Note: this produces an error (shown in the response)
sudo cephadm shell
ceph-volume lvm create --filestore --data u2004m03:/dev/sdb
Click to show the response
root@u2004m01:/# ceph-volume lvm create --filestore --data u2004m03:/dev/sdb
usage: ceph-volume lvm create [-h] --data DATA [--data-size DATA_SIZE]
                              [--data-slots DATA_SLOTS] [--osd-id OSD_ID]
                              [--osd-fsid OSD_FSID]
                              [--cluster-fsid CLUSTER_FSID]
                              [--crush-device-class CRUSH_DEVICE_CLASS]
                              [--dmcrypt] [--no-systemd] [--bluestore]
                              [--block.db BLOCK_DB]
                              [--block.db-size BLOCK_DB_SIZE]
                              [--block.db-slots BLOCK_DB_SLOTS]
                              [--block.wal BLOCK_WAL]
                              [--block.wal-size BLOCK_WAL_SIZE]
                              [--block.wal-slots BLOCK_WAL_SLOTS]
                              [--filestore] [--journal JOURNAL]
                              [--journal-size JOURNAL_SIZE]
                              [--journal-slots JOURNAL_SLOTS]
ceph-volume lvm create: error: argument --data: invalid <ceph_volume.util.arg_validators.ValidDevice object at 0x7f719a4e9588> value: 'u2004m03:/dev/sdb'
  • Step 2.5: trying to create OSD without ceph-volume
root@u2004m01:/# ceph orch daemon add osd u2004m03:/dev/sdb
Created osd(s) 0 on host 'u2004m03'
root@u2004m01:/# ceph -s
  cluster:
    id:     f143dbb0-5839-11ec-a64a-09fdbae816c9
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum u2004m01,u2004m02,u2004m03 (age 2h)
    mgr: u2004m01.jpcrnj(active, since 2h), standbys: u2004m02.nsqsny
    osd: 3 osds: 3 up (since 74s), 3 in (since 74s)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.1 GiB used, 27 GiB / 30 GiB avail
    pgs:     1 active+clean

Note: trying to run the ceph-volume command for u2004m01 also throws an error

  • we repeated the destroy and preparation steps for u2004m01
  • everything is ready to run the ceph-volume lvm create --filestore --data /dev/sdb command
Click to show the response
root@u2004m01:/# ceph-volume lvm create --filestore --data /dev/sdb
Running command: /usr/bin/ceph-authtool --gen-print-key
Running command: /usr/bin/ceph --cluster ceph --name client.bootstrap-osd --keyring /var/lib/ceph/bootstrap-osd/ceph.keyring -i - osd new 9dd324a5-1319-492c-83e0-8baa27502daf
 stderr: 2021-12-11T15:31:55.310+0000 7f517ffff700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
 stderr: 2021-12-11T15:31:55.310+0000 7f517ffff700 -1 AuthRegistry(0x7f5180059b30) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
 stderr: 2021-12-11T15:31:55.314+0000 7f517ffff700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
 stderr: 2021-12-11T15:31:55.314+0000 7f517ffff700 -1 AuthRegistry(0x7f518005b860) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
 stderr: 2021-12-11T15:31:55.314+0000 7f517ffff700 -1 auth: unable to find a keyring on /var/lib/ceph/bootstrap-osd/ceph.keyring: (2) No such file or directory
 stderr: 2021-12-11T15:31:55.314+0000 7f517ffff700 -1 AuthRegistry(0x7f517fffdf90) no keyring found at /var/lib/ceph/bootstrap-osd/ceph.keyring, disabling cephx
 stderr: 2021-12-11T15:31:55.314+0000 7f517e7fc700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
 stderr: 2021-12-11T15:31:55.314+0000 7f517f7fe700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
 stderr: 2021-12-11T15:31:55.314+0000 7f517effd700 -1 monclient(hunting): handle_auth_bad_method server allowed_methods [2] but i only support [1]
 stderr: 2021-12-11T15:31:55.314+0000 7f517ffff700 -1 monclient: authenticate NOTE: no keyring found; disabled cephx authentication
 stderr: [errno 13] RADOS permission denied (error connecting to the cluster)
-->  RuntimeError: Unable to create a new OSD id

ENABLE CEPH CLI

  • read the article ENABLE CEPH CLI
  • to run ceph ... commands without first running the sudo cephadm shell command
sudo cephadm add-repo --release pacific
sudo cephadm install ceph-common
  • now we can run commands without first entering sudo cephadm shell:
yury@u2004m01:~$ sudo ceph version
ceph version 15.2.13 (c44bc49e7a57a87d84dfff2a077a2058aa2172e2) octopus (stable)
yury@u2004m01:~$ sudo ceph status
  cluster:
    id:     f143dbb0-5839-11ec-a64a-09fdbae816c9
    health: HEALTH_OK
...

ENABLE CEPH CLI on u2004m02

U2004m02 IP address: 192.168.100.16

  • Step 1
    • for u2004m02 create the folder
sudo mkdir  /etc/ceph
  • Step 2
    • for u2004m01
      • copy the files (we need only two of them: ceph.client.admin.keyring and ceph.conf)
      • add the _admin label to u2004m02
sudo scp -r /etc/ceph/* root@192.168.100.16:/etc/ceph/
sudo ceph orch host label add u2004m02 _admin
  • Step 3
    • for u2004m02
      • install cephadm
      • run a quick test
yury@u2004m02:~$ sudo apt install -y cephadm
...
yury@u2004m02:~$ sudo cephadm shell
Inferring fsid f143dbb0-5839-11ec-a64a-09fdbae816c9
Inferring config /var/lib/ceph/f143dbb0-5839-11ec-a64a-09fdbae816c9/mon.u2004m02/config
Using recent ceph image ceph/ceph@sha256:056637972a107df4096f10951e4216b21fcd8ae0b9fb4552e628d35df3f61139
root@u2004m02:/# ceph -s
  cluster:
    id:     f143dbb0-5839-11ec-a64a-09fdbae816c9
    health: HEALTH_OK
...

REMOVING HOSTS

it does not work as described in the article

  • read the article REMOVING HOSTS

  • let's remove u2004m03

  • Step 1

    • for u2004m01 run the commands
      • Note: all OSDs on the host will be scheduled for removal
sudo ceph orch host drain u2004m03
sudo ceph orch osd rm status
sudo ceph orch ps u2004m03
sudo ceph orch host rm u2004m03
  • Note: drain is not a valid command in this Ceph version (see the response below)
yury@u2004m01:~$ sudo ceph orch host drain u2004m03
[sudo] password for yury:
no valid command found; 7 closest matches:
orch host add <hostname> [<addr>] [<labels>...]
orch host rm <hostname>
orch host set-addr <hostname> <addr>
orch host ls [plain|json|json-pretty|yaml]
orch host label add <hostname> <label>
orch host label rm <hostname> <label>
orch host ok-to-stop <hostname>
Error EINVAL: invalid command
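  • Note: a possible manual drain for this Ceph version (a sketch built from the orch commands that are available; osd.0 is the OSD placed on u2004m03, as the ceph osd tree output below shows):
sudo ceph orch osd rm 0
sudo ceph orch osd rm status
sudo ceph orch host rm u2004m03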
  • Step 2
    • for u2004m01 run the commands
sudo ceph orch host rm u2004m03
sudo ceph orch ps
Click to show the response
yury@u2004m01:~$ sudo ceph orch ps
NAME                    HOST      STATUS         REFRESHED  AGE  VERSION  IMAGE NAME                            IMAGE ID      CONTAINER ID
alertmanager.u2004m01   u2004m01  running (76m)  55s ago    3d   0.20.0   docker.io/prom/alertmanager:v0.20.0   0881eb8f169f  fe41e90201e2
crash.u2004m01          u2004m01  running (76m)  55s ago    3d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  8bfa5f6816a0
crash.u2004m02          u2004m02  running (69m)  56s ago    3d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  ba0c63d8152b
grafana.u2004m01        u2004m01  running (76m)  55s ago    3d   6.7.4    docker.io/ceph/ceph-grafana:6.7.4     557c83e11646  e0eac8ed1ea9
mgr.u2004m01.jpcrnj     u2004m01  running (76m)  55s ago    3d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  004bbcd0b86f
mgr.u2004m02.nsqsny     u2004m02  running (69m)  56s ago    3d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  098d58509729
mon.u2004m01            u2004m01  running (76m)  55s ago    3d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  b71524a11c2f
node-exporter.u2004m01  u2004m01  running (76m)  55s ago    3d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  58d1ebdcfbd7
node-exporter.u2004m02  u2004m02  running (69m)  56s ago    3d   0.18.1   docker.io/prom/node-exporter:v0.18.1  e5a616e4b9cf  1a70a13614e4
osd.1                   u2004m01  running (56m)  55s ago    56m  15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  7f0febed918a
osd.4                   u2004m02  running (69m)  56s ago    3d   15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  53e758c29834
prometheus.u2004m01     u2004m01  running (11m)  55s ago    3d   2.18.1   docker.io/prom/prometheus:v2.18.1     de242295e225  de63417ce262
  • but the cluster status still shows the daemons and the OSD from u2004m03
Click to show the response
yury@u2004m01:~$ sudo ceph status
  cluster:
    id:     f143dbb0-5839-11ec-a64a-09fdbae816c9
    health: HEALTH_WARN
            3 stray daemon(s) not managed by cephadm
            1 stray host(s) with 2 daemon(s) not managed by cephadm

  services:
    mon: 2 daemons, quorum u2004m01,u2004m03 (age 10m)
    mgr: u2004m01.jpcrnj(active, since 67m), standbys: u2004m02.nsqsny
    osd: 3 osds: 3 up (since 54m), 3 in (since 62m)

  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.1 GiB used, 27 GiB / 30 GiB avail
    pgs:     1 active+clean

yury@u2004m01:~$ sudo ceph osd tree
ID  CLASS  WEIGHT   TYPE NAME          STATUS  REWEIGHT  PRI-AFF
-1         0.02939  root default
-3         0.00980      host u2004m01
 1    hdd  0.00980          osd.1          up   1.00000  1.00000
-5         0.00980      host u2004m02
 4    hdd  0.00980          osd.4          up   1.00000  1.00000
-7         0.00980      host u2004m03
 0    hdd  0.00980          osd.0          up   1.00000  1.00000
yury@u2004m01:~$  sudo ceph osd status
ID  HOST       USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE
 0  u2004m03  1052M  9183M      0        0       0        0   exists,up
 1  u2004m01  1052M  9183M      0        0       0        0   exists,up
 4  u2004m02  1052M  9183M      0        0       0        0   exists,up
  • Step 3
    • for u2004m01 run the commands
yury@u2004m01:~$ sudo ceph orch host add u2004m03 192.168.100.18
Added host 'u2004m03'

Compare ceph config

  • for u2004m01 run the commands and compare the result
sudo cephadm shell
cat /etc/ceph/ceph.conf
exit
sudo cat /etc/ceph/ceph.conf
Click to show the response
root@u2004m01:/# cat /etc/ceph/ceph.conf
# minimal ceph.conf for f143dbb0-5839-11ec-a64a-09fdbae816c9
[global]
        fsid = f143dbb0-5839-11ec-a64a-09fdbae816c9
        mon_host = [v2:192.168.100.15:3300/0,v1:192.168.100.15:6789/0] [v2:192.168.100.16:3300/0,v1:192.168.100.16:6789/0] [v2:192.168.100.18:3300/0,v1:192.168.100.18:6789/0]
root@u2004m01:/# exit
exit
yury@u2004m01:~$ sudo cat /etc/ceph/ceph.conf
# minimal ceph.conf for f143dbb0-5839-11ec-a64a-09fdbae816c9
[global]
        fsid = f143dbb0-5839-11ec-a64a-09fdbae816c9
        mon_host = [v2:192.168.100.15:3300/0,v1:192.168.100.15:6789/0]

Create and delete replicated pool

sudo ceph osd pool ls
sudo ceph osd pool create TestPool 128 128 replicated
sudo ceph osd pool ls
sudo ceph osd pool stats TestPool
sudo ceph osd dump | grep 'replicated size'
sudo ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
sudo ceph osd pool rm TestPool TestPool --yes-i-really-really-mean-it
sudo ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
sudo ceph osd pool ls
Click to show the response
yury@u2004m01:~$ sudo ceph osd pool ls
device_health_metrics
yury@u2004m01:~$ sudo ceph osd pool create TestPool 128 128 replicated
pool 'TestPool' created
yury@u2004m01:~$ sudo ceph osd pool ls
device_health_metrics
TestPool
yury@u2004m01:~$ sudo ceph osd pool stats TestPool
pool TestPool id 6
  nothing is going on
yury@u2004m01:~$ sudo ceph osd dump | grep 'replicated size'
pool 2 'device_health_metrics' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode on last_change 198 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 6 'TestPool' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 47 pgp_num 47 pg_num_target 32 pgp_num_target 32 autoscale_mode on last_change 720 lfor 0/720/718 flags hashpspool stripe_width 0
yury@u2004m01:~$ sudo ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
mon.u2004m01: mon_allow_pool_delete = 'true'
mon.u2004m01: {}
mon.u2004m03: mon_allow_pool_delete = 'true'
mon.u2004m03: {}
mon.u2004m02: mon_allow_pool_delete = 'true'
mon.u2004m02: {}
yury@u2004m01:~$ sudo ceph osd pool rm TestPool TestPool --yes-i-really-really-mean-it
pool 'TestPool' removed
yury@u2004m01:~$ sudo ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
mon.u2004m01: mon_allow_pool_delete = 'false'
mon.u2004m01: {}
mon.u2004m03: mon_allow_pool_delete = 'false'
mon.u2004m03: {}
mon.u2004m02: mon_allow_pool_delete = 'false'
mon.u2004m02: {}
yury@u2004m01:~$ sudo ceph osd pool ls
device_health_metrics
  • Note: use the following replacements
# instead of
sudo ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'
# you should use
sudo ceph config set mon mon_allow_pool_delete true

# instead of
sudo ceph tell mon.\* injectargs '--mon-allow-pool-delete=false'
# you should use
sudo ceph config set mon mon_allow_pool_delete false

Create and Remove CephFS

Note: the commands below are not valid for cephadm (the error is shown in the response)

sudo ceph osd pool create CephFsMetadataPool 10 10 replicated
sudo ceph osd pool create CephFsDataPool 30 30 replicated
sudo ceph fs create TestCephFs CephFsMetadataPool CephFsDataPool
Click to show the response
yury@u2004m01:~$ sudo ceph osd pool create CephFsMetadataPool 10 10 replicated
pool 'CephFsMetadataPool' created
yury@u2004m01:~$ sudo ceph osd pool create CephFsDataPool 30 30 replicated
pool 'CephFsDataPool' created
yury@u2004m01:~$ sudo ceph fs create TestCephFs CephFsMetadataPool CephFsDataPool
no valid command found; 10 closest matches:
fs status [<fs>]
fs volume ls
fs volume create <name> [<placement>]
fs volume rm <vol_name> [<yes-i-really-mean-it>]
fs subvolumegroup ls <vol_name>
fs subvolumegroup create <vol_name> <group_name> [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>]
fs subvolumegroup rm <vol_name> <group_name> [--force]
fs subvolume ls <vol_name> [<group_name>]
fs subvolume create <vol_name> <sub_name> [<size:int>] [<group_name>] [<pool_layout>] [<uid:int>] [<gid:int>] [<mode>] [--namespace-isolated]
fs subvolume rm <vol_name> <sub_name> [<group_name>] [--force] [--retain-snapshots]
Error EINVAL: invalid command

Create CephFS

  • run the command
sudo ceph fs volume create TestCephFs
  • it automatically deploys two MDS (metadata server) daemons
Click to show the response of [sudo ceph -s] and [sudo ceph orch ps] and [sudo ceph fs volume ls]
yury@u2004m01:~$ sudo ceph -s
  cluster:
    id:     f143dbb0-5839-11ec-a64a-09fdbae816c9
    health: HEALTH_WARN
            1 pool(s) have non-power-of-two pg_num

  services:
    mon: 3 daemons, quorum u2004m01,u2004m03,u2004m02 (age 67m)
    mgr: u2004m01.jpcrnj(active, since 68m), standbys: u2004m02.nsqsny
    mds: TestCephFs:1 {0=TestCephFs.u2004m02.sqisfd=up:active} 1 up:standby
    osd: 3 osds: 3 up (since 67m), 3 in (since 23h)

  data:
    pools:   5 pools, 127 pgs
    objects: 22 objects, 2.2 KiB
    usage:   3.2 GiB used, 27 GiB / 30 GiB avail
    pgs:     127 active+clean
yury@u2004m01:~$ sudo ceph orch ps
NAME                            HOST      STATUS         REFRESHED  AGE  VERSION  IMAGE NAME                            IMAGE ID      CONTAINER ID
...
mds.TestCephFs.u2004m01.psguqq  u2004m01  running (23m)  2m ago     23m  15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  396ae7444444
mds.TestCephFs.u2004m02.sqisfd  u2004m02  running (23m)  2m ago     23m  15.2.13  docker.io/ceph/ceph:v15               2cf504fded39  9e4bcece9060
...

yury@u2004m01:~$ sudo ceph fs volume ls
[
    {
        "name": "TestCephFs"
    }
]
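  • Note: as an optional follow-up (hedged; not part of the original steps), the MDS placement created by ceph fs volume create can be adjusted through the orchestrator, for example:
sudo ceph orch apply mds TestCephFs --placement="2 u2004m01 u2004m02"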

Remove CephFS

  • run the commands
sudo ceph config set mon mon_allow_pool_delete true
sudo ceph fs volume rm TestCephFs --yes-i-really-mean-it
sudo ceph config set mon mon_allow_pool_delete false
  • check the result
sudo ceph -s
sudo ceph orch ps
sudo ceph fs volume ls

Create and Delete User

  • Step 1: list existing users and inspect client.admin
sudo ceph auth ls
sudo ceph auth get client.admin
  • Step 2: create keyring file for the user
sudo ceph auth get-or-create client.testcephfs mon 'allow r' mgr 'allow rw' -o /etc/ceph/ceph.client.testcephfs.keyring
  • Step 3: set the caps in the keyring file (the user was already created in the cluster by Step 2; ceph-authtool modifies only the local keyring file)
sudo ceph-authtool -n client.testcephfs --cap mon 'allow r' --cap mgr 'allow rw' /etc/ceph/ceph.client.testcephfs.keyring
sudo ceph auth get client.testcephfs
  • Step 4: delete user
sudo ceph auth del client.testcephfs
sudo ceph auth ls
Click to show the response
yury@u2004m01:~$ sudo ceph auth get-or-create client.testcephfs mon 'allow r' mgr 'allow rw' -o /etc/ceph/ceph.client.testcephfs.keyring
yury@u2004m01:~$ sudo ceph-authtool -n client.testcephfs --cap mon 'allow r' --cap mgr 'allow rw' /etc/ceph/ceph.client.testcephfs.keyring
yury@u2004m01:~$ sudo ceph auth get client.testcephfs
exported keyring for client.testcephfs
[client.testcephfs]
        key = AQDfVbdhqZVlCRAAMNpNh3xsCD1SlJhszVWBnw==
        caps mgr = "allow rw"
        caps mon = "allow r"
yury@u2004m01:~$ sudo ceph auth del client.testcephfs
updated
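  • Note: a hedged addition (not in the original steps): to change the capabilities of an existing user on the cluster side, ceph auth caps can be used instead of editing the keyring file with ceph-authtool, for example:
sudo ceph auth caps client.testcephfs mon 'allow r' mgr 'allow rw'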

Create and Delete User for CephFS

yury@u2004m01:~$ sudo ceph fs volume ls
[
    {
        "name": "TestCephFs"
    }
]
  • Step 1: create new user
yury@u2004m01:~$ sudo ceph fs authorize TestCephFs client.testcephfsnewuser / r /temp rw
[client.testcephfsnewuser]
        key = AQB7cLdhmUToKBAAxkBuTn+L4VndL692gu0nPw==
yury@u2004m01:~$ sudo ceph auth get client.testcephfsnewuser
exported keyring for client.testcephfsnewuser
[client.testcephfsnewuser]
        key = AQB7cLdhmUToKBAAxkBuTn+L4VndL692gu0nPw==
        caps mds = "allow r, allow rw path=/temp"
        caps mon = "allow r"
        caps osd = "allow rw tag cephfs data=TestCephFs"
  • Step 2: delete created new user
yury@u2004m01:~$ sudo ceph auth del client.testcephfsnewuser
updated

Mount CephFS using the kernel driver

We have a Ceph cluster with CephFS deployed (CephFS instance named TestCephFs)

We need an additional virtual machine to deploy the CephFS client

  • let's call such a virtual machine UbuntuAnsible

  • Step 1: check if mount.ceph is ready

    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ stat /sbin/mount.ceph
stat: cannot stat '/sbin/mount.ceph': No such file or directory
  • Step 2: install ceph-common.
    • for UbuntuAnsible run the command
sudo apt-get install -y ceph-common
  • Step 3: run stat /sbin/mount.ceph command again
    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ sudo stat /sbin/mount.ceph
[sudo] password for yury:
  File: /sbin/mount.ceph
  Size: 190888          Blocks: 376        IO Block: 4096   regular file
Device: 801h/2049d      Inode: 17558       Links: 1
Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2021-12-13 22:01:59.000000000 +0300
Modify: 2021-09-06 11:41:31.000000000 +0300
Change: 2021-12-13 22:02:28.556479761 +0300
 Birth: -
  • Step 4: create cephFS user
    • for u2004m01 run the command
yury@u2004m01:~$ sudo ceph fs authorize TestCephFs client.testcephfsuser / rw
[client.testcephfsuser]
        key = AQBPrbdh0I9gMRAA2Qrv9/DNCNIr5m+klcV+IQ==
  • Step 5: prepare keyring file
    • for u2004m01 run the command
yury@u2004m01:~$ sudo ceph auth get client.testcephfsuser -o /etc/ceph/ceph.client.testcephfsuser.keyring
exported keyring for client.testcephfsuser
  • Step 6: copy config file
    • for UbuntuAnsible run the command
      • Note: Device name = u2004m01, ip = 192.168.100.15
yury@UbuntuAnsible:~$ sudo mkdir -p -m 755 /etc/ceph
yury@UbuntuAnsible:~$ ssh [email protected] "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
The authenticity of host '192.168.100.15 (192.168.100.15)' can't be established.
ECDSA key fingerprint is SHA256:jpijUO/6Je+Ad/+DWEoem+tQySpvq1DlNQaurMORaVU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.15' (ECDSA) to the list of known hosts.
[email protected]'s password:
# minimal ceph.conf for f143dbb0-5839-11ec-a64a-09fdbae816c9
[global]
        fsid = f143dbb0-5839-11ec-a64a-09fdbae816c9
        mon_host = [v2:192.168.100.15:3300/0,v1:192.168.100.15:6789/0] [v2:192.168.100.16:3300/0,v1:192.168.100.16:6789/0] [v2:192.168.100.18:3300/0,v1:192.168.100.18:6789/0]
  • Step 7: copy keyring file
    • for UbuntuAnsible run the command
      • Note: Device name = u2004m01, ip = 192.168.100.15
yury@UbuntuAnsible:~$ sudo scp -r [email protected]:/etc/ceph/ceph.client.testcephfsuser.keyring  /etc/ceph/ceph.client.testcephfsuser.keyring
The authenticity of host '192.168.100.15 (192.168.100.15)' can't be established.
ECDSA key fingerprint is SHA256:jpijUO/6Je+Ad/+DWEoem+tQySpvq1DlNQaurMORaVU.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '192.168.100.15' (ECDSA) to the list of known hosts.
[email protected]'s password:
ceph.client.testcephfsuser.keyring                                                                                  100%  167   110.7KB/s   00:00
  • Step 8: Ensure that the keyring has appropriate permissions
    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ sudo chmod 600 /etc/ceph/ceph.client.testcephfsuser.keyring
  • Step 9: mount
    • for UbuntuAnsible run the commands below: the first attempts use the newer user@fsid.fsname=/path device syntax, which the installed mount.ceph does not accept, while the last command uses the legacy monitor-address syntax and succeeds
yury@UbuntuAnsible:~$ sudo mkdir /mnt/testcephfolder
yury@UbuntuAnsible:~$ sudo cat /etc/ceph/ceph.client.testcephfsuser.keyring
[client.testcephfsuser]
        key = AQBPrbdh0I9gMRAA2Qrv9/DNCNIr5m+klcV+IQ==
        caps mds = "allow rw"
        caps mon = "allow r"
        caps osd = "allow rw tag cephfs data=TestCephFs"

yury@UbuntuAnsible:~$ sudo cat /etc/ceph/ceph.conf
# minimal ceph.conf for f143dbb0-5839-11ec-a64a-09fdbae816c9
[global]
        fsid = f143dbb0-5839-11ec-a64a-09fdbae816c9
        mon_host = [v2:192.168.100.15:3300/0,v1:192.168.100.15:6789/0] [v2:192.168.100.16:3300/0,v1:192.168.100.16:6789/0] [v2:192.168.100.18:3300/0,v1:192.168.100.18:6789/0]

yury@UbuntuAnsible:~$ sudo mount -t ceph client.testcephfsuser@f143dbb0-5839-11ec-a64a-09fdbae816c9.TestCephFs=/ /mnt/testcephfolder
source mount path was not specified
unable to parse mount source: -22
yury@UbuntuAnsible:~$ sudo mount -t ceph client.testcephfsuser@f143dbb0-5839-11ec-a64a-09fdbae816c9.TestCephFs=/ /mnt/testcephfolder -o mon_addr=192.168.100.15:6789,secret=AQBPrbdh0I9gMRAA2Qrv9/DNCNIr5m+klcV+IQ==
source mount path was not specified
unable to parse mount source: -22
yury@UbuntuAnsible:~$ sudo mount -t ceph [email protected]=/ /mnt/testcephfolder -o mon_addr=192.168.100.15:6789,secret=AQBPrbdh0I9gMRAA2Qrv9/DNCNIr5m+klcV+IQ==
source mount path was not specified
unable to parse mount source: -22
yury@UbuntuAnsible:~$ sudo mount -t ceph 192.168.100.15:6789,192.168.100.16:6789,192.168.100.18:6789:/ /mnt/testcephfolder -o name=testcephfsuser,fs=TestCephFs
[sudo] password for yury:
yury@UbuntuAnsible:~$ df -h
Filesystem                                                     Size  Used Avail Use% Mounted on
/dev/sda1                                                       12G  9,9G  1,6G  87% /
...
192.168.100.15:6789,192.168.100.16:6789,192.168.100.18:6789:/  8,5G     0  8,5G   0% /mnt/testcephfolder
  • Step 10: unmount
    • for UbuntuAnsible run the command
 sudo umount /mnt/testcephfolder
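To make the kernel mount survive a reboot, the same legacy device syntax can be placed in /etc/fstab; a minimal sketch, assuming the keyring stays in /etc/ceph so that mount.ceph can find it via the name= option (one line):

192.168.100.15:6789,192.168.100.16:6789,192.168.100.18:6789:/  /mnt/testcephfolder  ceph  name=testcephfsuser,fs=TestCephFs,noatime,_netdev  0  0

After editing the file, sudo mount -a should mount the share again without any options on the command line.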

Mount CephFS using FUSE client

sudo apt install ceph-fuse
  • follow the instructions in the article; a short sketch of a FUSE mount is given below
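As a sketch (check the article for the exact options of your release), ceph-fuse can reuse the ceph.conf and keyring that were copied for the kernel mount; --client_fs selects the filesystem on recent releases, older ones use --client_mds_namespace; the mount point name below is arbitrary:

sudo mkdir -p /mnt/testcephfusefolder
sudo ceph-fuse -n client.testcephfsuser --client_fs TestCephFs /mnt/testcephfusefolder
df -h /mnt/testcephfusefolder              # should show the CephFS capacity
sudo umount /mnt/testcephfusefolder        # or: sudo fusermount -u /mnt/testcephfusefolder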

Mount RBD

We have a running Ceph cluster (the same cluster as above; CephFS is not required for this section)

We need an additional virtual machine to deploy the RBD client

  • let's call such a virtual machine UbuntuAnsible

  • Step 1: check if ceph is ready

    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ sudo ceph -h
Command 'ceph' not found, but can be installed with:
sudo apt install ceph-common
  • Step 2: install ceph-common.
    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ sudo apt update
yury@UbuntuAnsible:~$ sudo apt install ceph-common
  • Step 3: create a pool and initialize it for RBD
    • for u2004m01 run the commands (the placement-group count is recommended to be a power of 2)
yury@u2004m01:~$ sudo ceph osd pool ls
yury@u2004m01:~$ sudo ceph osd pool create rdbpool 32 32 replicated
yury@u2004m01:~$ sudo ceph osd pool ls
yury@u2004m01:~$ sudo rbd pool init rdbpool
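rbd pool init tags the pool for use by RBD; a quick way to verify the tag (output format is approximate):

sudo ceph osd pool application get rdbpool
# expected roughly:
# {
#     "rbd": {}
# }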
  • Step 4: create a user with read/write access
    • for u2004m01 run the command
yury@u2004m01:~$ sudo ceph auth get-or-create client.rdbpoolusr mon 'profile rbd' osd 'profile rbd pool=rdbpool' mgr 'profile rbd pool=rdbpool'
[client.rdbpoolusr]
        key = AQD3hrhhX+fdAhAAbAfWStzI1+Abz5qyB9wrVw==
yury@u2004m01:~$ sudo ceph auth get client.rdbpoolusr
exported keyring for client.rdbpoolusr
[client.rdbpoolusr]
        key = AQD3hrhhX+fdAhAAbAfWStzI1+Abz5qyB9wrVw==
        caps mgr = "profile rbd pool=rdbpool"
        caps mon = "profile rbd"
        caps osd = "profile rbd pool=rdbpool"
  • Step 5: create a block device image
    • for u2004m01 run the commands
yury@u2004m01:~$ sudo rbd create --size 2048 rdbpool/rdbpoolimg
yury@u2004m01:~$ sudo rbd ls rdbpool
rdbpoolimg
yury@u2004m01:~$ sudo rbd info rdbpool/rdbpoolimg
rbd image 'rdbpoolimg':
        size 2 GiB in 512 objects
        order 22 (4 MiB objects)
        snapshot_count: 0
        id: 36c3fbfacbabf
        block_name_prefix: rbd_data.36c3fbfacbabf
        format: 2
        features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
        op_features:
        flags:
        create_timestamp: Tue Dec 14 15:05:40 2021
        access_timestamp: Tue Dec 14 15:05:40 2021
        modify_timestamp: Tue Dec 14 15:05:40 2021
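Note that older kernel clients do not support every image feature; if a later rbd map fails with an unsupported-features error, a common workaround is to create the image with only the layering feature. A sketch with a hypothetical image name:

sudo rbd create --size 2048 --image-feature layering rdbpool/rdbpoolimg-compat
sudo rbd info rdbpool/rdbpoolimg-compat    # features should list only: layering
sudo rbd rm rdbpool/rdbpoolimg-compat      # remove the example image again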
  • Step 6: resize the block device
    • for u2004m01 run the commands
yury@u2004m01:~$ sudo rbd resize --size 3072 rdbpool/rdbpoolimg
Resizing image: 100% complete...done.
yury@u2004m01:~$ sudo rbd resize --size 2048 rdbpool/rdbpoolimg --allow-shrink
Resizing image: 100% complete...done.
  • Step 7: delete the block device image (optional)
    • for u2004m01 run one of the commands below: rbd rm deletes the image immediately, rbd trash mv moves it to the trash; skip this step if you plan to map the image in the steps that follow
sudo rbd rm rdbpool/rdbpoolimg
sudo rbd trash mv rdbpool/rdbpoolimg
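If the image was moved to the trash, it can still be listed and restored; a sketch (the image id is whatever rbd trash ls prints):

sudo rbd trash ls rdbpool                  # prints: <image-id> rdbpoolimg
sudo rbd trash restore rdbpool/<image-id>  # substitute the id printed above
sudo rbd ls rdbpool                        # rdbpoolimg is back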
  • Step 8: prepare keyring file
    • for u2004m01 run the command
yury@u2004m01:~$ sudo ceph auth get client.rdbpoolusr -o /etc/ceph/ceph.client.rdbpoolusr.keyring
exported keyring for client.rdbpoolusr
  • Step 9: copy config file
    • for UbuntuAnsible run the command
      • Note: Device name = u2004m01, ip = 192.168.100.15
yury@UbuntuAnsible:~$ sudo ssh-keygen
yury@UbuntuAnsible:~$ ssh-keygen
yury@UbuntuAnsible:~$ ssh [email protected] "sudo ceph config generate-minimal-conf" | sudo tee /etc/ceph/ceph.conf
[email protected]'s password:
Permission denied, please try again.
[email protected]'s password:
# minimal ceph.conf for f143dbb0-5839-11ec-a64a-09fdbae816c9
[global]
        fsid = f143dbb0-5839-11ec-a64a-09fdbae816c9
        mon_host = [v2:192.168.100.15:3300/0,v1:192.168.100.15:6789/0] [v2:192.168.100.16:3300/0,v1:192.168.100.16:6789/0] [v2:192.168.100.18:3300/0,v1:192.168.100.18:6789/0]

  • Step 10: copy keyring file
    • for UbuntuAnsible run the command
      • Note: Device name = u2004m01, ip = 192.168.100.15
yury@UbuntuAnsible:~$ sudo scp -r [email protected]:/etc/ceph/ceph.client.rdbpoolusr.keyring  /etc/ceph/ceph.client.rdbpoolusr.keyring
[email protected]'s password:
ceph.client.rdbpoolusr.keyring                                                                             100%  172    99.1KB/s   00:00
  • Step 11: modify /etc/ceph/rbdmap
    • for UbuntuAnsible with sudo nano /etc/ceph/rbdmap command modify the file as below
yury@UbuntuAnsible:~$ sudo cat /etc/ceph/rbdmap
# RbdDevice             Parameters
#poolname/imagename     id=client,keyring=/etc/ceph/ceph.client.keyring
rdbpool/rdbpoolimg      id=rdbpoolusr,keyring=/etc/ceph/ceph.client.rdbpoolusr.keyring
  • Step 12: map
    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ sudo rbdmap map
  • Step 13: enable rbdmap
    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ sudo systemctl enable rbdmap.service
Synchronizing state of rbdmap.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable rbdmap


yury@UbuntuAnsible:~$ sudo rbd showmapped
id  pool     namespace  image       snap  device
0   rdbpool             rdbpoolimg  -     /dev/rbd0
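Besides /dev/rbd0, the udev rules shipped with ceph-common create a stable symlink named after the pool and image, which is safer to use in scripts and /etc/fstab; a quick check (output approximate):

ls -l /dev/rbd/rdbpool/
# lrwxrwxrwx 1 root root ... rdbpoolimg -> ../../rbd0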
  • Step 14: format /dev/rbd0
    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ sudo mkfs -t ext4  /dev/rbd0
mke2fs 1.45.5 (07-Jan-2020)
Discarding device blocks: done
Creating filesystem with 524288 4k blocks and 131072 inodes
Filesystem UUID: c11ee4f8-8c1f-4c08-8b1f-359ac0fb235f
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
  • Step 15: mount /dev/rbd0
    • for UbuntuAnsible run the command
yury@UbuntuAnsible:~$ sudo mount /dev/rbd0 /mnt
yury@UbuntuAnsible:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            924M     0  924M   0% /dev
tmpfs           192M  1,4M  191M   1% /run
/dev/sda1        12G  9,4G  2,1G  82% /
tmpfs           959M     0  959M   0% /dev/shm
tmpfs           5,0M     0  5,0M   0% /run/lock
tmpfs           959M     0  959M   0% /sys/fs/cgroup
/dev/loop0      128K  128K     0 100% /snap/bare/5
/dev/loop2       56M   56M     0 100% /snap/core18/2253
/dev/loop4      241M  241M     0 100% /snap/gnome-3-34-1804/24
/dev/loop3       62M   62M     0 100% /snap/core20/1270
/dev/loop1       55M   55M     0 100% /snap/core18/1705
/dev/loop6      219M  219M     0 100% /snap/gnome-3-34-1804/77
/dev/loop7      248M  248M     0 100% /snap/gnome-3-38-2004/87
/dev/loop8       63M   63M     0 100% /snap/gtk-common-themes/1506
/dev/loop5       62M   62M     0 100% /snap/core20/1242
/dev/loop9       66M   66M     0 100% /snap/gtk-common-themes/1519
/dev/loop10      50M   50M     0 100% /snap/snap-store/433
/dev/loop11      55M   55M     0 100% /snap/snap-store/558
/dev/loop13      44M   44M     0 100% /snap/snapd/14295
/dev/loop12      43M   43M     0 100% /snap/snapd/14066
/dev/sda15      105M  6,6M   98M   7% /boot/efi
tmpfs           192M   20K  192M   1% /run/user/125
tmpfs           192M  8,0K  192M   1% /run/user/1000
/dev/rbd0       2,0G  6,0M  1,8G   1% /mnt
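If the image is later grown on the cluster side (as in Step 6), the ext4 filesystem can be extended online on the client without unmounting; a sketch:

# on u2004m01: grow the image to 3 GiB
sudo rbd resize --size 3072 rdbpool/rdbpoolimg
# on UbuntuAnsible: grow the mounted filesystem to match
sudo resize2fs /dev/rbd0
df -h /mnt                                 # should now report roughly 3.0G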
  • Step 16.1: Remove all (client side)
    • for UbuntuAnsible run the commands
sudo umount /mnt
sudo rbdmap unmap
sudo systemctl disable rbdmap.service
  • Step 16.2: Remove all (cluster side)
    • for u2004m01 run the commands (use either rbd rm or rbd trash mv for the image, not both)
sudo ceph auth del client.rdbpoolusr
sudo rbd rm rdbpool/rdbpoolimg
sudo rbd trash mv rdbpool/rdbpoolimg
sudo ceph config set mon mon_allow_pool_delete true
sudo ceph osd pool rm rdbpool rdbpool --yes-i-really-really-mean-it
sudo ceph config set mon mon_allow_pool_delete false
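A quick way to confirm that everything was removed (approximate):

sudo ceph osd pool ls                      # rdbpool should no longer be listed
sudo ceph auth ls | grep rdbpoolusr        # should print nothing
sudo ceph -s                               # the cluster should report HEALTH_OK again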

Deploy NFS
