Ceph and KVM
Ceph
The Ceph storage platform normally requires a number of nodes and disks, but for testing it is possible to set up a small training cluster using KVM on a single machine.
KVM host machine: 16 GB RAM, 8 CPU cores.
KVM cluster
https://computingforgeeks.com/install-and-configure-ceph-storage-cluster-on-centos-linux/
Every KVM guest node is installed as a minimal CentOS 8 with a 20 GB local disk. Follow the design described there, but scale the nodes down to the sizes below.
Node | RAM | CPU |
---|---|---|
cephadmin | 2GB | 1 |
cephmon01 | 2GB | 1 |
cephmon02 | 2GB | 1 |
cephmon03 | 2GB | 1 |
cephosd01 | 4GB | 2 |
cephosd02 | 4GB | 2 |
cephosd03 | 4GB | 2
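If you prefer the command line to virt-manager, a guest of this size can be created roughly as follows. This is only a sketch: the guest name, image path, ISO location and network name are assumptions for my environment.
virt-install --name cephmon01 --memory 2048 --vcpus 1 \
  --disk path=/var/lib/libvirt/images/cephmon01.qcow2,size=20,format=qcow2 \
  --os-variant centos8 --network network=default \
  --cdrom /var/lib/libvirt/images/CentOS-8-minimal.iso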
For every OSD node, create and attach an additional disk. The disk can be created either manually or from the KVM GUI (virt-manager).
qemu-img create -f qcow2 DISKOSD01.qcow2 20G
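To attach the disk from the command line instead of the GUI (a sketch; the guest name cephosd01 and the image path are assumptions, adjust them to your setup):
virsh attach-disk cephosd01 /var/lib/libvirt/images/DISKOSD01.qcow2 vdb --driver qemu --subdriver qcow2 --persistent
The target name vdb is what later appears inside the guest as /dev/vdb.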
Installation
Follow the instructions in the article.
https://computingforgeeks.com/install-and-configure-ceph-storage-cluster-on-centos-linux/
Steps in summary.
- Run the installation from the cephadmin node.
- Set up a passwordless root SSH connection from the cephadmin node to all other nodes (a sketch follows this list).
- Share the same copy of /etc/hosts on all nodes.
- Install epel, ansible and git.
- Test passwordless connection: ansible all -m ping
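A minimal sketch of the passwordless setup, assuming root access on every guest and the node names from the table above:
# generate an SSH key on cephadmin (no passphrase) and copy it to every node
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
for h in cephmon01 cephmon02 cephmon03 cephosd01 cephosd02 cephosd03; do ssh-copy-id root@$h; done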
git clone https://github.com/ceph/ceph-ansible.git
cd ceph-ansible
pip3 install -r requirements.txt
Important: use the latest stable version, do not use the current master branch.
git checkout stable-5.0
Discover the network interface name and the storage device name on the OSD nodes. In my environment:
- enp1s0
- /dev/vdb
- Copy and paste group_vars/all.yml from the web page. Replace the interface name with the appropriate network name (enp1s0); a trimmed sketch follows.
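A trimmed sketch of the relevant group_vars/all.yml settings, assuming the Octopus release that stable-5.0 targets and the 192.168.0.0/24 network visible later on this page; the dashboard credentials match the ones quoted in the Dashboard section below:
ceph_origin: repository
ceph_repository: community
ceph_stable_release: octopus
monitor_interface: enp1s0
radosgw_interface: enp1s0
public_network: 192.168.0.0/24
dashboard_enabled: True
dashboard_admin_user: admin
dashboard_admin_password: St0ngAdminp@ass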
vi group_vars/osds.yml
copy_admin_key: true
devices:
- /dev/vdb
- Copy and paste the content of the hosts inventory file from the web page; a sketch adapted to the node names above follows.
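A sketch of the hosts inventory, assuming the mon nodes also carry the mgr, mds, rgw and Grafana roles, which matches the ceph -s output in the Verify section:
[mons]
cephmon01
cephmon02
cephmon03

[mgrs]
cephmon01
cephmon02
cephmon03

[osds]
cephosd01
cephosd02
cephosd03

[mdss]
cephmon01
cephmon02
cephmon03

[rgws]
cephmon01
cephmon02
cephmon03

[grafana-server]
cephmon03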
cp site.yml.sample site.yml
Run the Ansible playbook. It takes 0.5 to 1 hour to complete in a tiny environment.
ansible-playbook -i hosts site.yml
If something fails, just rerun the playbook (see the note below on rerunning only part of it).
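If only one group of hosts failed, the rerun can be narrowed with Ansible's standard --limit option (osds here is just an example group from the inventory); a full rerun remains the safe default:
ansible-playbook -i hosts site.yml --limit osds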
Verify
When completed, log on to any of the Ceph nodes and run the command below. Check that all 3 OSDs are up.
ceph -s
  cluster:
    id:     41c5e1a7-b1ec-45fd-868c-04a6d95df58d
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum cephmon03,cephmon02,cephmon01 (age 86m)
    mgr: cephmon03(active, since 43m), standbys: cephmon01, cephmon02
    mds: cephfs:1 {0=cephmon02=up:active} 2 up:standby
    osd: 3 osds: 3 up (since 47m), 3 in (since 3h)
    rgw: 3 daemons active (cephmon01.rgw0, cephmon02.rgw0, cephmon03.rgw0)

  task status:

  data:
    pools:   7 pools, 169 pgs
    objects: 215 objects, 12 KiB
    usage:   3.1 GiB used, 87 GiB / 90 GiB avail
    pgs:     169 active+clean
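Two more commands worth running at this point: ceph osd tree lists every OSD together with its host and up/down status, and ceph df shows raw and per-pool usage.
ceph osd tree
ceph df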
Ceph Dashboard and Grafana
The dashboard is served over plain HTTP on a non-secure port.
URL: http://cephmon03.sb.com:8443/
User/password: admin / St0ngAdminp@ass
Grafana:
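If you lose track of which mgr node currently serves the dashboard, the active endpoint can be printed from any cluster node (this lists the dashboard URL, not the Grafana one):
ceph mgr services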
Client
https://www.server-world.info/en/note?os=CentOS_8&p=ceph15&f=4
Client machine
Install packages.
CentOS 8
dnf install centos-release-ceph-octopus
dnf install ceph-common
CentOS 7
yum install centos-release-ceph-nautilus.noarch
yum install ceph-fuse
yum install ceph-common
Make sure that the mon node (here cephmon01.sb.com) is listening on the monitor port 6789.
nc -zv cephmon01.sb.com 6789
Transfer the configuration files from the mon machine.
cd /etc/ceph
scp root@cephmon01:/etc/ceph/ceph.conf .
scp root@cephmon01:/etc/ceph/ceph.client.admin.keyring .
Mount Ceph
ceph-authtool -p /etc/ceph/ceph.client.admin.keyring > admin.key
chmod 600 admin.key
mkdir /mnt/ceph
mount -t ceph cephmon01.sb.com:6789:/ /mnt/ceph -o name=admin,secretfile=/etc/ceph/admin.key
df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs              2,8G     0  2,8G   0% /dev
tmpfs                 2,8G     0  2,8G   0% /dev/shm
tmpfs                 2,8G  8,6M  2,8G   1% /run
tmpfs                 2,8G     0  2,8G   0% /sys/fs/cgroup
/dev/mapper/cl-root    22G  2,2G   20G  10% /
/dev/vda1             976M  189M  721M  21% /boot
tmpfs                 161M     0  161M   0% /run/user/0
192.168.0.227:6789:/   28G     0   28G   0% /mnt/ceph
Add the corresponding entry to /etc/fstab; the _netdev option makes sure the mount waits for the network during boot:
cephmon01.sb.com:6789:/ /mnt/ceph ceph name=admin,secretfile=/etc/ceph/admin.key,_netdev 0 0
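To confirm the fstab entry works without rebooting, unmount the share and let mount pick it up from fstab again:
umount /mnt/ceph
mount -a
df -h /mnt/ceph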