
Ceph cluster installation (infernalis)

Requirements: 4 nodes:

  • node-0: deploy, mon, mds, rgw
  • node-[1-3]: osd (device /dev/vdb)

On each node

  1. create the user ceph-deploy

  2. edit the file /etc/hosts so that every node can resolve the others by name (see the full example after this list):

    x.x.x.x node-1

    y.y.y.y node-2

    ....

  3. configure passwordless login for the user ceph-deploy
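A hypothetical /etc/hosts for this setup (the 192.168.0.x addresses below are placeholders; use the real IPs of your nodes):

192.168.0.10 node-0
192.168.0.11 node-1
192.168.0.12 node-2
192.168.0.13 node-3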

On every cluster node create the ceph-deploy user and set a password for it:

sudo useradd -d /home/ceph-deploy -m ceph-deploy

sudo passwd ceph-deploy

To give the user full privileges, create the file /etc/sudoers.d/ceph on every cluster node:

echo "ceph-deploy ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph

Then restrict its permissions:

sudo chmod 0440 /etc/sudoers.d/ceph

Configure your admin node with password-less SSH access to each node running Ceph daemons (leave the passphrase empty). On your admin node node-0, become the ceph-deploy user and generate the SSH key:

# su - ceph-deploy
$ /bin/bash
$ ssh-keygen -t rsa

You will have output like this:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph-deploy/.ssh/id_rsa):
Created directory '/home/ceph-deploy/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph-deploy/.ssh/id_rsa.
Your public key has been saved in /home/ceph-deploy/.ssh/id_rsa.pub.

Copy the key to each cluster node and test the password-less access:

ssh-copy-id ceph-deploy@node-1

ssh-copy-id ceph-deploy@node-2

ssh-copy-id ceph-deploy@node-3
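Then verify that the password-less login works, e.g.:

ssh ceph-deploy@node-1 hostname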

On the admin node:

wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
echo deb http://download.ceph.com/debian-infernalis/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
echo deb http://download.ceph.com/debian-giant/ $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/ceph.list
sudo apt-get -qqy update && sudo apt-get install -qqy ntp ceph-deploy
mkdir cluster-ceph
cd cluster-ceph
ceph-deploy new <mon>
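In our setup the monitor host is node-0, so this becomes:

ceph-deploy new node-0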

Add the following line to the [global] section of ceph.conf:

osd pool default size = 3
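For reference, after this change the [global] section of the generated ceph.conf should look roughly like this (the fsid and monitor address below are placeholders produced by ceph-deploy new):

[global]
fsid = <generated-uuid>
mon_initial_members = node-0
mon_host = x.x.x.x
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 3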

Install ceph:

for i in {0..3}; do ceph-deploy install --release infernalis node-$i; done

Add the initial monitor(s) and gather the keys:

ceph-deploy mon create-initial

After this step the working directory contains the generated configuration and the gathered keyrings:

ceph-deploy@node-0:~/cluster-ceph$ ll
total 188
drwxrwxr-x 2 ceph-deploy ceph-deploy   4096 Dec  7 23:23 ./
drwxr-xr-x 5 ceph-deploy ceph-deploy   4096 Dec  7 22:41 ../
-rw------- 1 ceph-deploy ceph-deploy     71 Dec  7 23:23 ceph.bootstrap-mds.keyring
-rw------- 1 ceph-deploy ceph-deploy     71 Dec  7 23:23 ceph.bootstrap-osd.keyring
-rw------- 1 ceph-deploy ceph-deploy     71 Dec  7 23:23 ceph.bootstrap-rgw.keyring
-rw------- 1 ceph-deploy ceph-deploy     63 Dec  7 23:21 ceph.client.admin.keyring
-rw-rw-r-- 1 ceph-deploy ceph-deploy    260 Dec  7 22:29 ceph.conf
-rw-rw-r-- 1 ceph-deploy ceph-deploy 151132 Dec  7 23:23 ceph.log
-rw------- 1 ceph-deploy ceph-deploy     73 Dec  7 22:28 ceph.mon.keyring
-rw-r--r-- 1 root        root          1645 Dec  7 22:30 release.asc
Now list the available disks on the OSD nodes, create the OSDs on /dev/vdb, and push the configuration and admin keyring to every node:

for i in {1..3}; do ceph-deploy disk list node-$i; done
for i in {1..3}; do ceph-deploy osd create node-$i:vdb; done
for i in {0..3}; do ceph-deploy admin node-$i; done

Ensure that you have the correct permissions for the ceph.client.admin.keyring.

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
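At this point you can verify the cluster state from the admin node:

ceph -s

Once all three OSDs are up and in, the cluster should report HEALTH_OK.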

Add the RADOS GW:

ceph-deploy rgw create node-0

Add the metadata server:

ceph-deploy mds create node-0
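You can check that the metadata server is up with:

ceph mds stat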

Test it!

Try to store a file in the "data" pool (create the pool first if it does not exist), using a command like this:

rados put {object-name} {file-path} --pool=data
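For example, assuming the pool does not exist yet ("test-object" and the placement-group count of 64 are arbitrary values for a small test cluster):

ceph osd pool create data 64
echo "hello ceph" > testfile.txt
rados put test-object testfile.txt --pool=data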

Run this command to check that the file has been stored in the pool "data":

rados ls -p data

You can identify the object location with:

ceph osd map {pool-name} {object-name}
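With the example names used above:

ceph osd map data test-object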

To mount the Ceph filesystem with FUSE, first install ceph-fuse:

sudo apt-get install ceph-fuse

and then mount it, running:

ceph-fuse -m {monitor_hostname}:6789 {mount_point_path}

Note that the mount_point_path must exist before you can mount the Ceph filesystem. In our case the mountpoint is the directory /ceph-fs, which we create with:

sudo mkdir /ceph-fs
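Then, with the monitor running on node-0 (this assumes a CephFS filesystem is available on the cluster):

sudo ceph-fuse -m node-0:6789 /ceph-fs
df -h /ceph-fs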
