Ceph OSD Encryption

Prepare the encrypted pool on Ceph

For this tutorial we will use a running Ceph cluster like the one shown in the following figure: ceph-cluster-1.png

To create an encrypted pool we will execute the following steps:

  1. add new OSDs to the cluster using the --dmcrypt option;
  2. modify the CRUSH map to create a root bucket containing the new OSDs and a new rule "encrypted_ruleset";
  3. create the pool "encrypted" and associate it with the rule "encrypted_ruleset".

The final setup of the cluster is shown in the following figure: ceph-cluster-2.png

Step 1: add the new encrypted OSDs

From the admin node (node-0), inside the working directory (cd cluster-ceph), run this command:

ceph-deploy osd create --dmcrypt ceph-node1-$GN:vdc ceph-node2-$GN:vdc ceph-node3-$GN:vdc 

Note: by default, ceph-deploy stores the encryption keys on each OSD node in the folder /etc/ceph/dmcrypt-keys.
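
As a quick sanity check, you can verify on one of the OSD nodes that a key file was created for each encrypted OSD (assuming the default key path above):

#!bash
ls -l /etc/ceph/dmcrypt-keys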

You can check the cluster status and the OSD tree:

ceph status

ceph osd tree

Step 2: modify the CRUSH map

Modify the CRUSH map from the admin node (node-0) as root.

Create the root bucket for the encrypted OSDs:

ceph osd crush add-bucket encrypted root

Create a host bucket for each encrypted OSD and move it under the encrypted root:

ceph osd crush add-bucket node-1-enc host
ceph osd crush move node-1-enc root=encrypted

ceph osd crush add-bucket node-2-enc host
ceph osd crush move node-2-enc root=encrypted

ceph osd crush add-bucket node-3-enc host
ceph osd crush move node-3-enc root=encrypted

Move each encrypted OSD into its host bucket:

ceph osd crush create-or-move osd.3 1 host=node-1-enc
ceph osd crush create-or-move osd.4 1 host=node-2-enc
ceph osd crush create-or-move osd.5 1 host=node-3-enc
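
Equivalently, the bucket creation and OSD moves above can be scripted as a single loop (a sketch assuming osd.3, osd.4 and osd.5 belong to nodes 1-3, as in this tutorial):

#!bash
for n in 1 2 3; do
    # create the host bucket and attach it to the encrypted root
    ceph osd crush add-bucket node-${n}-enc host
    ceph osd crush move node-${n}-enc root=encrypted
    # place the matching OSD (osd.3 on node 1, osd.4 on node 2, ...)
    ceph osd crush create-or-move osd.$((n + 2)) 1 host=node-${n}-enc
done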

Check the new tree:

ceph osd tree

List the existing rules:

ceph osd crush rule list

Create a rule that uses the encrypted bucket as the target for the pool (the arguments of create-simple are the rule name, the root bucket and the failure-domain bucket type):

ceph osd crush rule create-simple encrypted_ruleset encrypted host

List the rules again to verify that encrypted_ruleset has been added:

ceph osd crush rule list

ALTERNATIVE: manual CRUSH edit

As an alternative to the previous instructions, you can edit the CRUSH map manually.

Get the CRUSH map and decompile it using the following commands:

#!bash
ceph osd getcrushmap -o crushmap.compiled
crushtool -d crushmap.compiled -o crushmap.decompiled 
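
Before editing, it may be wise to keep a copy of the original compiled map, so that you can roll back with setcrushmap if something goes wrong:

#!bash
cp crushmap.compiled crushmap.compiled.orig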

Edit crushmap.decompiled, adding the following buckets (here the hosts are named vm02-encr, vm03-encr and vm04-encr; adapt the names and weights to your cluster):

host vm02-encr {
        id -5           # do not change unnecessarily
        # weight 0.080
        alg straw
        hash 0  # rjenkins1
        item osd.3 weight 0.040
}
host vm03-encr {
        id -6           # do not change unnecessarily
        # weight 0.080
        alg straw
        hash 0  # rjenkins1
        item osd.4 weight 0.040
}
host vm04-encr {
        id -7           # do not change unnecessarily
        # weight 0.080
        alg straw
        hash 0  # rjenkins1
        item osd.5 weight 0.040
}

root encrypted {
        id -8           # do not change unnecessarily
        # weight 0.120
        alg straw
        hash 0  # rjenkins1
        item vm02-encr weight 0.040
        item vm03-encr weight 0.040
        item vm04-encr weight 0.040
}

Then add the following rule:

rule encrypted_ruleset {
        ruleset 1
        type replicated
        min_size 1
        max_size 10
        step take encrypted
        step chooseleaf firstn 0 type host
        step emit
}

Note: the IDs of the added entities may differ in your map - take care to replace them so that every ID is unique inside the map. In the rule, "step chooseleaf firstn 0 type host" places each replica on a distinct host under the encrypted root, using as many replicas as the pool's size.

Now, you can apply the modified crush map:

#!bash
crushtool -c crushmap.decompiled -o crushmap.compiled
ceph osd setcrushmap -i crushmap.compiled

Note: you can prevent the OSD daemons from updating the CRUSH map at startup (which would otherwise undo the manual layout) by adding the following to ceph.conf:

[osd]
osd crush update on start = false

Check the cluster status:

#!bash
ceph status

ceph osd tree

Step 3: create the pool 'encrypted'

Create the pool "encrypted" with 128 placement groups and assign the rule "encrypted_ruleset" (ruleset ID 1) to it:

#!bash
ceph osd pool create encrypted 128
ceph osd pool set encrypted crush_ruleset 1

Check the results:

#!bash
ceph osd dump
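
To narrow the output to the new pool, you can filter the dump or query the pool directly (on recent Ceph releases the key is called crush_rule instead of crush_ruleset):

#!bash
ceph osd dump | grep encrypted
ceph osd pool get encrypted crush_ruleset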

Create a dedicated cinder backend

Once the encrypted pool has been created, we can configure Cinder to use it. Edit the cinder-volume configuration file (/etc/cinder/cinder.conf), adding the new backend:

[rbdencrypted]
volume_backend_name=RBD-ENCR
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=encrypted
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
glance_api_version=2
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
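
Remember to also list the new section in the enabled_backends option of the [DEFAULT] section; the name rbddriver below is only a placeholder for whatever backend(s) you already have configured:

[DEFAULT]
enabled_backends=rbddriver,rbdencrypted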

Update the capabilities of the Ceph user 'cinder' so that it can also read and write data in the new pool 'encrypted':

#!bash
ceph auth caps client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images, allow rwx pool=encrypted'
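
You can verify the updated capabilities with:

#!bash
ceph auth get client.cinder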

Restart the cinder-volume service:

#!bash
service cinder-volume restart

Create the volume-type "encrypted":

cinder type-create encrypted
cinder type-key encrypted set volume_backend_name=RBD-ENCR
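
You can verify that the volume type points at the new backend with:

cinder extra-specs-list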

Create a volume on the encrypted backend:

cinder create --volume_type encrypted --display_name vol-test-encrypted 1

Check the status and the details of the volume:

cinder show vol-test-encrypted
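
Finally, you can confirm that the volume image was actually created in the encrypted pool; for example, from a node with admin access to the Ceph cluster (the image should appear as volume-<UUID>):

rbd -p encrypted ls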