4. Block Storage Service
- Install the appropriate packages for the Block Storage service, called Cinder:

# yum install openstack-cinder python-cinderclient python-oslo-db
- Configure Block Storage to use your database. In the /etc/cinder/cinder.conf file, set the connection option in the [database] section and replace $CINDER_DBPASS with the password for the Block Storage database that you create below:

[database]
...
connection = mysql://cinder:$CINDER_DBPASS@$MYSQL_IP/cinder

Note: In some distributions, the /etc/cinder/cinder.conf file does not include the [database] section header. You must add this section header to the end of the file before you proceed.

Use the password that you set to log in as root and create the cinder database:

# mysql -u root -p
mysql> CREATE DATABASE cinder;
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '$CINDER_DBPASS';
mysql> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '$CINDER_DBPASS';
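To make sure the grants work before moving on, you can try logging in as the cinder user (assuming MySQL accepts connections on $MYSQL_IP):

$ mysql -u cinder -p -h $MYSQL_IP -e "SHOW DATABASES;"

The output should include the cinder database.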
- Create the database tables for the Block Storage service:

# su -s /bin/sh -c "cinder-manage db sync" cinder
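To confirm that the sync populated the schema, a quick optional check is:

$ mysql -u cinder -p cinder -e "SHOW TABLES;"

You should see the Cinder tables (volumes, snapshots, and so on).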
- Create a cinder user. The Block Storage service uses this user to authenticate with the Identity service. Use the service tenant and give the user the admin role (replace $CINDER_PASS with the password you choose for the user and $CINDER_EMAIL with the email address you want to associate to the Cinder service/user):

$ keystone user-create --name=cinder --pass=$CINDER_PASS --email=$CINDER_EMAIL
$ keystone user-role-add --user=cinder --tenant=service --role=admin
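You can verify the assignment with the keystone CLI, for example:

$ keystone user-role-list --user=cinder --tenant=service

The output should list the admin role for the cinder user.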
- Edit the /etc/cinder/cinder.conf configuration file and add this section for Keystone credentials:

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = $CINDER_PASS
Configure Block Storage to use the RabbitMQ message broker. In the [DEFAULT] section of the /etc/cinder/cinder.conf file, set these configuration keys and replace $RABBIT_PASS with the password you chose for RabbitMQ:

[DEFAULT]
...
rpc_backend = cinder.openstack.common.rpc.impl_kombu
rabbit_host = controller
rabbit_port = 5672
rabbit_userid = guest
rabbit_password = $RABBIT_PASS
- Register the Block Storage service with the Identity service so that other OpenStack services can locate it:

$ keystone service-create --name=cinder --type=volume --description="OpenStack Block Storage"
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ volume / {print $2}') --publicurl=http://controller:8776/v1/%\(tenant_id\)s --internalurl=http://controller:8776/v1/%\(tenant_id\)s --adminurl=http://controller:8776/v1/%\(tenant_id\)s

Register a service and endpoint for version 2 of the Block Storage service API:

$ keystone service-create --name=cinderv2 --type=volumev2 --description="OpenStack Block Storage v2"
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ volumev2 / {print $2}') --publicurl=http://controller:8776/v2/%\(tenant_id\)s --internalurl=http://controller:8776/v2/%\(tenant_id\)s --adminurl=http://controller:8776/v2/%\(tenant_id\)s
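To double-check the registration, list the services and endpoints:

$ keystone service-list
$ keystone endpoint-list

Both the volume and volumev2 services should appear, each with its public, internal, and admin URLs.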
- Start the Block Storage services with the new settings and enable them at boot time:

# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
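You can confirm that both services came up cleanly, for example:

# systemctl status openstack-cinder-api.service openstack-cinder-scheduler.service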
For the purpose of this guide you can install the cinder-volume service on one of the compute nodes of the testbed.

This section describes how to install and configure storage nodes for the Block Storage service. For simplicity, this configuration references one storage node with an empty local block storage device (for example, /dev/sdb) that contains a suitable partition table with one partition, /dev/sdb1, occupying the entire device. The service provisions logical volumes on this device using the LVM driver and provides them to instances via the iSCSI transport. You can follow these instructions with minor modifications to horizontally scale your environment with additional storage nodes.
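If the device does not have such a partition table yet, a minimal sketch of creating it with parted (assuming /dev/sdb is empty and its contents can be destroyed) is:

# parted /dev/sdb mklabel msdos
# parted -a optimal /dev/sdb mkpart primary 0% 100%

Adjust the device name to your setup before running these commands.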
- Install the LVM packages:

# yum install lvm2
- Enable the LVM metadata service at boot time and start it:

# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
- Create the LVM physical volume:

# pvcreate /dev/sdb1
Physical volume "/dev/sdb1" successfully created

If necessary, change the device name according to your setup.
- Create the LVM volume group cinder-volumes:

# vgcreate cinder-volumes /dev/sdb1
Volume group "cinder-volumes" successfully created
IMPORTANT: Unlike other operating systems, CentOS 7 uses LVM for the logical volumes that hold root (aka "/", called /dev/mapper/centos-root), home (/dev/mapper/centos-home) and swap (/dev/mapper/centos-swap). You can see this structure with lsblk:

# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0 74.5G  0 disk
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0   74G  0 part
  ├─centos-root 253:0    0 44.7G  0 lvm  /
  ├─centos-swap 253:1    0  7.5G  0 lvm  [SWAP]
  └─centos-home 253:2    0 21.8G  0 lvm  /home

So it is better not to change the existing LVM configuration. If you want to restrict the devices scanned by LVM, edit the /etc/lvm/lvm.conf file and look for the filter setting in the devices section. Set it to accept the device on which the CentOS logical volumes reside (for example sda, as shown above) together with the device used by cinder (suppose sdb), and to reject all other devices:

devices {
...
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
...
}

Again, change the device names according to your setup.
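After editing the filter, you can check that LVM still sees both the operating system volume group (centos in the example above) and the new one for cinder:

# pvs
# vgs centos cinder-volumes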
- After you have set up the prerequisites, install the appropriate packages for the Block Storage service (on node02 in the present guide):

# yum install openstack-cinder targetcli python-oslo-db MySQL-python
- Edit the /etc/cinder/cinder.conf configuration file and add this section for Keystone credentials:

[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = cinder
admin_password = $CINDER_PASS
Configure Block Storage to use the RabbitMQ message broker. In the [DEFAULT] section of the /etc/cinder/cinder.conf file, set these configuration keys and replace $RABBIT_PASS with the password you chose for RabbitMQ:

[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
Configure Block Storage to use your MySQL database. Edit the /etc/cinder/cinder.conf file and add the following key to the [database] section. Replace $CINDER_DBPASS with the password you chose for the Block Storage database:

[database]
...
connection = mysql://cinder:$CINDER_DBPASS@$MYSQL_IP/cinder

Note: In some distributions, the /etc/cinder/cinder.conf file does not include the [database] section header. You must add this section header to the end of the file before you proceed.

Configure Block Storage to use the Image Service. Block Storage needs access to images to create bootable volumes. Edit the /etc/cinder/cinder.conf file and update the glance_host option in the [DEFAULT] section:

[DEFAULT]
...
glance_host = controller
In the [DEFAULT] section, configure the my_ip option with the IP address of the management interface of the storage node:

[DEFAULT]
...
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
In the [DEFAULT] section, configure Block Storage to use the lioadm iSCSI service:

[DEFAULT]
...
iscsi_helper = lioadm
(Optional) To assist with troubleshooting, enable verbose logging in the [DEFAULT] section:

[DEFAULT]
...
verbose = True
- Enable the services at boot time and start them:

# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
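To check that cinder-volume started without errors, you can inspect the unit status and the service log (typically /var/log/cinder/volume.log on CentOS 7):

# systemctl status openstack-cinder-volume.service target.service
# tail /var/log/cinder/volume.log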
List the service components to verify the successful launch of each process:

$ cinder service-list
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
| cinder-scheduler | controller | nova | enabled | up | 2014-10-18T01:30:54.000000 | None |
| cinder-volume | block1 | nova | enabled | up | 2014-10-18T01:30:57.000000 | None |
+------------------+------------+------+---------+-------+----------------------------+-----------------+
Create a 1GB volume called demo-volume1:
$ cinder create --display-name demo-volume1 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-10-14T23:11:50.870239 |
| display_description | None |
| display_name | demo-volume1 |
| encrypted | False |
| id | 158bea89-07db-4ac2-8115-66c0d6a4bb48 |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
Verify that the volume was created:
$ cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 158bea89-07db-4ac2-8115-66c0d6a4bb48 | available | demo-volume1 | 1 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
NOTE: Using admin credentials, the volume will be visible in the dashboard to the admin user only.
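Once the test succeeds you can remove the test volume; cinder delete accepts the volume name or ID:

$ cinder delete demo-volume1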
Preparation: Log onto the admin node of the Ceph cluster and perform the following steps.

Create the pool for the volumes:
# ceph osd pool create volumes 128
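You can verify that the pool was created with:

# ceph osd lspools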
The nodes running cinder-volume, nova-compute and cinder-backup act as Ceph clients. Each requires the ceph.conf file:
# ssh {your-openstack-server} sudo tee /etc/ceph/ceph.conf </etc/ceph/ceph.conf
On the nova-compute, cinder-backup and cinder-volume nodes, install both the Python bindings and the client command line tools:
# sudo yum install ceph-common
# sudo yum install ceph
If you have cephx authentication enabled, create a new user for cinder. Execute the following:
# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rx pool=images'
Add the keyring for client.cinder to the cinder-volume nodes and change its ownership:
# ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
Nodes running nova-compute need the keyring file for the nova-compute process. They also need to store the secret key of the client.cinder user in libvirt. The libvirt process needs it to access the cluster while attaching a block device from Cinder.
Create a temporary copy of the secret key on the nodes running nova-compute:
# ceph auth get-key client.cinder | tee client.cinder.key
Then, on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key:
# uuidgen
457eb676-33da-42ec-9a8c-9293d545c337
# cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
<uuid>457eb676-33da-42ec-9a8c-9293d545c337</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
# sudo virsh secret-define --file secret.xml
Secret 457eb676-33da-42ec-9a8c-9293d545c337 created
# sudo virsh secret-set-value --secret 457eb676-33da-42ec-9a8c-9293d545c337 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
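To confirm that the secret was stored, you can list the defined secrets and read the value back:

# virsh secret-list
# virsh secret-get-value 457eb676-33da-42ec-9a8c-9293d545c337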
On every compute node, edit /etc/nova/nova.conf and add the following lines to the [DEFAULT] section:
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
Edit /etc/cinder/cinder.conf, enabling the new backend rbddriver in the [DEFAULT] section:
enabled_backends=rbddriver
[rbddriver]
volume_backend_name=RBD
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_pool=volumes
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_max_clone_depth=5
glance_api_version=2
rbd_user=cinder
rbd_secret_uuid=457eb676-33da-42ec-9a8c-9293d545c337
Note: the rbd_secret_uuid has to be the same as the uuid set in the nova.conf file.
Restart the service:
# service openstack-cinder-volume restart
Show the block storage services, executing the following command on the controller node:
# cinder service-list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated_at |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-scheduler | node01 | nova | enabled | up | 2014-06-22T01:13:23.000000 |
| cinder-volume | node02@rbddriver | nova | enabled | up | 2014-06-22T01:13:19.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+
Create the volume:
# cinder create --display-name test 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-11-19T16:04:28.775357 |
| display_description | None |
| display_name | test |
| encrypted | False |
| id | 20b3ce26-0e95-4b71-9313-e0b36d8f9173 |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| 20b3ce26-0e95-4b71-9313-e0b36d8f9173 | available | test | 1 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Check the logs in /var/log/cinder/:
2014-11-19 16:04:29.214 30736 INFO cinder.volume.flows.manager.create_volume [req-01d4571b-2f39-48a0-b872-79ed53f625dd 35b3434407544ddb886ea868921081f3 ca4de7364b18433e842b841f2af03521 - - -] Volume 20b3ce26-0e95-4b71-9313-e0b36d8f9173: being created using CreateVolumeFromSpecTask._create_raw_volume with specification: {'status': u'creating', 'volume_size': 1, 'volume_name': u'volume-20b3ce26-0e95-4b71-9313-e0b36d8f9173'}
2014-11-19 16:04:29.448 30736 INFO cinder.volume.flows.manager.create_volume [req-01d4571b-2f39-48a0-b872-79ed53f625dd 35b3434407544ddb886ea868921081f3 ca4de7364b18433e842b841f2af03521 - - -] Volume volume-20b3ce26-0e95-4b71-9313-e0b36d8f9173 (20b3ce26-0e95-4b71-9313-e0b36d8f9173): created successfully
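As a cross-check on the Ceph side, the new volume should show up as an RBD image named volume-<ID> in the volumes pool (run this where the client.cinder keyring is available):

# rbd ls volumes --id cinder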
Preparation: Install the required LVM packages, if they are not already installed:
# sudo yum install lvm2
Create the LVM physical volume and volume group. This guide assumes an additional disk /dev/sdd that is used for this purpose:
# pvcreate /dev/sdd
# vgcreate cinder-volumes /dev/sdd
Add a filter entry to the devices section in the /etc/lvm/lvm.conf file to keep LVM from scanning devices used by virtual machines:
devices {
...
filter = [ "a/sda1/", "a/sdd/", "r/.*/"]
...
}
Each item in the filter array starts with either an a for accept, or an r for reject. Entries for the physical volumes that are required on the Block Storage host begin with a. The array must end with "r/.*/" to reject any device not listed. In this example, /dev/sda1 is the device where the volumes for the operating system of the node reside, while /dev/sdd is the device reserved for cinder-volumes.
In the same file, also enable lvmetad by setting:
use_lvmetad = 1
Edit /etc/cinder/cinder.conf, enabling the backend lvmdriver in the [DEFAULT] section:
enabled_backends=lvmdriver
my_ip=$MANAGEMENT_IP_ADDRESS
iscsi_helper=tgtadm
[lvmdriver]
volume_group=cinder-volumes
volume_driver=cinder.volume.drivers.lvm.LVMISCSIDriver
volume_backend_name=LVM_iSCSI
Edit /etc/tgt/targets.conf and add, if not already present:
include /etc/cinder/volumes/*
Restart the Block Storage services with the new settings:
# service openstack-cinder-volume restart
# service tgtd restart
# service lvm2-lvmetad restart
# service lvm2-monitor restart
It's not a bad idea to also restart the cinder services on the controller node:
# service openstack-cinder-api restart
# service openstack-cinder-scheduler restart
Show the block storage services, executing the following command on the controller node:
# cinder service-list
+------------------+--------------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated_at |
+------------------+--------------------+------+---------+-------+----------------------------+
| cinder-scheduler | node01 | nova | enabled | up | 2014-06-22T01:13:23.000000 |
| cinder-volume | node0X@lvmdriver | nova | enabled | up | 2014-06-22T01:13:19.000000 |
+------------------+--------------------+------+---------+-------+----------------------------+
Create the volume:
# cinder create --display-name test 1
+---------------------+--------------------------------------+
| Property | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| created_at | 2014-06-22T01:14:02.705154 |
| display_description | None |
| display_name | test |
| encrypted | False |
| id | ad2f9004-3939-4b1c-a234-8ab26b8fe961 |
| metadata | {} |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| volume_type | None |
+---------------------+--------------------------------------+
# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ad2f9004-3939-4b1c-a234-8ab26b8fe961 | available | test | 1 | None | false | |
| cfe55712-5933-42fe-b9a2-aacaa8620cd6 | creating | test | 1 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
Check the logs in /var/log/cinder/:
2014-06-22 01:14:03.336 23430 INFO cinder.volume.flows.manager.create_volume [req-c34ca714-e272-4fce-adc8-bc22dc85e74b 0ab4996b97ce4869896023f526ac0e30 5d2a076b4cbd463a95408a461772612e - - -] Volume ad2f9004-3939-4b1c-a234-8ab26b8fe961: being created using CreateVolumeFromSpecTask._create_raw_volume with specification: {'status': u'creating', 'volume_size': 1, 'volume_name': u'volume-ad2f9004-3939-4b1c-a234-8ab26b8fe961'}
2014-06-22 01:14:04.126 23430 INFO cinder.volume.flows.manager.create_volume [req-c34ca714-e272-4fce-adc8-bc22dc85e74b 0ab4996b97ce4869896023f526ac0e30 5d2a076b4cbd463a95408a461772612e - - -] Volume volume-ad2f9004-3939-4b1c-a234-8ab26b8fe961 (ad2f9004-3939-4b1c-a234-8ab26b8fe961): created successfully
Check the volume status:
# cinder list
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ID | Status | Display Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
| ad2f9004-3939-4b1c-a234-8ab26b8fe961 | available | test | 1 | None | false | |
| cfe55712-5933-42fe-b9a2-aacaa8620cd6 | creating | test | 1 | None | false | |
+--------------------------------------+-----------+--------------+------+-------------+----------+-------------+
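Note that in the listings above a second test volume is stuck in the creating state, presumably left over from an earlier attempt. If it never becomes available, a plain cinder delete may not remove it, and you can fall back to the admin-only force delete:

# cinder force-delete cfe55712-5933-42fe-b9a2-aacaa8620cd6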