1. General Configuration
In this guide, the step-by-step installation of OpenStack Juno is illustrated.
The commands after a `#` must be executed as the root user, while the ones after a `$` can be executed by any user (including root).
Features like IP addresses or passwords are kept general, with environment variables used instead (for example, `$MYSQL_IP` is used in place of `10.10.10.7`).
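For convenience, you can export such variables in your shell before running the commands; a minimal example using the value mentioned above:
$ export MYSQL_IP=10.10.10.7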
NOTE For the rest of this guide, remember to add every service used on each node to the list of services started at boot, using
systemctl enable $SERVICE_NAME
where `$SERVICE_NAME` is the name used to start/stop/restart the service.
To check the setting for a given service:
systemctl list-unit-files | grep $SERVICE_NAME
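For example, to make sure the NTP daemon installed later in this guide starts at boot and to verify its setting:
$ sudo systemctl enable ntpd.service
$ systemctl list-unit-files | grep ntpd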
Following the OpenStack official installation guide, the environment will have the following nodes with the relative configuration:
- node01 - controller node on which Keystone, Nova, and Glance services run. Monitor node for Ceph.
- node02 - network node on which Neutron runs. It also works as gateway for public network access and has 2 volumes necessary to implement the Cluster Storage (Swift).
  - sdb1 100G for Swift
  - sdc1 100G for Swift
- node03 - compute node 1 (Nova-Compute) with a volume used as OSD for Ceph and a volume for Cinder.
  - sdb1 100G for Cinder
  - sdc1 100G for Ceph
- node04 - compute node 2 (Nova-Compute) with 2 volumes necessary to implement the Cluster Storage (Swift) and a volume used as OSD for Ceph.
  - sdb1 100G for Swift
  - sdc1 100G for Swift
  - sdd1 100G for Ceph

Note: in the present guide, the sdb/sdc/sdd volume names are used as an example. Check the names of the corresponding devices in your system.
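To check which block devices are actually available on a node before partitioning, you can list them first (no assumptions are made here about your device names):
$ lsblk
$ sudo fdisk -l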
The architecture to be deployed is sketched in the following figures.
The installation guide is organised as follows:

- Pre-requirements:
  - Step 1: Network interfaces configuration
  - Step 2: Install Network Time Protocol (NTP)
  - Step 3: Install OpenStack packages on all nodes
  - Step 4: Install the MySQL Python library
  - Step 5: Modify the /etc/hosts file
  - Step 6: Install the distributed filesystem (Ceph)
- Basic services installation:
  - Controller and Network node installation:
    - Step 1: MySQL installation/configuration
    - Step 2: Install the message broker service (RabbitMQ)
    - Step 3: Install Identity service (Keystone)
    - Step 4: Install Image service (Glance)
    - Step 5: Install Compute service (Nova)
    - Step 6: Install Networking service (Neutron)
    - Step 7: Install the dashboard (Horizon)
  - Compute node installation:
    - Step 1: Create nova-related groups and users with specific id
    - Step 2: Install Compute packages
    - Step 3: Install Networking packages
    - Step 4: Configuring Nova with Ceph-FS
    - Step 5: Configure Live Migration
  - Controller and Network node installation
- Advanced services installation:
  - Swift
  - Cinder
  - Ceilometer
  - Heat
For the sake of this guide, we tried to reduce the number of network interfaces and IPs required, especially public ones. In general, two NICs are required:
- PUBLIC network (in this guide 10.10.10.0/24)
- PRIVATE network (in this guide 10.10.20.0/24)

In more detail:
- All the hosts need an interface and an IP address on the PRIVATE network.
- Controller and Network nodes need an interface on the PUBLIC network, but only the Controller node needs an IP.

NOTE You also need some extra IPs on the PUBLIC network to use as floating IPs to attach to VMs, in order to grant access from the outside world. This is not strictly required, and the number of public addresses to use as floating IPs depends on your local availability. We suggest having at least one to test this feature.
In CentOS the configuration files for the network interfaces are in
/etc/sysconfig/network-scripts/ifcfg-$INT-NAME
where `$INT-NAME` is the name of the network interface, for example eth0 or eno1.
An example for a static configuration looks like this:
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
NAME=eno1
UUID=dcdf0399-e8f3-4e91-a95b-09f4e6a2bf9e
DEVICE=eno1
ONBOOT=yes
IPADDR=
NETMASK=
GATEWAY=
NM_CONTROLLED=no
DNS1=
Set `NETMASK`, `IPADDR` and `DNS1` according to your environment.
More in detail:
- For the two compute nodes use the configuration above, setting GATEWAY=$CONTROLLER_PRIVATE_IP.
- For the network node use the above configuration for the private interface, again with GATEWAY=$CONTROLLER_PRIVATE_IP. For the public interface, replicate it (for example on eth1 or eno2) with the appropriate values for the public network, but set DEFROUTE=no and comment out the IPADDR= field. In this way the node has an interface on the public network but no public IP: its traffic will be routed through the private address. Set GATEWAY to the address relative to your infrastructure (see the sketch after this list).
- For the controller node configure both the public and the private interface as above. This time both will have an IP and the default route is on the public network, so set DEFROUTE=yes for the public interface and DEFROUTE=no for the private one. Set both GATEWAY fields to the addresses relative to your infrastructure.

This mixed configuration of the GATEWAY= fields is needed for the steps below.
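As an illustration, a possible ifcfg file for the network node's public interface; the interface name, netmask and gateway are placeholders to adapt to your infrastructure (this is a sketch under the assumptions above, not a mandatory configuration):
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=no
NAME=eno2
DEVICE=eno2
ONBOOT=yes
#IPADDR=
NETMASK=255.255.255.0
GATEWAY=
NM_CONTROLLED=no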
IMPORTANT In this guide we will use the firewall to ensure connectivity to the outside world for the network and compute nodes. This is done by enabling masquerading and opening the necessary ports.
To implement this configuration follow these steps:
- Open a graphic console on the controller by using
`ssh -X root@$CONTROLLER_IP`
- Start firewall service (if needed)
`systemctl start firewalld.service`
- Open the graphic interface for the firewall
`firewall-config`
- Enable masqueraded zone in the masquerading menu

- Add some ports to the list in the Ports menu in order to allow access to them

NOTE The list in the image above should cover what is needed for this guide. Remember anyway to check the ports if you modify something or are getting some errors.
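If you prefer the command line (or have no X access), the same configuration can be applied with firewall-cmd; a minimal sketch, where the port list is only an indicative subset of what this guide's services typically use (adapt it to your setup and to the list shown above):
$ sudo firewall-cmd --permanent --add-masquerade
$ sudo firewall-cmd --permanent --add-port=80/tcp --add-port=3306/tcp --add-port=5672/tcp
$ sudo firewall-cmd --permanent --add-port=5000/tcp --add-port=35357/tcp --add-port=9292/tcp
$ sudo firewall-cmd --permanent --add-port=8774/tcp --add-port=9696/tcp --add-port=6789/tcp
$ sudo firewall-cmd --reload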
After the configuration of the interfaces you can restart the whole network by doing
sudo service network restart
or only one interface by doing
ifdown $INT-NAME; ifup $INT-NAME
Note that if you turn off the interface you are connected through, e.g. via ssh, you lose connectivity.
Install ntp (if not already present):
$ sudo yum install -y ntp
Also make sure that ntp has the right server addresses for your environment in the /etc/ntp.conf file. In case of changes, restart the ntp service:
$ systemctl restart ntpd.service
Check with the date command on all nodes that the clocks are synchronized.
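A quick way to verify the synchronization status on each node (assuming the standard ntp tools are installed):
$ date
$ ntpq -p    # the peer marked with * is the currently selected time source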
Before going on, update the system:
$ sudo yum update
Install the epel repository (the command below is for CentOS 7):
$ sudo yum install http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
If the version of CentOS is different, please refer to the EPEL FAQ.
Install the Juno repository:
$ sudo yum install -y yum-plugin-priorities
$ sudo yum install -y http://rdo.fedorapeople.org/openstack-juno/rdo-release-juno.rpm
Also install the openstack-selinux package to manage the security policies in OpenStack:
$ sudo yum install openstack-selinux
NOTE In order to avoid errors during the installation of the testbed, it's recommended to set SELinux from enforcing to disabled in /etc/selinux/config:
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted
At the end you can enable it again with the permissive setting.
Then upgrade the system:
$ sudo yum upgrade
Reboot the system.
On all nodes other than the controller node, install the MariaDB SQL database and the MySQL-python library:
$ yum install mariadb mariadb-server MySQL-python
Modify the /etc/hosts file on all nodes, as done in the sample file hosts.sample.
NOTE To avoid problems in the installation of Ceph that will follow, edit the HOSTNAME field in the /etc/sysconfig/network file using the name of the node in the public network. The change will take effect at the next reboot.
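As a reference, a possible hosts.sample layout consistent with the node names used in this guide (all addresses and the "network" hostname are purely illustrative; use your own):
10.10.20.11  controller
10.10.20.12  network
10.10.20.13  compute01
10.10.20.14  compute02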
In this guide we make use of a distributed filesystem based on Ceph. With this procedure you will deploy a Ceph (version Giant) cluster made of one node running a metadata server and a monitor, which also acts as admin node, plus 2 OSD nodes. The admin node will be "node01" (controller), while the 2 OSDs will be "node02" and "node03" (compute01 and compute02). (NOTE: you need a proper kernel version and the node clocks have to be synchronized; follow the Ceph official documentation for more details.)
The following image shows the cluster architecture that you are going to deploy for the testing environment:
On every cluster node create a "ceph" user and set a password for it:
$ sudo useradd -d /home/ceph -m ceph
$ sudo passwd ceph
To provide full privileges to the user, on every cluster node add the following to /etc/sudoers.d/ceph:
$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
And change permissions in this way:
$ sudo chmod 0440 /etc/sudoers.d/ceph
Configure your admin node with password-less SSH access to each node running Ceph daemons (leave the passphrase empty). On your admin node node01, become the ceph user and generate the ssh key (the default RSA key type matches the sample output below):
$ su - ceph
$ /bin/bash
$ ssh-keygen
You will have output like this:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/ceph/.ssh/id_rsa):
Created directory '/home/ceph/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/ceph/.ssh/id_rsa.
Your public key has been saved in /home/ceph/.ssh/id_rsa.pub.
Copy the key to each cluster node and test the password-less access:
$ ssh-copy-id ceph@compute01
$ ssh-copy-id ceph@compute02
$ ssh ceph@compute01
$ ssh ceph@compute02
Now you need to edit (or create) the file /etc/yum.repos.d/ceph.repo as the ceph user
$ sudo vi /etc/yum.repos.d/ceph.repo
so that it contains:
[ceph-noarch]
name=Ceph noarch packages
baseurl=http://ceph.com/rpm-{ceph-release}/{distro}/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://ceph.com/git/?p=ceph.git;a=blob_plain;f=keys/release.asc
To add the packages to your repository, you have to replace {ceph-release} with giant and {distro} with el7 (if using CentOS 7).
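With those substitutions, the baseurl line would read, for example:
baseurl=http://ceph.com/rpm-giant/el7/noarch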
These instructions and those that follow are taken from the Ceph-Preflight guide.
Update your repository and install ceph-deploy:
$ sudo yum update && sudo yum install ceph-deploy
Also check that openssh-server is installed.
Edit (or create) on the controller node, as the ceph user, the ~/.ssh/config file as follows:
Host node01
Hostname controller
User ceph
Host node02
Hostname compute01
User ceph
Host node03
Hostname compute02
User ceph
and set the permissions with chmod 600 ~/.ssh/config.
Make sure that the names of the nodes in this file are consistent with your environment.
Before moving on with the Ceph installation, you must disable requiretty on all nodes.
To disable requiretty, run
$ sudo visudo
and comment out with # the line
Defaults requiretty
On the admin node, create a working directory in the ceph user's home directory; it will contain all the files generated by the ceph-deploy tool.
Using the ceph user, run the following commands:
$ mkdir ceph-cluster
$ cd ceph-cluster/
From the admin node, in the working directory, run this command to install Ceph on all the cluster nodes:
$ ceph-deploy install controller compute01 compute02
NOTE If you are getting problems with dependencies for some packages during the installation, try to disable the epel repository. Go to /etc/yum.repos.d/ and, in epel.repo and epel-testing.repo, change the lines
enabled=1
to
enabled=0
You may also have to install leveldb manually, for example
yum install leveldb.x86_64 --enablerepo=epel
if the corresponding error message shows up. At the moment Ceph has some trouble handling the epel repository by itself.
Check the installed version on every node:
$ ceph -v
From the admin node, in the working directory, run this command to indicate to ceph-deploy which nodes will be the initial monitors:
$ ceph-deploy new controller
Again from the working directory, run the following command to create the initial monitors:
$ ceph-deploy mon create-initial
If you get a warning message like this: "[ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring on ['node01']" run the following command:
$ ceph-deploy gatherkeys controller
From the working directory run this command to make node01 the admin node (you can add more admin nodes):
$ ceph-deploy admin controller
On each admin node run:
$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
On the admin node, in the working directory, list the available disks of every future OSD node:
$ ceph-deploy disk list compute01
$ ceph-deploy disk list compute02
NOTE As a first step, as also suggested in Ceph-Preflight, it's easiest to use a directory instead of a full volume for the OSD daemons. How to use a full volume is shown further below.
On each OSD node create the directory /var/local/osd#, where # is the number of the OSD.
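For example, matching the osd0/osd1 paths used in the commands below, you could run from the admin node (the remote sudo relies on the passwordless sudo configured earlier for the ceph user):
$ ssh compute01 sudo mkdir -p /var/local/osd0
$ ssh compute02 sudo mkdir -p /var/local/osd1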
To prepare the OSDs use
$ ceph-deploy osd prepare {ceph-node}:/path/to/directory
as, for example
$ ceph-deploy osd prepare compute01:/var/local/osd0 compute02:/var/local/osd1
Finally, to activate the OSDs
$ ceph-deploy osd activate {ceph-node}:/path/to/directory
For example:
$ ceph-deploy osd activate compute01:/var/local/osd0 compute02:/var/local/osd1
If you get authentication problems, first try stopping iptables and then copy two keys from the admin node to the OSD nodes:
$ scp <cluster_name>.client.admin.keyring osd-node-name:/etc/ceph (or your osd-node installation directory)
$ scp /var/lib/ceph/bootstrap-osd/ceph.keyring osd-node-name:/var/lib/ceph/bootstrap-osd (create the bootstrap-osd directory on the osd-node if it is not there)
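For instance, with the default cluster name ("ceph") and this guide's node names, the copy to the first OSD node would look like:
$ scp ceph.client.admin.keyring compute01:/etc/ceph
$ scp /var/lib/ceph/bootstrap-osd/ceph.keyring compute01:/var/lib/ceph/bootstrap-osd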
NOTE You must use the syntax
$ sudo /etc/init.d/ceph command
instead of (for example)
$ sudo restart ceph-osd-all
So, to stop, start and restart the OSD daemons:
$ sudo /etc/init.d/ceph stop osd
$ sudo /etc/init.d/ceph start osd
$ sudo /etc/init.d/ceph restart osd
To use a complete volume instead of a directory, the steps are similar. First delete its partition table (NOTE: you will lose all the data on the disk):
$ ceph-deploy disk zap {ceph-node}:device-name
for example
$ ceph-deploy disk zap compute01:sdc
and to create the OSD
$ ceph-deploy osd create {ceph-node}:device-name
for example
$ ceph-deploy osd create compute02:sdd
With ceph-deploy osd create you can prepare OSDs, deploy them to the OSD node(s) and activate them in one step. This command is a convenience method for executing the prepare and activate commands sequentially.
From the admin node check the cluster status and the OSD tree map:
$ ceph status
$ ceph osd tree
Check the replica size with:
$ ceph osd dump | grep replica
If size is not set to 2 and min_size is not set to 1, run the following commands to set replica 2 for each pool and min_size (the minimum number of active replicas required for r/w operations):
IMPORTANT NOTE In previous versions of Ceph, 3 OSD pools were created by default during the installation. In the latest versions ONLY the rbd pool is created by default. Later in this guide we will also create the data and metadata pools.
$ ceph osd pool set rbd size 2
$ ceph osd pool set rbd min_size 1
Note: here, only 2 OSDs have been configured, on two different nodes, with replica 2. However, replica 3 (with an additional node) is usually preferred and strongly recommended for a production infrastructure. To set replica 3, set size 3 in the previous parameters (instead of 2), and min_size 2 (instead of 1), as shown in the sketch below.
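For reference, the replica-3 variant of the commands above would be (meaningful only once at least 3 OSDs are available):
$ ceph osd pool set rbd size 3
$ ceph osd pool set rbd min_size 2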
At this point check the cluster status with the following command:
$ ceph status
If you get a warning like "health HEALTH_WARN 192 pgs stuck unclean", you have to restart the Ceph OSD daemons, so on every OSD node run:
$ sudo /etc/init.d/ceph restart osd
Wait a bit and then check if the status is "active+clean" with the usual ceph status command.
Create the MDS, from the admin node run:
$ ceph-deploy mds create controller
Now check if the MDS has been deployed correctly:
$ ceph mds dump
Ceph uses storage pools: the default "rbd" pool plus, in our case, the "data" and "metadata" pools created later. You have to indicate the proper number of placement groups to use for every pool. To compute a good number of placement groups for one pool use the following formula (NOTE: #replica is the desired number of replicas for the pool):
(#OSD daemons * 100) / #replica
Round the result up to the next power of 2: in our case it will be (2*100)/2 = 100, rounded to 128. So run the following commands from the admin node to set the right number of placement groups for the rbd pool (NOTE: for each pool you also have to indicate how many of the placement groups to use for placement purposes by setting the pgp_num variable). The creation of new PGs usually takes some time, so wait a bit if you receive some error messages:
$ ceph osd pool set rbd pg_num 128
$ ceph osd pool set rbd pgp_num 128
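If you prefer to compute the value programmatically, a minimal bash sketch of the formula and of the rounding to the next power of 2 (variable names are arbitrary):
$ OSDS=2; REPLICA=2
$ RAW=$(( OSDS * 100 / REPLICA ))
$ PG_NUM=1; while [ $PG_NUM -lt $RAW ]; do PG_NUM=$(( PG_NUM * 2 )); done
$ echo $PG_NUM
128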
Finally check the cluster status:
$ ceph status
If it's "active+clean" you can start using your new Ceph cluster; otherwise it is preferable to debug until you reach the active+clean status.
To try storing one file into the "data" pool (created below), use a command like this (in this case on our admin node):
$ rados put {object-name} {file-path} --pool=data
Run this command to check that the file has been stored into the pool "data":
$ rados ls -p data
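A concrete instance, using a hypothetical object name and a file that exists on every node:
$ rados put test-object /etc/hosts --pool=data
$ rados ls -p data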
Before moving on with the installation of the distributed file system client (ceph-fuse), we need to create the data and metadata pools and the actual file system.
To create the two pools:
ceph osd pool create data 128 128
ceph osd pool create metadata 128 128
where 128 is the number of placement groups, specified directly during the creation. Now set the number of replicas as done above:
$ ceph osd pool set data size 2
$ ceph osd pool set data min_size 1
$ ceph osd pool set metadata size 2
$ ceph osd pool set metadata min_size 1
Check the pools list:
$ ceph osd lspools
0 rbd,1 data,2 metadata,
Using the pools just created, we can now also create the file system using ceph fs new <fs_name> <metadata> <data>:
ceph fs new cephfs metadata data
Check the file system creation by doing
ceph mds stat
e5: 1/1/1 up {0=a=up:active}
Now install the ceph-fuse client:
$ sudo yum install ceph-fuse.x86_64
and then mount the file system, running:
$ ceph-fuse -m {monitor_hostname:6789} {mount_point_path}
Note that the mount_point_path must exist before you can mount the Ceph filesystem.
In our case the mountpoint is the directory /ceph-fs, which we create with
$ sudo mkdir /ceph-fs
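Putting it together for this guide's layout, where the monitor runs on the controller (the hostname and the default monitor port are assumptions to adapt to your environment):
$ sudo ceph-fuse -m controller:6789 /ceph-fs
$ df -h /ceph-fs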