2. Controller and Network Node Installation
Controller node installation
This section describes the installation of all the services for the controller node.
Step 1: MariaDB installation/configuration
It is possible to install MariaDB (MySQL) on the controller node (as done in this guide) or on a separate node, depending on the desired architecture. In the following, replace $MYSQL_IP with the IP address of the node hosting the database.
- To install mariadb, mariadb-server and MySQL-python:
# sudo yum install mariadb mariadb-server MySQL-python
Note: When you install the server package, you are prompted for the root password for the database (which can, and should, be different from the password of the root system user). Choose a strong password and remember it.
If you are not prompted for the root password during the installation, use
# sudo systemctl start mariadb.service
to start the service and then
# mysql_secure_installation
to set the password. The first time you use this command there obviously won't be any password set yet, so press Enter at the first question.
The next question is Set root password?: reply yes and choose the root password for the database.
Reply "yes" to all the other questions.
- Edit the /etc/my.cnf file. Under the [mysqld] section, set the bind-address key to the management IP address of the controller node to enable access by other nodes via the management network:
[mysqld]
...
bind-address = $MYSQL_IP
Under the same [mysqld] section, set the following keys to enable InnoDB, UTF-8 character set, and UTF-8 collation by default:
[mysqld]
...
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
- Restart the MariaDB service to apply the changes. Remember that on CentOS 7 you have to use this form of command to start/stop/restart the service:
# sudo systemctl restart mariadb.service
Enable the service to start at boot:
# sudo systemctl enable mariadb.service
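As a quick check (a suggestion, not part of the original guide), you can verify that MariaDB is running and reachable on the address you configured; replace $MYSQL_IP as above:
# systemctl status mariadb.service
# mysql -h $MYSQL_IP -u root -p -e "SELECT VERSION();"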
Step 2: Install the message broker service (RabbitMQ)
- Install RabbitMQ
To install the message broker RabbitMQ:
# sudo yum install -y rabbitmq-server
- Enable the service to start at boot and start it:
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
- Change the password for the user guest
# rabbitmqctl change_password guest $RABBIT_PASS
where $RABBIT_PASS is a suitable password for the service.
NOTE For RabbitMQ version 3.3.0 or newer, you must enable remote access for the guest account.
Check the installed RabbitMQ version with
rpm -qa | grep rabbit
If necessary, edit /etc/rabbitmq/rabbitmq.config (you might also have to create the file), adding
[{rabbit, [{loopback_users, []}]}].
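If you edit rabbitmq.config, restart the broker so the change takes effect; you can then confirm that the service is running and that the guest user exists (a minimal check using standard rabbitmqctl commands):
# systemctl restart rabbitmq-server.service
# rabbitmqctl status
# rabbitmqctl list_users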
Step 3: Install Identity service (Keystone)
- Install the OpenStack Identity service on the controller node, together with python-keystoneclient (which is a dependency):
# sudo yum install -y openstack-keystone.noarch
# sudo yum install -y python-keystone.noarch
- Use the password that you set previously to log in to MariaDB as root. Create a database and a user, both called keystone (replace $KEYSTONE_DBPASS with a strong password you choose for the keystone user and database):
# mysql -u root -p
mysql> CREATE DATABASE keystone;
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '$KEYSTONE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '$KEYSTONE_DBPASS';
mysql> exit
- The Identity service uses this database to store information. Specify the location of the database in the configuration file. In this guide, we use a MariaDB (MySQL) database on the controller node with the username keystone.
Edit /etc/keystone/keystone.conf and change the [database] section:
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://keystone:$KEYSTONE_DBPASS@$MYSQL_IP/keystone
...
- By default, the packages may create an SQLite database. Delete the keystone.db file created in the /var/lib/keystone/ directory (if it exists) so that it does not get used by mistake:
# rm /var/lib/keystone/keystone.db
- Create the database tables for the Identity service:
# su -s /bin/sh -c "keystone-manage db_sync" keystone
- Define an authorization token to use as a shared secret between the Identity service and other OpenStack services. Use openssl to generate a random token and store it in the configuration file:
# openssl rand -hex 10
- Edit /etc/keystone/keystone.conf and change the [DEFAULT] section, replacing $ADMIN_TOKEN with the result of the previous command:
[DEFAULT]
# A "shared secret" between keystone and other openstack services
admin_token = $ADMIN_TOKEN
...
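If you prefer not to edit the file by hand, one possible approach (assuming the openstack-utils package, which provides the openstack-config helper, is installed) is to capture the token in a shell variable and write it to keystone.conf directly:
# ADMIN_TOKEN=$(openssl rand -hex 10)
# echo $ADMIN_TOKEN
# openstack-config --set /etc/keystone/keystone.conf DEFAULT admin_token $ADMIN_TOKEN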
- Configure the token and SQL drivers. Edit the /etc/keystone/keystone.conf file and update the [token] section:
[token]
...
provider = keystone.token.providers.uuid.Provider
driver = keystone.token.persistence.backends.sql.Token
and the [revoke] section
[revoke]
...
driver = keystone.contrib.revoke.backends.sql.Revoke
Also enable verbose logging in the [DEFAULT] section to help with troubleshooting:
[DEFAULT]
...
verbose = True
- Restart the Identity service and configure it to start when the system boots:
# sudo systemctl start openstack-keystone.service
# sudo systemctl enable openstack-keystone.service
- By default, the Identity service stores expired tokens in the database indefinitely. While potentially useful for auditing in production environments, the accumulation of expired tokens will considerably increase the database size and consequently decrease service performance, particularly in test environments with limited resources. We recommend configuring a periodic task using cron to purge expired tokens hourly:
(crontab -l -u keystone 2>&1 | grep -q token_flush) || echo '@hourly /usr/bin/keystone-manage token_flush >/var/log/keystone/keystone-tokenflush.log 2>&1' >> /var/spool/cron/keystone
Define users, tenants, and roles
To bootstrap the Identity service, export the authorization token and the admin endpoint so that the keystone client can authenticate (replace $ADMIN_TOKEN with the token you stored in keystone.conf):
$ export OS_SERVICE_TOKEN=$ADMIN_TOKEN
$ export OS_SERVICE_ENDPOINT=http://controller:35357/v2.0
Create an administrative user
Follow these steps to create an administrative user (called admin), role (called admin), and tenant (called admin). You will use this account for administrative interaction with the OpenStack cloud.
Note
Any role that you create must map to roles specified in the policy.json file included with each OpenStack service. The default policy file for most services grants administrative access to the admin role.
- Create the admin user (replace $ADMIN_PASS with a strong password and replace $ADMIN_EMAIL with an email address to associate with the account):
$ keystone user-create --name=admin --pass=$ADMIN_PASS --email=$ADMIN_EMAIL
- Create the admin role:
$ keystone role-create --name=admin
- Create the admin tenant:
$ keystone tenant-create --name=admin --description="Admin Tenant"
- You must now link the admin user, admin role, and admin tenant together using the user-role-add option:
$ keystone user-role-add --user=admin --tenant=admin --role=admin
- Link the admin user, _member_ role, and admin tenant:
$ keystone user-role-add --user=admin --role=_member_ --tenant=admin
NOTE Using the --tenant option automatically assigns the _member_ role to a user. This option will also create the _member_ role if it does not exist. This is different from Icehouse OpenStack, where you had to create the _member_ role manually.
Create a service tenant
OpenStack services also require a username, tenant, and role to access other OpenStack services. In a basic installation, OpenStack services typically share a single tenant named service.
You will create additional usernames and roles under this tenant as you install and configure each service.
- Create the service tenant:
$ keystone tenant-create --name=service --description="Service Tenant"
Create the Identity service
Keystone, the OpenStack Identity service, must itself be registered as a service:
$ keystone service-create --name=keystone --type=identity --description="OpenStack Identity"
The output should be something like this:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | OpenStack Identity |
| enabled | True |
| id | 68683d6ffd7d49859dd9f7fe2fd12be7 |
| name | keystone |
| type | identity |
+-------------+----------------------------------+
Next create the endpoint:
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ identity / {print $2}') --publicurl=http://controller:5000/v2.0 --internalurl=http://controller:5000/v2.0 --adminurl=http://controller:35357/v2.0
Output example:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| adminurl | http://controller:35357/v2.0 |
| id | 0c34c6e6fd5f411a9e349eeca1c9b3db |
| internalurl | http://controller:5000/v2.0 |
| publicurl | http://10.10.10.3:5000/v2.0 |
| region | regionOne |
| service_id | 68683d6ffd7d49859dd9f7fe2fd12be7 |
+-------------+----------------------------------+
Verify the Identity service installation
- To verify that the Identity service is installed and configured correctly, clear the values of the OS_SERVICE_TOKEN and OS_SERVICE_ENDPOINT environment variables:
$ unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
These variables, which were used to bootstrap the administrative user and register the Identity service, are no longer needed.
You can now use regular user name-based authentication.
- Request an authentication token using the admin user and the password you chose for that user:
$ keystone --os-username=admin --os-password=$ADMIN_PASS --os-auth-url=http://controller:35357/v2.0 token-get
In response, you receive a token paired with your user ID. This verifies that the Identity service is running on the expected endpoint and that your user account is established with the expected credentials.
- Verify that authorization behaves as expected. To do so, request authorization on a tenant:
$ keystone --os-username=admin --os-password=$ADMIN_PASS --os-tenant-name=admin --os-auth-url=http://controller:35357/v2.0 token-get
In response, you receive a token that includes the ID of the tenant that you specified. This verifies that your user account has an explicitly defined role on the specified tenant and that the tenant exists as expected.
- You can also set your --os-* variables in your environment to simplify command-line usage. Create an admin-openrc.sh file in the root home directory with the following content:
export OS_USERNAME=admin
export OS_PASSWORD=$ADMIN_PASS
export OS_TENANT_NAME=admin
export OS_AUTH_URL=http://controller:35357/v2.0
- Add the following line to the .bashrc file in your home directory (/root/ in this example) to read in the environment variables at every login:
source /root/admin-openrc.sh
- Verify that your admin-openrc.sh file is configured correctly. Run the same command without the --os-* arguments:
$ keystone token-get
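Once the file is sourced, any keystone command should work without extra arguments; for example, listing users and tenants is a quick way to confirm that the admin credentials are valid (the exact output depends on your deployment):
$ source /root/admin-openrc.sh
$ keystone user-list
$ keystone tenant-list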
Step 4: Install Image service (Glance)
- Install the Image service:
$ sudo yum install -y openstack-glance.noarch
$ sudo yum install -y python-glanceclient.noarch
The Image service stores information about images in a database. The examples in this guide use the MySQL database that is used by other OpenStack services.
- Configure the location of the database. The Image service provides the glance-api and glance-registry services, each with its own configuration file. You must update both configuration files throughout this section. Replace $GLANCE_DBPASS with your Image service database password.
Edit /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf and edit the [database] section of each file:
[database]
connection = mysql://glance:$GLANCE_DBPASS@controller/glance
NOTE Instead of @controller you can use the $CONTROLLER_IP IP address.
- Configure the Image service to use the message broker. Replace $RABBIT_PASS with the password you have chosen for the guest account in RabbitMQ. Edit the /etc/glance/glance-api.conf file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
- By default, the packages may create an SQLite database. Delete the glance.sqlite file (if it exists) created in the /var/lib/glance/ directory so that it does not get used by mistake:
$ rm /var/lib/glance/glance.sqlite
- Use the password you created to log in to the database as root and create a database called glance and a user called glance (replace $GLANCE_DBPASS with the password you want to assign to the glance MySQL user and database):
$ mysql -u root -p
mysql> CREATE DATABASE glance;
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '$GLANCE_DBPASS';
mysql> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '$GLANCE_DBPASS';
- Create the database tables for the Image service:
$ su -s /bin/sh -c "glance-manage db_sync" glance
- Create a glance user that the Image service can use to authenticate with the Identity service. Choose a password (to replace $GLANCE_PASS in the following command) and specify an email address (to replace $GLANCE_EMAIL in the following command) for the glance user. Use the service tenant and give the user the admin role:
$ keystone user-create --name=glance --pass=$GLANCE_PASS --email=$GLANCE_EMAIL
$ keystone user-role-add --user=glance --tenant=service --role=admin
- Configure the Image service to use the Identity service for authentication.
Edit the /etc/glance/glance-api.conf and /etc/glance/glance-registry.conf files. Replace $GLANCE_PASS with the password you chose for the glance user in the Identity service.
Add or modify the following keys under the [keystone_authtoken] section (replace $CONTROLLER_PUBLIC_IP with the public IP address of the controller node and $GLANCE_PASS with a suitable password for the Glance service):
[keystone_authtoken]
auth_uri = http://$CONTROLLER_PUBLIC_IP:5000
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = glance
admin_password = $GLANCE_PASS
NOTE Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
- Modify the following key under the [paste_deploy] section of both files:
[paste_deploy]
...
flavor = keystone
In both files, also enable verbose logging:
[DEFAULT]
...
verbose = True
and set the notification driver to noop:
[DEFAULT]
...
notification_driver = noop
In the glance-api.conf file, set:
[glance_store]
...
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
- Register the Image service with the Identity service so that other OpenStack services can locate it. Register the service and create the endpoint (note that here the controller name, associated with the private IP of the controller node in the /etc/hosts file, is used for the internalurl and adminurl, while the controller public IP $CONTROLLER_PUBLIC_IP is used for the publicurl; similar settings will be used for other services):
$ keystone service-create --name=glance --type=image --description="OpenStack Image Service"
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ image / {print $2}') --publicurl=http://controller:9292 --internalurl=http://controller:9292 --adminurl=http://controller:9292
NOTE Again, you can use $CONTROLLER_PUBLIC_IP for the publicurl instead of the host name "controller".
- Restart the Glance services with their new settings and set them to start at boot time:
$ systemctl enable openstack-glance-api.service openstack-glance-registry.service
$ systemctl start openstack-glance-api.service openstack-glance-registry.service
Verify the Image service installation
To test the Image service installation, download at least one virtual machine image that is known to work with OpenStack. For example, CirrOS is a small test image that is often used for testing OpenStack deployments.
$ wget https://download.cirros-cloud.net/0.3.4/cirros-0.3.4-x86_64-disk.img
$ glance image-create --name "cirros-0.3.4-x86_64" --disk-format qcow2 --container-format bare --is-public True --progress < cirros-0.3.4-x86_64-disk.img
Confirm that the image was uploaded and display its attributes:
$ glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| defcfc7ad-56aa-2341-9553-d855997c1he0 | cirros-0.3.2-x86_64 | qcow2 | bare | 13167616 | active |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
NOTE To check where glance has put an image try:
$ mysql -h localhost -u root -p
mysql> use glance;
mysql> select value from image_locations where image_id="your image ID";
using the ID taken from glance image-list
Enable Ceph backend
Create the pool 'images':
$ ceph osd pool create images 128 128
Check the size replica of the new pool 'images' with:
$ ceph osd dump | grep replica
To be consistent with the previous Ceph configuration, if the size of the new pool is not set to 2 and min_size is not set to 1, run the following commands to set replica 2 for the pool 'images' and its min_size (the minimum number of active replicas required for r/w operations):
$ ceph osd pool set images size 2
$ ceph osd pool set images min_size 1
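You can read the values back afterwards to confirm they were applied (standard ceph commands; the output should report size: 2 and min_size: 1):
$ ceph osd pool get images size
$ ceph osd pool get images min_size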
Install and configure the Ceph client (if you haven't done so yet):
$ sudo yum install -y python-ceph.x86_64
Copy the file /etc/ceph/ceph.conf from the Ceph node onto the controller node where you are installing the Glance server.
Set up the ceph client authentication:
$ ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
$ ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
$ chown glance:glance /etc/ceph/ceph.client.glance.keyring
Edit the configuration file /etc/glance/glance-api.conf setting the following parameters:
[glance_store]
default_store = rbd
stores = glance.store.rbd.Store
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
Restart services:
$ service openstack-glance-api restart && service openstack-glance-registry restart
Upload an image to the Ceph backend (using the --store rbd option):
$ wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1503.qcow2
$ glance image-create --name centos7 --disk-format qcow2 --container-format bare --is-public True --store rbd --progress < CentOS-7-x86_64-GenericCloud-1503.qcow2
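To confirm that the image data actually landed in the Ceph pool, you can list the objects in the 'images' pool using the glance keyring created above (a minimal check; the names listed are Glance image IDs, not image names):
$ rbd ls images --id glance --keyring /etc/ceph/ceph.client.glance.keyring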
Step 5: Install Compute service (Nova)
- Install the Compute packages necessary for the controller node:
$ sudo yum install -y openstack-nova-api.noarch
$ sudo yum install -y openstack-nova-cert.noarch
$ sudo yum install -y openstack-nova-conductor.noarch
$ sudo yum install -y openstack-nova-console.noarch
$ sudo yum install -y openstack-nova-novncproxy.noarch
$ sudo yum install -y openstack-nova-scheduler.noarch
$ sudo yum install -y python-novaclient.noarch
Use those names when you need to start/stop/restart the services.
- Compute stores information in a database. In this guide, we use a MySQL database on the controller node. Configure Compute with the database location and credentials. Replace $NOVA_DBPASS with the password for the database that you will create in a later step.
Edit the [database] section in the /etc/nova/nova.conf file, adding it if necessary, to modify this key:
[database]
connection = mysql://nova:$NOVA_DBPASS@$MYSQL_IP/nova
- Configure the Compute service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the /etc/nova/nova.conf file:
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
- Set the my_ip, vncserver_listen, and vncserver_proxyclient_address configuration options to the IP address of the controller node (here the public IP is used):
Edit the /etc/nova/nova.conf file and add these lines to the [DEFAULT] section:
[DEFAULT]
...
my_ip = $PUBLIC_CONTROLLER_IP
vncserver_listen = $PUBLIC_CONTROLLER_IP
vncserver_proxyclient_address = $PUBLIC_CONTROLLER_IP
- By default, the packages may create an SQLite database. Delete the nova.sqlite file created in the /var/lib/nova/ directory (if it exists) so that it does not get used by mistake:
$ rm /var/lib/nova/nova.sqlite
- Use the password you created previously to log in as root. Create the nova database and a nova database user:
$ mysql -u root -p
mysql> CREATE DATABASE nova;
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '$NOVA_DBPASS';
mysql> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '$NOVA_DBPASS';
- Create the Compute service tables:
# su -s /bin/sh -c "nova-manage db sync" nova
- Create a nova user that Compute uses to authenticate with the Identity service. Use the service tenant and give the user the admin role (replace $NOVA_PASS with the password you have chosen for the Compute service and $NOVA_EMAIL with the email address you want to associate with the service):
$ keystone user-create --name=nova --pass=$NOVA_PASS --email=$NOVA_EMAIL
$ keystone user-role-add --user=nova --tenant=service --role=admin
- Configure Compute to use these credentials with the Identity service running on the controller.
Edit the [DEFAULT] section in the /etc/nova/nova.conf file to add this key:
[DEFAULT]
...
auth_strategy = keystone
- Add these keys to the [keystone_authtoken] section:
[keystone_authtoken]
...
auth_uri = http://controller:5000
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = $NOVA_PASS
NOTE Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
Also, in the `[glance]` section, define:
[glance]
...
host = controller
To help with debugging, enable verbose output:
[DEFAULT]
...
verbose = True
- You must register Compute with the Identity service so that other OpenStack services can locate it. Register the service and specify the endpoint:
$ keystone service-create --name=nova --type=compute --description="OpenStack Compute"
$ keystone endpoint-create --service-id=$(keystone service-list | awk '/ compute / {print $2}') --publicurl=http://controller:8774/v2/%\(tenant_id\)s --internalurl=http://controller:8774/v2/%\(tenant_id\)s --adminurl=http://controller:8774/v2/%\(tenant_id\)s
- Restart Compute services and enable them to start at boot time:
# systemctl enable openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service openstack-nova-cert.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
- To verify your configuration, list available images:
$ nova image-list
The output should look like this:
+--------------------------------------+---------------------+--------+--------+
| ID                                   | Name                | Status | Server |
+--------------------------------------+---------------------+--------+--------+
| acafc7c0-40aa-4026-9673-b879898e1fc2 | cirros-0.3.2-x86_64 | ACTIVE |        |
+--------------------------------------+---------------------+--------+--------+
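You can also check that the Compute services registered on the controller are alive (not part of the original guide, but a useful sanity check): nova-cert, nova-consoleauth, nova-scheduler and nova-conductor should all report an "up" state:
$ nova service-list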
Step 6: Install Networking service (Neutron)
NOTE The configuration of Neutron is split between the controller and network nodes. Check, beside the title of each section, on which node you have to perform the instructions.
Prerequisites (ON THE CONTROLLER NODE )
- Before you configure the OpenStack Networking service (called Neutron), you must create a database and Identity service credentials, including a user and service.
Connect to the database as the root user, create the neutron database, and grant the proper access to it. Replace $NEUTRON_DBPASS with a suitable password.
$ mysql -u root -p
mysql> CREATE DATABASE neutron;
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '$NEUTRON_DBPASS';
mysql> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '$NEUTRON_DBPASS';
- Create Identity service credentials for Networking:
Create the neutron user. Replace $NEUTRON_PASS with a suitable password and $NEUTRON_EMAIL with a suitable e-mail address.
$ keystone user-create --name neutron --pass $NEUTRON_PASS --email $NEUTRON_EMAIL
Link the neutron user to the service tenant and admin role:
$ keystone user-role-add --user neutron --tenant service --role admin
Create the neutron service:
$ keystone service-create --name neutron --type network --description "OpenStack Networking"
Create the service endpoint:
$ keystone endpoint-create --service-id $(keystone service-list | awk '/ network / {print $2}') --publicurl http://controller:9696 --adminurl http://controller:9696 --internalurl http://controller:9696
Install the Networking service (ON THE CONTROLLER NODE )
- Install the Networking components:
$ sudo yum install -y openstack-neutron.noarch
$ sudo yum install -y openstack-neutron-ml2.noarch
$ sudo yum install -y python-neutronclient
- Configure the Networking server component
The Networking server component configuration includes the database, authentication mechanism, message broker, topology change notifier, and plug-in.
Configure Networking to use the database:
Edit the /etc/neutron/neutron.conf file and add the following key to the [database] section. Replace $NEUTRON_DBPASS with the password you chose for the database.
[database]
...
connection = mysql://neutron:$NEUTRON_DBPASS@controller/neutron
- Configure Networking to use the Identity service for authentication:
Edit the /etc/neutron/neutron.conf file and add the following key to the [DEFAULT] section:
[DEFAULT]
...
auth_strategy = keystone
- Add the following keys to the [keystone_authtoken] section. Replace $NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
[keystone_authtoken]
...
auth_uri = http://controller:5000
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = $NEUTRON_PASS
NOTE Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
- Configure Networking to use the message broker:
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section. Replace $RABBIT_PASS with the password you chose for the guest account in RabbitMQ.
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
- Configure Networking to notify Compute about network topology changes:
Replace $SERVICE_TENANT_ID with the service tenant identifier (id, obtained with the command keystone tenant-list) in the Identity service and $NOVA_PASS with the password you chose for the nova user in the Identity service.
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
nova_url = http://controller:8774/v2
nova_region_name = regionOne
nova_admin_username = nova
nova_admin_tenant_id = $SERVICE_TENANT_ID
nova_admin_password = $NOVA_PASS
nova_admin_auth_url = http://controller:35357/v2.0
Note To obtain the service tenant identifier (id) you can also run:
$ source admin-openrc.sh
$ keystone tenant-get service
which should show an output like this:
+-------------+----------------------------------+
| Property | Value |
+-------------+----------------------------------+
| description | Service Tenant |
| enabled | True |
| id | f727b5ec2ceb4d71bad86dfc414449bf |
| name | service |
+-------------+----------------------------------+
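As a convenience (a sketch, not required by the guide), you can capture the identifier into a shell variable with the same awk pattern used elsewhere on this page, and check it before pasting it into neutron.conf:
$ SERVICE_TENANT_ID=$(keystone tenant-list | awk '/ service / {print $2}')
$ echo $SERVICE_TENANT_ID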
- Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services:
Edit the /etc/neutron/neutron.conf file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
Note We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/neutron.conf to assist with troubleshooting.
Configure the Modular Layer 2 (ML2) plug-in (ON THE CONTROLLER NODE )
- The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances. However, the controller node does not need the OVS agent or service because it does not handle instance network traffic.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file:
Add the following keys to the [ml2] section:
[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
Add the following key to the [ml2_type_gre] section:
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
Add the [securitygroup] section and the following keys to it:
[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True
Configure Compute to use Networking (ON THE CONTROLLER NODE)
- By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.
Edit /etc/nova/nova.conf and add the following keys to the [DEFAULT] section. Replace $NEUTRON_PASS with the password you chose for the neutron user in the Identity service.
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
security_group_api = neutron
Note By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver.
Edit also the [neutron] section as follows:
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = $NEUTRON_PASS
Finalize installation (ON THE CONTROLLER NODE )
- Create the symbolic link and populate the database:
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database:
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade juno" neutron
- Restart Compute and Networking services:
Restart Nova:
# systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service
Start Neutron and enable it to start at boot time:
# systemctl enable neutron-server.service
# systemctl start neutron-server.service
Test Neutron
List loaded extensions to verify successful launch of the neutron-server process:
$ neutron ext-list +-----------------------+-----------------------------------------------+ | alias | name | +-----------------------+-----------------------------------------------+ | security-group | security-group | | l3_agent_scheduler | L3 Agent Scheduler | | ext-gw-mode | Neutron L3 Configurable external gateway mode | | binding | Port Binding | | provider | Provider Network | | agent | agent | | quotas | Quota management support | | dhcp_agent_scheduler | DHCP Agent Scheduler | | l3-ha | HA Router extension | | multi-provider | Multi Provider Network | | external-net | Neutron external network | | router | Neutron L3 Router | | allowed-address-pairs | Allowed Address Pairs | | extraroute | Neutron Extra Route | | extra_dhcp_opt | Neutron Extra DHCP opts | | dvr | Distributed Virtual Router | +-----------------------+-----------------------------------------------+
Install the network-node components (ON NETWORK NODE)
Edit /etc/sysctl.conf to contain the following:
net.ipv4.ip_forward=1
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
Implement the changes:
$ sysctl -p
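To confirm that the kernel picked up the new values, you can read them back:
$ sysctl net.ipv4.ip_forward net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter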
To install the Networking components
$ sudo yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
(the other packages are already installed)
Configure Neutron (ON NETWORK NODE)
Edit the /etc/neutron/neutron.conf file and complete the following actions:
- In the [database] section, comment out any connection options because network nodes do not directly access the database.
- In the [DEFAULT] section, configure RabbitMQ message broker access:
```
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
```
Replace RABBIT_PASS with the password you chose for the guest account in RabbitMQ.
- In the [DEFAULT] and [keystone_authtoken] sections, configure Identity service access:
```
[DEFAULT]
...
auth_strategy = keystone

[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = NEUTRON_PASS
```
NOTE Comment out any auth_host, auth_port, and auth_protocol options because the identity_uri option replaces them.
Configure the Layer-2 (ML2) plugin (ON NETWORK NODE)
The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.
Edit the /etc/neutron/plugins/ml2/ml2_conf.ini file and complete the following actions:
- In the [ml2] section, enable the flat and generic routing encapsulation (GRE) network type drivers, GRE tenant networks, and the OVS mechanism driver:
```
[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
```
- In the [ml2_type_flat] section, configure the external flat provider network:
```
[ml2_type_flat]
...
flat_networks = external
```
- In the [ml2_type_gre] section, configure the tunnel identifier (id) range:
```
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
```
- In the [securitygroup] section, enable security groups, enable ipset, and configure the OVS iptables firewall driver:
```
[securitygroup]
...
enable_security_group = True
enable_ipset = True
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
```
- In the [ovs] section (add it if necessary), enable tunnels, configure the local tunnel endpoint, and map the external flat provider network to the br-ex external network bridge:
```
[ovs]
...
local_ip = INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
enable_tunneling = True
bridge_mappings = external:br-ex
```
Replace `INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS` with the IP address of the instance tunnels network interface on your network node.
- In the [agent] section (add it if necessary), enable GRE tunnels:
```
[agent]
...
tunnel_types = gre
```
Configure the Layer-3 (L3) agent (ON NETWORK NODE)
The Layer-3 (L3) agent provides routing services for instance virtual networks.
Edit the /etc/neutron/l3_agent.ini file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
use_namespaces = True
external_network_bridge = br-ex
router_delete_namespaces = True
Note We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/l3_agent.ini to assist with troubleshooting.
Configure the DHCP agent (ON NETWORK NODE)
The DHCP agent provides DHCP services for instance virtual networks.
Edit the /etc/neutron/dhcp_agent.ini file and add the following keys to the [DEFAULT] section:
[DEFAULT]
...
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
use_namespaces = True
dhcp_delete_namespaces = True
Note We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/dhcp_agent.ini to assist with troubleshooting.
Configure the metadata agent
The metadata agent provides configuration information such as credentials for remote access to instances.
Edit the /etc/neutron/metadata_agent.ini file and add the following keys to the [DEFAULT] section:
Replace $NEUTRON_PASS with the password you chose for the neutron user in the Identity service. Replace $METADATA_SECRET with a suitable secret for the metadata proxy (for example, you can generate a string with the openssl command, as shown at the beginning of this page).
[DEFAULT]
...
auth_url = http://controller:5000/v2.0
auth_region = regionOne
admin_tenant_name = service
admin_user = neutron
admin_password = $NEUTRON_PASS
nova_metadata_ip = controller
metadata_proxy_shared_secret = $METADATA_SECRET
Note We recommend adding verbose = True to the [DEFAULT] section in /etc/neutron/metadata_agent.ini to assist with troubleshooting.
ON THE CONTROLLER NODE: Edit the /etc/nova/nova.conf file and add the following keys to the [DEFAULT] section:
Replace $METADATA_SECRET with the secret you chose for the metadata proxy.
```
[DEFAULT]
...
service_neutron_metadata_proxy = True
neutron_metadata_proxy_shared_secret = $METADATA_SECRET
```
Still on the controller node, restart the Compute API service:
# systemctl restart openstack-nova-api.service
Configure the Open vSwitch (OVS) service
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS. The external bridge br-ex handles external instance network traffic within OVS. The external bridge requires a port on the physical external network interface to provide instances with external network access. In essence, this port bridges the virtual and physical external networks in your environment.
Before moving on, make sure that you have installed bridge-utils:
$ sudo yum install bridge-utils
Enable the OVS service to start at boot and start it:
$ systemctl enable openvswitch.service
$ systemctl start openvswitch.service
Add the integration bridge:
# ovs-vsctl add-br br-int
Add the external bridge:
# ovs-vsctl add-br br-ex
Add a port to the external bridge that connects to the physical external network interface:
Replace $INTERFACE_NAME with the actual interface name (in our case, the interface on the private network):
# ovs-vsctl add-port br-ex $INTERFACE_NAME
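At this point you can verify the bridge layout (a quick check, not part of the original guide): both br-int and br-ex should be listed, with $INTERFACE_NAME attached as a port of br-ex:
# ovs-vsctl show
# ovs-vsctl list-ports br-ex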
Finalize installation
- The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file, /etc/neutron/plugins/ml2/ml2_conf.ini. If this symbolic link does not exist, create it using the following command:
```
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:
```
# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
```
- Start the Networking services and configure them to start when the system boots:
```
# systemctl enable neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service neutron-ovs-cleanup.service
# systemctl start neutron-openvswitch-agent.service neutron-l3-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
```
Verify operation
List agents to verify successful launch of the neutron agents:
$ neutron agent-list
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| id | agent_type | host | alive | admin_state_up | binary |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
| 30275801-e17a-41e4-8f53-9db63544f689 | Metadata agent | network | :-) | True | neutron-metadata-agent |
| 4bd8c50e-7bad-4f3b-955d-67658a491a15 | Open vSwitch agent | network | :-) | True | neutron-openvswitch-agent |
| 756e5bba-b70f-4715-b80e-e37f59803d20 | L3 agent | network | :-) | True | neutron-l3-agent |
| 9c45473c-6d6d-4f94-8df1-ebd0b6838d5f | DHCP agent | network | :-) | True | neutron-dhcp-agent |
+--------------------------------------+--------------------+---------+-------+----------------+---------------------------+
Create initial networks
The schema of the initial network is shown in the image below. The image is taken from the official Juno guide, and all the addresses refer to that. We have to create the external network ext-net and the tenant network demo-net.

External network
The external network typically provides Internet access for your instances. By default, this network only allows Internet access from instances using Network Address Translation (NAT). You can enable Internet access to individual instances using a floating IP address and suitable security group rules. The admin tenant owns this network because it provides external network access for multiple tenants.
NOTE perform these commands on the controller node
- Create the network:
# neutron net-create ext-net --router:external True --provider:physical_network external --provider:network_type flat
You should obtain:
Created a new network:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | True                                 |
| id                        | 893aebb9-1c1e-48be-8908-6b947f3237b3 |
| name                      | ext-net                              |
| provider:network_type     | flat                                 |
| provider:physical_network | external                             |
| provider:segmentation_id  |                                      |
| router:external           | True                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tenant_id                 | 54cd044c64d5408b83f843d63624e0d8     |
+---------------------------+--------------------------------------+
Like a physical network, a virtual network requires a subnet assigned to it. The external network shares the same subnet and gateway associated with the physical network connected to the external interface on the network node. You should specify an exclusive slice of this subnet for router and floating IP addresses to prevent interference with other devices on the external network.
**NOTE** If you are getting an error `HTTP 400 Bad request` in creating this network and doing the operations in the sections below try to use `--tenant-id $SERVICE_TENANT_ID`, where you can find the tenant id by doing `keystone tenant-list`.
- Create a subnet on the external network
Create the subnet:
```
# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=FLOATING_IP_START,end=FLOATING_IP_END --disable-dhcp --gateway EXTERNAL_NETWORK_GATEWAY EXTERNAL_NETWORK_CIDR
```
Replace `FLOATING_IP_START` and `FLOATING_IP_END` with the first and last IP addresses of the range that you want to allocate for floating IP addresses. Replace `EXTERNAL_NETWORK_CIDR` with the subnet associated with the physical network. Replace `EXTERNAL_NETWORK_GATEWAY` with the gateway associated with the physical network, typically the ".1" IP address. You should disable DHCP on this subnet because instances do not connect directly to the external network and floating IP addresses require manual assignment.
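For example, assuming a hypothetical external network 203.0.113.0/24 with gateway 203.0.113.1 and a floating IP range 203.0.113.101-203.0.113.200 (the example values used in the official Juno guide; substitute your own), the command would be:
```
# neutron subnet-create ext-net --name ext-subnet --allocation-pool start=203.0.113.101,end=203.0.113.200 --disable-dhcp --gateway 203.0.113.1 203.0.113.0/24
```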
Tenant network
The tenant network provides internal network access for instances. The architecture isolates this type of network from other tenants. The demo tenant owns this network because it only provides network access for instances within it.
- Create the network:
```
# neutron net-create demo-net
Created a new network:
+-----------------+--------------------------------------+
| Field | Value |
+-----------------+--------------------------------------+
| admin_state_up | True |
| id | ac108952-6096-4243-adf4-bb6615b3de28 |
| name | demo-net |
| router:external | False |
| shared | False |
| status | ACTIVE |
| subnets | |
| tenant_id | cdef0071a0194d19ac6bb63802dc9bae |
+-----------------+--------------------------------------+
```
Like the external network, your tenant network also requires a subnet attached to it. You can specify any valid subnet because the architecture isolates tenant networks. By default, this subnet will use DHCP so your instances can obtain IP addresses.
- Create the subnet
```
neutron subnet-create demo-net --name demo-subnet --gateway TENANT_NETWORK_GATEWAY TENANT_NETWORK_CIDR
```
Replace TENANT_NETWORK_CIDR with the subnet you want to associate with the tenant network and TENANT_NETWORK_GATEWAY with the gateway you want to associate with it, typically the ".1" IP address.
Example using 192.168.1.0/24:
```
# neutron subnet-create demo-net --name demo-subnet \
--gateway 192.168.1.1 192.168.1.0/24
Created a new subnet:
+-------------------+--------------------------------------------------+
| Field             | Value                                            |
+-------------------+--------------------------------------------------+
| allocation_pools  | {"start": "192.168.1.2", "end": "192.168.1.254"} |
| cidr              | 192.168.1.0/24                                   |
| dns_nameservers   |                                                  |
| enable_dhcp       | True                                             |
| gateway_ip        | 192.168.1.1                                      |
| host_routes       |                                                  |
| id                | 69d38773-794a-4e49-b887-6de6734e792d             |
| ip_version        | 4                                                |
| ipv6_address_mode |                                                  |
| ipv6_ra_mode      |                                                  |
| name              | demo-subnet                                      |
| network_id        | ac108952-6096-4243-adf4-bb6615b3de28             |
| tenant_id         | cdef0071a0194d19ac6bb63802dc9bae                 |
+-------------------+--------------------------------------------------+
```
A virtual router passes network traffic between two or more virtual networks. Each router requires one or more interfaces and/or gateways that provide access to specific networks. In this case, you will create a router and attach your tenant and external networks to it.
- Create the router
```
# neutron router-create demo-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 635660ae-a254-4feb-8993-295aa9ec6418 |
| name                  | demo-router                          |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | cdef0071a0194d19ac6bb63802dc9bae     |
+-----------------------+--------------------------------------+
```
- Attach the router to the demo tenant subnet:
```
# neutron router-interface-add demo-router demo-subnet
Added interface b1a894fd-aee8-475c-9262-4342afdc1b58 to router demo-router.
```
- Attach the router to the external network by setting it as the gateway:
```
# neutron router-gateway-set demo-router ext-net
Set gateway for router demo-router
```
Verify connectivity
If the configuration is correct, the gateway of the router we have just created should occupy the lowest IP address in the floating IP allocation range. From any host on the external network you should be able to ping it. Try that and make sure you resolve any problems before moving on.
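For example, if your floating IP allocation pool starts at 203.0.113.101 (as in the hypothetical example earlier on this page), the router gateway should answer on that address; replace it with the first address of your own range:
# ping -c 4 203.0.113.101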
Note Depending on your network interface driver, you may need to disable Generic Receive Offload (GRO) to achieve suitable throughput between your instances and the external network.
To temporarily disable GRO on the external network interface while testing your environment:
# ethtool -K $INTERFACE_NAME gro off
Restart the Networking services:
# service neutron-openvswitch-agent restart
# service neutron-l3-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
Step 7: Install the dashboard (Horizon)
Install the packages:
# sudo yum install openstack-dashboard httpd mod_wsgi memcached python-memcached
Check that CACHES['default']['LOCATION'] in /etc/openstack-dashboard/local_settings.py looks like:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': '127.0.0.1:11211',
}
}
Update ALLOWED_HOSTS in /etc/openstack-dashboard/local_settings.py to allow all hosts to access the dashboard:
ALLOWED_HOSTS = ['*']
NOTE You can set the access more strictly and include only the addresses you wish to access the dashboard from; for example, if you want to access the dashboard only from localhost, from your desktop (my-desktop) and from host1 and host2, insert:
ALLOWED_HOSTS = ['localhost', 'my-desktop', 'host1', 'host2']
Edit /etc/openstack-dashboard/local_settings.py and change OPENSTACK_HOST to the hostname of your Identity service (in this case the controller node; this can be used to run the dashboard on a separate host):
OPENSTACK_HOST = "controller"
Optionally set the time zone:
TIME_ZONE = "Europe/Rome"
If you haven't set SELinux to disabled, configure it to permit the web server to connect to OpenStack services:
setsebool -P httpd_can_network_connect on
NOTE Due to a packaging bug, the dashboard CSS fails to load properly. Run the following command to resolve this issue:
chown -R apache:apache /usr/share/openstack-dashboard/static
Enable and start the httpd and memcached services:
systemctl enable httpd.service memcached.service
systemctl start httpd.service memcached.service
Now you can access the dashboard at
http://$node-name/dashboard
where $node-name is the hostname that you specified in the OPENSTACK_HOST field in /etc/openstack-dashboard/local_settings.py.
Use the hostname of the node on the public network if you want to reach the dashboard from outside.
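A quick way to check that the dashboard is being served, before logging in from a browser, is to request it with curl from the controller itself (this assumes curl is installed and the default /dashboard path; expect a 200 or a redirect to the login page):
$ curl -sI http://controller/dashboard | head -n 1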
To log in you can use the credentials of the admin or demo users. The service users' credentials are also accepted.
NOTE Usually all commands are launched as the admin user. If you access the dashboard as another user, for example nova, and create instances, they won't be shown by nova list unless you pass the nova credentials to OpenStack using the --os-* options (see nova -h for more information) or source a file like admin-openrc.sh containing those credentials.