# 3. Compute node Installation
In this installation guide, we use the KVM hypervisor.

Install nova-compute on all compute nodes:

```
$ sudo yum install openstack-nova-compute sysfsutils
```

If prompted to create a supermin appliance, respond yes.
Edit the `/etc/nova/nova.conf` configuration file and add these lines to the appropriate sections:

```
[DEFAULT]
...
auth_strategy = keystone
...
[database]
# The SQLAlchemy connection string used to connect to the database
connection = mysql://nova:$NOVA_DBPASS@$MYSQL_IP/nova

[keystone_authtoken]
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = nova
admin_password = $NOVA_PASS
```

Replace `$NOVA_DBPASS` and `$MYSQL_IP` with the Nova database password and the IP address of the database host, and `$NOVA_PASS` with the password you chose for the nova user in the Identity service.
Configure the Compute service to use the RabbitMQ message broker by setting these configuration keys in the [DEFAULT] configuration group of the `/etc/nova/nova.conf` file. Replace `$RABBIT_PASS` with the password you chose for the guest account in RabbitMQ.

```
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
```
Configure Compute to provide remote console access to instances. Edit `/etc/nova/nova.conf` and add the following keys under the [DEFAULT] section, replacing `$COMPUTE_NODE_PRIVATE_IP` and `$MANAGEMENT_INTERFACE_IP_ADDRESS` with the management (private) IP address of the compute node:

```
[DEFAULT]
...
my_ip = $COMPUTE_NODE_PRIVATE_IP
vnc_enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $MANAGEMENT_INTERFACE_IP_ADDRESS
novncproxy_base_url = http://controller:6080/vnc_auto.html
```
NOTE: If the web browser used to access remote consoles runs on a host that cannot resolve the controller hostname, you must replace controller with the management interface IP address of the controller node.
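For example, assuming 10.0.0.11 is the controller's management IP address (a hypothetical value, not taken from this guide):

```
[DEFAULT]
...
novncproxy_base_url = http://10.0.0.11:6080/vnc_auto.html
```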
Specify the host that runs the Image Service. Edit the `/etc/nova/nova.conf` file and add these lines to the [DEFAULT] section:

```
[DEFAULT]
...
glance_host = controller
```
To assist with troubleshooting, enable the verbose option:

```
[DEFAULT]
...
verbose = True
```
You must determine whether your system's processor and/or hypervisor support hardware acceleration for virtual machines. Run the following command:

```
$ egrep -c '(vmx|svm)' /proc/cpuinfo
```

If this command returns a value of one or greater, your system supports hardware acceleration, which typically requires no additional configuration.
If this command returns a value of zero, your system does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
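As an optional extra check, you can confirm that the KVM kernel modules are loaded:

```
$ lsmod | grep kvm
```

On Intel hosts this should list kvm and kvm_intel (kvm_amd on AMD hosts).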
NOTE: The configuration file `/etc/nova/nova-compute.conf` does not exist on CentOS. Search for [libvirt] in `/etc/nova/nova.conf` and edit:

```
[libvirt]
...
virt_type = qemu
```
Enable and start the services:

```
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
```
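You can verify that both services came up correctly:

```
# systemctl status libvirtd.service openstack-nova-compute.service
```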
Edit `/etc/sysctl.conf` to contain the following:

```
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
```

Implement the changes:

```
# sysctl -p
```
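To confirm the change took effect, query the values (both should print 0):

```
# sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
```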
Install the Networking components on the compute nodes:

```
# yum install openstack-neutron-ml2 openstack-neutron-openvswitch
```
The Networking common component configuration includes the authentication mechanism, message broker, and plug-in.

Configure Networking to use the Identity service for authentication. Edit the `/etc/neutron/neutron.conf` file and add the following key to the [DEFAULT] section:

```
[DEFAULT]
...
auth_strategy = keystone
```

Add the following keys to the [keystone_authtoken] section, replacing `$NEUTRON_PASS` with the password you chose for the neutron user in the Identity service:

```
[keystone_authtoken]
...
auth_uri = http://controller:5000/v2.0
identity_uri = http://controller:35357
admin_tenant_name = service
admin_user = neutron
admin_password = $NEUTRON_PASS
```
Configure Networking to use the message broker. Edit the `/etc/neutron/neutron.conf` file and add the following keys to the [DEFAULT] section, replacing `$RABBIT_PASS` with the password you chose for the guest account in RabbitMQ:

```
[DEFAULT]
...
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = $RABBIT_PASS
```
Configure Networking to use the Modular Layer 2 (ML2) plug-in and associated services. Edit the `/etc/neutron/neutron.conf` file and add the following keys to the [DEFAULT] section:

```
[DEFAULT]
...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
```

NOTE: We recommend adding verbose = True to the [DEFAULT] section in `/etc/neutron/neutron.conf` to assist with troubleshooting.
### To configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Open vSwitch (OVS) mechanism (agent) to build the virtual networking framework for instances.

NOTE: In this guide we use gre as the driver type for the network.
Edit the `/etc/neutron/plugins/ml2/ml2_conf.ini` file. Add the following keys to the [ml2] section:

```
[ml2]
...
type_drivers = flat,gre
tenant_network_types = gre
mechanism_drivers = openvswitch
```
Add the following keys to the [ml2_type_gre] section:

```
[ml2_type_gre]
...
tunnel_id_ranges = 1:1000
```
Add the [ovs] section and the following keys to it:

```
[ovs]
...
local_ip = $INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS
enable_tunneling = True
```

Replace `$INSTANCE_TUNNELS_INTERFACE_IP_ADDRESS` with the IP address of the instance tunnels network interface on your compute node (usually the private IP; in our case it is the only one available).
Add the [securitygroup] section and the following keys to it:

```
[securitygroup]
...
firewall_driver = neutron.agent.linux.iptables_firewall.OVSHybridIptablesFirewallDriver
enable_security_group = True
enable_ipset = True
```
In the [agent] section, enable GRE tunnels:

```
[agent]
...
tunnel_types = gre
```
The OVS service provides the underlying virtual networking framework for instances. The integration bridge br-int handles internal instance network traffic within OVS.

Enable and start the OVS service:

```
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
```
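You can inspect the OVS state at any time with ovs-vsctl; the br-int integration bridge will appear once the neutron-openvswitch-agent (started later in this guide) creates it:

```
# ovs-vsctl show
```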
### To configure Compute to use Networking
By default, most distributions configure Compute to use legacy networking. You must reconfigure Compute to manage networks through Networking.

Edit `/etc/nova/nova.conf` and add the following keys to the [DEFAULT] section:

```
[DEFAULT]
...
network_api_class = nova.network.neutronv2.api.API
security_group_api = neutron
linuxnet_interface_driver = nova.network.linux_net.LinuxOVSInterfaceDriver
firewall_driver = nova.virt.firewall.NoopFirewallDriver
```
Then add the following keys to the [neutron] section:

```
[neutron]
...
url = http://controller:9696
auth_strategy = keystone
admin_auth_url = http://controller:35357/v2.0
admin_tenant_name = service
admin_username = neutron
admin_password = $NEUTRON_PASS
```

Replace `$NEUTRON_PASS` with the password you chose for the neutron user in the Identity service.

By default, Compute uses an internal firewall service. Since Networking includes a firewall service, you must disable the Compute firewall service by using the nova.virt.firewall.NoopFirewallDriver firewall driver, as set above.
### To finalize the installation
The Networking service initialization scripts expect a symbolic link `/etc/neutron/plugin.ini` pointing to the ML2 plug-in configuration file, `/etc/neutron/plugins/ml2/ml2_conf.ini`. If this symbolic link does not exist, create it using the following command:

```
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
```
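You can confirm the link was created and points at the ML2 configuration file:

```
# ls -l /etc/neutron/plugin.ini
```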
Due to a packaging bug, the Open vSwitch agent initialization script explicitly looks for the Open vSwitch plug-in configuration file rather than the symbolic link `/etc/neutron/plugin.ini` pointing to the ML2 plug-in configuration file. Run the following commands to resolve this issue:

```
# cp /usr/lib/systemd/system/neutron-openvswitch-agent.service /usr/lib/systemd/system/neutron-openvswitch-agent.service.orig
# sed -i 's,plugins/openvswitch/ovs_neutron_plugin.ini,plugin.ini,g' /usr/lib/systemd/system/neutron-openvswitch-agent.service
```
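Since this edits a systemd unit file, reload the systemd configuration so the change is picked up (this step is implied by the edit above):

```
# systemctl daemon-reload
```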
Restart the Compute service:

```
# systemctl restart openstack-nova-compute.service
```

Enable and start the Open vSwitch (OVS) agent:

```
# systemctl enable neutron-openvswitch-agent.service
# systemctl start neutron-openvswitch-agent.service
```
NOTE: To verify the installation, on the controller node run:

```
$ nova service-list
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated_at                 | Disabled Reason |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
| 1  | nova-conductor   | controller | internal | enabled | up    | 2014-09-16T23:54:02.000000 | -               |
| 2  | nova-consoleauth | controller | internal | enabled | up    | 2014-09-16T23:54:04.000000 | -               |
| 3  | nova-scheduler   | controller | internal | enabled | up    | 2014-09-16T23:54:07.000000 | -               |
| 4  | nova-cert        | controller | internal | enabled | up    | 2014-09-16T23:54:00.000000 | -               |
| 5  | nova-compute     | compute1   | nova     | enabled | up    | 2014-09-16T23:54:06.000000 | -               |
+----+------------------+------------+----------+---------+-------+----------------------------+-----------------+
```

and

```
$ neutron agent-list
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
| id                                   | agent_type         | host     | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
...
| a5a49051-05eb-4b4f-bfc7-d36235fe9131 | Open vSwitch agent | compute1 | :-)   | True           | neutron-openvswitch-agent |
+--------------------------------------+--------------------+----------+-------+----------------+---------------------------+
```

The nova-compute service on the compute node should be up, and its Open vSwitch agent should be alive.
### To move the instances directory to CephFS

- Install packages:

```
# yum install ceph-fuse
```

- Copy the keyring (file `ceph.client.admin.keyring`) from the controller node to the `/etc/ceph/` directory on the compute node, in order to use cephx for authentication.

- Mount ceph-fs (you can verify the mount as shown after this list):

```
# mkdir /ceph-fs
# ceph-fuse -m <mon IP>:6789 /ceph-fs
```

- Stop the nova service:

```
# service openstack-nova-compute stop
```

- Move the instances directory to ceph-fs:

```
# mkdir -p /ceph-fs/nova
# cp -r /var/lib/nova/* /ceph-fs/nova/
# rm -r /var/lib/nova/
# ln -s /ceph-fs/nova/ /var/lib/nova
```

- Change the owner of the nova dir:

```
# chown -R nova:nova /ceph-fs/nova
```

- Start the nova service:

```
# service openstack-nova-compute start
```
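A quick check that the CephFS mount, the nova symlink, and the service are all in place:

```
# df -h /ceph-fs
# ls -l /var/lib/nova
# systemctl status openstack-nova-compute.service
```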
In order to configure live migration, change the lines in `/etc/libvirt/libvirtd.conf` as described in the Prisma guide.

NOTE: The files `/etc/init/libvirt-bin.conf` and `/etc/default/libvirt-bin` are not present on CentOS, so you only need to change the first file. (TO BE VERIFIED)
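For reference, a minimal sketch of the libvirtd.conf listener settings commonly used for live migration on a trusted management network; these values are not taken from the Prisma guide, so check them against it before applying:

```
# /etc/libvirt/libvirtd.conf -- example values, to be confirmed
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
```

On CentOS 7 the daemon must also be started with the --listen flag, typically by setting LIBVIRTD_ARGS="--listen" in `/etc/sysconfig/libvirtd`, and then restarting libvirtd.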