OpenStack installation, configuration of floating IPs on RDO - csabahenk/manila GitHub Wiki

Author: Karthick Ramdoss (@kramdoss)

I'm documenting here the steps followed to configure RDO as a single-node deployment. This will serve as a reference document.

Install OpenStack on a single-node server:

  1. Update the system. (Since CentOS was used, there was no need to register with Subscription Manager; everything was simply updated from yum.)

    yum install -y yum-utils
    yum update -y
    yum update --skip-broken
    cat /etc/redhat-release

  2. Install RDO package using yum:

    sudo yum install -y https://rdoproject.org/repos/rdo-release.rpm

  3. Disable NetworkManager: OpenStack doesn't support systems with NetworkManager enabled, as it blocks the network configuration OpenStack applies.

    systemctl stop NetworkManager.service
    systemctl disable NetworkManager
    systemctl status NetworkManager

  4. Reboot the system

  5. Install Packstack:

    yum install openstack-packstack

Note: Do not run 'packstack --allinone'. The Kilo release doesn't have 'Manila' enabled by default; the answer file has to be modified before running the packstack installation.

 packstack --gen-answer-file=answer.txt
 vi answer.txt

  6. Run the packstack installation with the custom generated answer file

    packstack --answer-file=answer.txt

This completes the packstack installation for 'kilo' release with manila enabled.
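The answer-file edit above can also be scripted instead of done in vi. A minimal sketch, assuming the key is named CONFIG_MANILA_INSTALL (verify the exact key in your generated answer.txt; the sample file created here is a stand-in for the real one):

```shell
# Stand-in for a packstack-generated answer file (the real one is much larger).
cat > answer.txt <<'EOF'
CONFIG_MANILA_INSTALL=n
CONFIG_HEAT_INSTALL=n
EOF

# Enable Manila in place.
sed -i 's/^CONFIG_MANILA_INSTALL=.*/CONFIG_MANILA_INSTALL=y/' answer.txt

# Confirm the key was flipped.
grep '^CONFIG_MANILA_INSTALL=' answer.txt   # CONFIG_MANILA_INSTALL=y
```

The same one-liner works on the real generated answer file before running packstack with it.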

Configure OpenStack Network:

OpenStack networking was configured with the help of the following links:

https://access.redhat.com/articles/1146173
https://www.rdoproject.org/Neutron_with_existing_external_network

Additional reference: http://dcshetty.blogspot.in/2015/01/using-glusterfs-native-driver-in.html

  1. Clear packstack's default configuration

    source keystonerc_admin
    neutron router-list
    neutron router-gateway-clear router1
    neutron subnet-delete public_subnet
    neutron subnet-list
    neutron port-list
    neutron port-show 174.24.4.227

  2. Make /etc/sysconfig/network-scripts/ifcfg-br-ex resemble the following. (Note: this file will already exist, with the IPADDR/NETMASK entries suffixed with _br_ex; remove that suffix and fill in the missing fields.) This file configures, on br-ex, the network parameters we previously had on our physical interface.

    vim /etc/sysconfig/network-scripts/ifcfg-br-ex
    cat /etc/sysconfig/network-scripts/ifcfg-br-ex

     DEVICE=br-ex
     DEVICETYPE=ovs
     TYPE=OVSBridge
     BOOTPROTO=static
     IPADDR=10.70.36.11
     NETMASK=255.255.254.0
     GATEWAY=10.70.37.254
     DNS1=10.70.34.2
     ONBOOT=yes
    
  3. Make the physical interface's file, /etc/sysconfig/network-scripts/ifcfg-enp3s0f0 (ifcfg-eth0 on systems with legacy interface names), resemble the following (no BOOTPROTO!):

    vim /etc/sysconfig/network-scripts/ifcfg-enp3s0f0
    cat /etc/sysconfig/network-scripts/ifcfg-enp3s0f0

     DEVICE="enp3s0f0"
     ONBOOT=yes
     TYPE=OVSPort
     DEVICETYPE=ovs
     OVS_BRIDGE=br-ex
     HWADDR="00:25:90:93:5f:78"
    

This means we will bring up the interface and plug it into the br-ex OVS bridge as a port, providing the uplink connectivity.
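The pairing between the two files can be checked mechanically: OVS_BRIDGE in the port's file must match DEVICE in the bridge's file. A self-contained sketch on throwaway copies (paths under /tmp are stand-ins for /etc/sysconfig/network-scripts):

```shell
# Throwaway copies of the two ifcfg files from the steps above.
mkdir -p /tmp/ifcfg-demo
cat > /tmp/ifcfg-demo/ifcfg-br-ex <<'EOF'
DEVICE=br-ex
TYPE=OVSBridge
EOF
cat > /tmp/ifcfg-demo/ifcfg-enp3s0f0 <<'EOF'
DEVICE="enp3s0f0"
TYPE=OVSPort
OVS_BRIDGE=br-ex
EOF

# The NIC's OVS_BRIDGE must name the bridge's DEVICE, or the uplink
# never gets plugged into br-ex.
bridge=$(sed -n 's/^DEVICE=//p' /tmp/ifcfg-demo/ifcfg-br-ex)
uplink=$(sed -n 's/^OVS_BRIDGE=//p' /tmp/ifcfg-demo/ifcfg-enp3s0f0)
[ "$bridge" = "$uplink" ] && echo "pairing OK: uplink plugs into $bridge"
```

On the live system, `ovs-vsctl show` after the restart should likewise list the physical interface as a port on br-ex.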

Reboot or, alternatively:

 service network restart
 service neutron-openvswitch-agent restart
 service neutron-server restart

Modify network configuration:

source keystonerc_admin
openstack-config --set /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini ovs bridge_mappings extnet:br-ex

Note: This defines a logical name, "extnet", for our external physical L2 segment; it will be referenced as the provider network when we create the external network.

openstack-config --set /etc/neutron/plugin.ini ml2 type_drivers vxlan,flat,vlan

Note: This overcomes a packstack deployment bug where only vxlan is made available.
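As a sanity check, the value written by openstack-config can be read back out of the [ml2] section. A self-contained sketch on a local copy of the section (the real file is /etc/neutron/plugin.ini):

```shell
# Local stand-in for the [ml2] section of /etc/neutron/plugin.ini
# after the openstack-config call above.
cat > /tmp/plugin.ini <<'EOF'
[ml2]
type_drivers = vxlan,flat,vlan
tenant_network_types = vxlan
EOF

# Read type_drivers back, but only from within the [ml2] section.
awk -F' *= *' '/^\[/{in_ml2=($0=="[ml2]")} in_ml2 && $1=="type_drivers"{print $2}' /tmp/plugin.ini
# → vxlan,flat,vlan
```

On the live system the equivalent is `openstack-config --get /etc/neutron/plugin.ini ml2 type_drivers`.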

Quick read on different network types:

A local network is a network that can only be realized on a single host. This is only used in proof-of-concept or development environments, because just about any other OpenStack environment will have multiple compute hosts and/or a separate network host.

A flat network is a network that does not provide any segmentation options. A traditional L2 ethernet network is a "flat" network. Any servers attached to this network are able to see the same broadcast traffic and can contact each other without requiring a router. flat networks are often used to attach Nova servers to an existing L2 network (this is called a "provider network").

A vlan network is one that uses VLANs for segmentation. When you create a new network in Neutron, it will be assigned a VLAN ID from the range you have configured in your Neutron configuration. Using vlan networks requires that any switches in your environment are configured to trunk the corresponding VLANs.

gre and vxlan networks are very similar. They are both "overlay" networks that work by encapsulating network traffic. Like vlan networks, each network you create receives a unique tunnel id. Unlike vlan networks, an overlay network does not require that you synchronize your OpenStack configuration with your L2 switch configuration.

Recreate network topology:

The following link gives a good introduction to Neutron concepts: http://docs.openstack.org/kilo/install-guide/install/yum/content/neutron-concepts.html

Run through the following commands to re-configure public & private subnets, router gateways.

 source keystonerc_admin
 neutron net-create external_network --provider:network_type flat --provider:physical_network extnet  --router:external --shared

An external network is created. 'extnet' is the L2 segment we already defined; external_network is created over 'extnet'.

neutron subnet-create --name public_subnet --enable_dhcp=False --allocation-pool=start=10.70.36.84,end=10.70.36.93 --gateway=10.70.37.254 external_network 10.70.36.0/23

The public subnet is recreated; the floating IP range to be used for allocation, the CIDR and the gateway are provided. [The CIDR was provided by the lab team; I am not sure if it can be found without their help.]
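Before handing them to neutron, it is worth checking that the allocation pool and gateway actually fall inside the CIDR. A self-contained shell sketch using the values above (adjust to your lab's addressing):

```shell
# Convert a dotted-quad IP to a 32-bit integer.
ip2int() { echo "$1" | { IFS=. read -r a b c d; echo $(( (a<<24) + (b<<16) + (c<<8) + d )); }; }

# Network and prefix length from the subnet-create command above.
net=$(ip2int 10.70.36.0)
prefix=23
mask=$(( (0xffffffff << (32 - prefix)) & 0xffffffff ))

# True if the address lies within the CIDR.
in_cidr() { [ $(( $(ip2int "$1") & mask )) -eq $(( net & mask )) ]; }

# Pool boundaries and gateway should all be inside 10.70.36.0/23.
for ip in 10.70.36.84 10.70.36.93 10.70.37.254; do
  in_cidr "$ip" && echo "$ip is inside 10.70.36.0/23"
done
```

Note that the /23 is what makes the .37.254 gateway legal here; with a /24 it would fall outside the subnet and the subnet-create would fail.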

 neutron router-create router1
 neutron router-gateway-set router1 external_network

Create a router and set external_network as the gateway of 'router1'; this plugs a port on external_network into the router.

 neutron net-create private_network
 neutron subnet-create --name private_subnet private_network  10.0.0.0/24

Create a private network and a private subnet on it. The next command adds the private subnet to the router, so both public and private subnets are routed through 'router1'.

 neutron router-interface-add router1 private_subnet

So now we have a private network and a public network communicating through a router. The router reaches the 'internet' through its gateway on 'external_network'.

Spawning a VM, generating a floating IP and assigning the floating IP:

  1. Generate a floating IP

    neutron floatingip-create external_network
    
  2. Check the image to be used ('glance image-list' lists all available images)

    glance image-show cirros
    
  3. Create a key for passwordless ssh

    nova keypair-add --pub_key ~/.ssh/id_rsa.pub admin_key

  4. Get the list of networks

    neutron net-list

     +--------------------------------------+------------------+----------------------------------------------------+
     | id                                   | name             | subnets                                            |
     +--------------------------------------+------------------+----------------------------------------------------+
     | 78eaafe5-9255-4ddc-9ddc-ff02271502a6 | private          | cde02214-6211-4dcb-bffb-318d057b2235 10.0.0.0/24   |
     | 9e3cae30-e292-4f62-89f9-8e8090b1edf8 | external_network | 490b535e-ca74-4812-a544-1dfd4a7fcb1b 10.70.36.0/23 |
     +--------------------------------------+------------------+----------------------------------------------------+

  5. Check the list of OS flavours available and spawn a VM selecting the preferred flavour. The key generated previously should be added to allow passwordless ssh to and between the VMs. The net-id should be the id of the private network.

    nova flavor-list
    nova boot --image cirros --key-name admin_key --flavor m1.tiny --nic net-id=78eaafe5-9255-4ddc-9ddc-ff02271502a6 --poll cirros-vm

The VMs created will not be accessible unless Neutron security groups are updated to allow ssh. Multiple tenants can be hosted on OpenStack, so it is important to segregate networks between tenants; security groups are a Neutron feature that provides this restriction between tenants.

  1. Run these two commands to find out the list of security groups and tenants.

    neutron security-group-list
    keystone tenant-list

  2. Run through these two commands to find out security group and tenant mapping.

    neutron security-group-show 9b050cb7-e840-42d2-83cc-d91d9e955065
    neutron security-group-show b236ea6f-13ee-4ec7-aa80-1dbfb6d75943
    neutron security-group-show dbb2901d-0f3d-432e-b751-707e23d36666

  3. Run these commands to find out to which tenant the newly created VM belongs.

    nova list
    nova show cirros-vm

  4. Add rules to the security group to allow tcp and icmp access.

    neutron security-group-rule-create --protocol tcp --port-range-min 22 --port-range-max 22 --direction ingress b236ea6f-13ee-4ec7-aa80-1dbfb6d75943
    neutron security-group-rule-create --protocol icmp --direction ingress b236ea6f-13ee-4ec7-aa80-1dbfb6d75943
    neutron security-group-show b236ea6f-13ee-4ec7-aa80-1dbfb6d75943

Note: The VMs created can be accessed internally using the following commands; we access the VMs through the virtual router's network namespace.

sudo ip netns
sudo ip netns exec qrouter-98f16bc7-b43e-4389-9c92-a701a7cc8dac ping 10.0.0.4
sudo ip netns exec qrouter-98f16bc7-b43e-4389-9c92-a701a7cc8dac ssh -i ~/.ssh/id_rsa [email protected]

Associate floating IPs to VMs

Get the list of floating IPs to see unmapped IPs, and select the 'id' of the floating IP to be mapped:

neutron floatingip-list

Get the private IP of the VM and its corresponding port-id:

nova show cirros-vm
neutron port-list

Associate the id of the floating IP with the port-id of the VM:

neutron floatingip-associate b50b1086-b506-424f-9c9e-9620365a58ba e803ff78-e06f-4ec1-8957-36dbf1e593a6

Ping and ssh from the internet to the VM using the floating IP:

ping 10.70.36.85
ssh [email protected]

Reading materials that were helpful:

https://www.youtube.com/watch?v=IGGgVuZe7UA