Compute agents 3.1 secgrp

Installing and Configuring the OpFlex and Host Agents

This section describes how to install and configure the OpFlex and host agents.

Note: Compute nodes require installation and configuration of the Neutron OpFlex agent (neutron-opflex-agent) and the OpFlex agent that programs OVS (agent-ovs). You may also want to deploy these agents on the controller/network nodes if the data plane needs to be extended to those nodes.

Step 1. Install packages

Install the neutron-opflex-agent and agent-ovs packages from the repository:

yum install neutron-opflex-agent agent-ovs
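To confirm that both packages installed successfully, you can optionally query the RPM database:

rpm -q neutron-opflex-agent agent-ovs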

Step 2. openvswitch_agent.ini

Ensure that the /etc/neutron/plugins/ml2/openvswitch_agent.ini file contains the settings shown in the example below:

[ovs]

enable_tunneling = False

integration_bridge = br-int

Also ensure that any configuration lines for tunnel_bridge, vxlan_udp_port, and tunnel_types are deleted or commented out.
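To verify the result, you can print the relevant settings and confirm that no tunnel options remain active (an optional check):

grep -E 'enable_tunneling|integration_bridge|tunnel_bridge|vxlan_udp_port|tunnel_types' /etc/neutron/plugins/ml2/openvswitch_agent.ini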

Step 3. Disable agents

Stop and disable the neutron-openvswitch-agent service on the node using the following commands:

systemctl stop neutron-openvswitch-agent

systemctl disable neutron-openvswitch-agent
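You can confirm that the service is stopped and will not start at boot (an optional check):

systemctl is-active neutron-openvswitch-agent

systemctl is-enabled neutron-openvswitch-agent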

Step 4. agent-ovs configuration

The agent-ovs service reads its configuration from the /etc/opflex-agent-ovs/opflex-agent-ovs.conf file, and the conf.d subdirectory allows granular override of specific settings in that file using smaller JSON-formatted files. Create a new /etc/opflex-agent-ovs/conf.d/10-opflex-connection.conf file with the example contents below:

{
    "opflex": {
        "domain": "comp/prov-OpenStack/ctrlr-[<APIC_SYSTEM_ID>]-<APIC_SYSTEM_ID>/sw-InsiemeLSOid",
        "name": "<hostname of this system>",
        "peers": [
            {"hostname": "10.0.0.30", "port": "8009"}
        ],
        "ssl": {
            "mode": "encrypted"
        }
    }
}

where:

  • <APIC_SYSTEM_ID> is the unique name identifying the OpenStack cluster, discussed earlier.
  • <hostname of this system> is the hostname of the current node.
  • The IP address in the peers entry (10.0.0.30 in this example) is a fabric interface used for OpFlex communication, assuming the ACI fabric was installed with the default IP address pool for tunnel endpoints (10.0.0.0/16). If this pool was altered during the fabric install, change the address used here to match your fabric. SSH to a leaf switch and use the show ip interface command to identify the addresses in use. The hostname address for the OpFlex peer is the anycast IP address assigned to the SVI of the infra VLAN on the leaf switches.
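Because agent-ovs will fail to start on malformed configuration, it can be worth validating the JSON syntax of the new file after substituting the placeholders (an optional check using Python's standard json.tool module):

python -m json.tool /etc/opflex-agent-ovs/conf.d/10-opflex-connection.conf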

If you are using VXLAN encapsulation, follow Step 4a; if you plan to use VLAN encapsulation, follow Step 4b.

Step 4a. agent-ovs configuration - using VXLAN encapsulation

The OpFlex configuration requires a second set of override values specific to the VXLAN configuration between the host and the Leaf switch. Create a new /etc/opflex-agent-ovs/conf.d/20-vxlan-aci-renderer.conf file using the example contents shown below:

{
    "renderers": {
        "stitched-mode": {
            "int-bridge-name": "br-fabric",
            "access-bridge-name": "br-int",
            "encap": {
                "vxlan" : {
                    "encap-iface": "br-fab_vxlan0",
                    "uplink-iface": "eth1.<infra_VLAN>",
                    "uplink-vlan": <infra_VLAN>,
                    "remote-ip": "10.0.0.32",
                    "remote-port": 8472
                }
            },
            "flowid-cache-dir": "/var/lib/opflex-agent-ovs/ids"
        }
    }
}

Note: Replace <infra_VLAN> with the ACI infra VLAN ID. eth1 in the example is the uplink interface used by the host for tenant network traffic.
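If the infra VLAN subinterface does not already exist on the host, it can be created with standard Linux tooling. A minimal sketch, assuming eth1 as the uplink and a hypothetical infra VLAN ID of 4093 (substitute your own values; configure it persistently through your distribution's network scripts for production):

ip link add link eth1 name eth1.4093 type vlan id 4093

ip link set dev eth1.4093 up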

The remote-ip address in the 20-vxlan-aci-renderer.conf file is a default fabric interface for OpFlex communication, assuming the ACI fabric was installed with the default IP address pool for tunnel endpoints (10.0.0.0/16). If this pool was altered during the fabric install, change the address used here to match your fabric. SSH to a leaf switch and use the show vlan extended and show ip interface commands to identify the addresses in use. The remote-ip address matches the anycast IP address assigned to interface Loopback 1023 on the leaf switches. TODO: Add how to get this without having to log into the switch.

To use VXLAN encapsulation between the OpenStack servers and the ACI leaf switches, a VXLAN interface needs to be defined in OVS. This interface name must match the encap-iface setting in the renderer configuration above, and the br-fabric bridge must already exist (see Step 5). The interface can be created using the following command:

ovs-vsctl add-port br-fabric br-fab_vxlan0 -- set Interface br-fab_vxlan0 type=vxlan options:remote_ip=flow options:key=flow options:dst_port=8472
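You can confirm that the port was created with the expected type and options (an optional check):

ovs-vsctl list interface br-fab_vxlan0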

Step 4b. agent-ovs configuration - using VLAN encapsulation

The OpFlex configuration requires a second set of override values specific to the VLAN configuration between the host and the Leaf switch. Create a new /etc/opflex-agent-ovs/conf.d/20-vlan-aci-renderer.conf file using the example contents shown below:

{
    "renderers": {
        "stitched-mode": {
            "int-bridge-name": "br-fabric",
            "access-bridge-name": "br-int",
            "encap": {
                "vlan" : {
                    "encap-iface": "<tenant-VLAN-trunk>"
                }
            },
            "flowid-cache-dir": "/var/lib/opflex-agent-ovs/ids"
        }
    }
}

The interface for OpenStack tenant networking from the compute nodes is a physical interface supporting VLAN trunking. In some cases this will be the parent interface of the infra VLAN subinterface. A vPC and Cisco VIC based configuration is described in the Manually Configure the Host vPC section; in that case, this would be the separate main-bond interface where LACP traffic is transmitted. This interface name must match the encap-iface setting in the renderer configuration above. Create the OVS bridge br-fabric and add the tenant VLAN trunk interface to it, using the following command syntax:

ovs-vsctl add-br br-fabric

ovs-vsctl add-port br-fabric <tenant-VLAN-trunk>
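To confirm that the trunk interface was attached to the bridge (an optional check):

ovs-vsctl list-ports br-fabric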

Step 5. Add OVS bridge br-fabric

If it was not already created in Step 4b, add the OVS bridge br-fabric with the following command:

ovs-vsctl add-br br-fabric

Step 6. metadata_agent.ini

Make sure the metadata agent configuration files are correct on the compute nodes. If they are not, you should be able to replace them with the ones from the controllers.
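One way to do this is to copy the file from a controller over SSH. A minimal sketch, assuming a hypothetical controller hostname of controller01:

scp controller01:/etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini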

Step 7. Start and enable agents

With the OpFlex configuration in place, restart and enable the agent-ovs and neutron-opflex-agent services by entering the following commands:

systemctl restart agent-ovs

systemctl restart neutron-opflex-agent

systemctl enable agent-ovs

systemctl enable neutron-opflex-agent
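You can confirm that both services are running and enabled (an optional check):

systemctl is-active agent-ovs neutron-opflex-agent

systemctl is-enabled agent-ovs neutron-opflex-agent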

Step 8. Add an iptables rule to allow VXLAN traffic on the compute nodes

Edit /etc/rc.local and add the following line:

iptables -I INPUT -p udp -m multiport --dports 8472 -m comment --comment "vxlan" -m state --state NEW -j ACCEPT

Make sure /etc/rc.d/rc.local is executable:

chmod u+x /etc/rc.d/rc.local

Reboot the compute nodes.
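After the reboot, you can verify that the rule is active (an optional check):

iptables -L INPUT -n | grep 8472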
