IT: HOWTO: Setup Octavia LBAAS
OpenStack has offered LBaaS in Neutron for several releases. Neutron-LBAAS has since been retired in favor of Octavia, which offers many improvements and enables the new virtual and software network technologies in the pipeline. Among those improvements, amphorae are deployed as VMs rather than as controller services, making the whole architecture much more scalable.
One notable facility in Octavia is support for redundant load-balancer units (amphorae) for failover and capacity.
Setting up Octavia has been an ordeal, and it deserves its own page. The default installation has many issues that are neither explained in the docs nor captured in Ansible, and some things are misdocumented outright.
The Octavia management layer interacts with amphorae over a private management network. This network does not provision properly out of the box. Some Neutron and OVS configuration is not handled by Ansible and needs to be edited into .ini files directly. VLAN interfaces need to be established on all control nodes, with startup scripts and some OVS action in the network containers.
The amphora is the basic load-balancer unit in Octavia. As a VM living alongside compute resources, it allows better scaling than the Neutron-LBAAS model, where balancers lived on the network nodes, usually the controllers.
A custom-built amphora image is required, and producing one was worse than non-trivial. The documentation is insufficient across the board, and some shortcomings exist in the code, requiring patches.
The management services communicate with amphorae using certs on both ends. The managers create certs for the amphorae and install them on amphora creation. This process isn't fully working at the moment and requires a code patch on the managers, and some hand-configuration.
Enable Octavia in globals.yml:
enable_octavia: "yes"
Kolla-ansible does allow for generation of Octavia certs. Start by editing these configurables into globals.yml:
octavia_certs_country: US
octavia_certs_state: California
octavia_certs_organization: OpenStack
octavia_certs_organizational_unit: Octavia
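With those values in place, kolla-ansible can generate the CA and client certs for you. On recent releases this is a dedicated subcommand - a minimal sketch, assuming your kolla-ansible version ships it (check kolla-ansible --help if unsure):

# Generates the Octavia CA and client certificates used between the managers and amphorae
kolla-ansible -i $INVENTORY octavia-certificates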
I'm running the Octavia management network on a VLAN on my external physical network. To enable this interface on all nodes (since the amphorae run on compute nodes) I have to enable provider networks in Neutron. The 'v-lbaas' network interface isn't defined anywhere yet - this is one I have to set up outside of kolla-ansible.
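For reference, here's roughly what that host-side interface setup looks like on each node - a sketch only, assuming the physical uplink is named eno1 (yours will differ) and using VLAN 131 from the config below; persist it with whatever startup mechanism your distro uses:

# Create and bring up a VLAN subinterface for the Octavia management network
ip link add link eno1 name v-lbaas type vlan id 131
ip link set v-lbaas up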
In globals.yml:
enable_neutron_provider_networks: "yes"
octavia_amp_network:
  name: lb-mgmt-net
  provider_network_type: vlan
  provider_segmentation_id: 131
  provider_physical_network: physnet1
  external: false
  shared: false
  subnet:
    name: lb-mgmt-subnet
    cidr: "172.31.0.0/24"
    allocation_pool_start: "172.31.0.10"
    allocation_pool_end: "172.31.0.254"
    gateway_ip: "172.31.0.241/24"
    enable_dhcp: yes
octavia_network_interface: v-lbaas
This configuration isn't sufficient to drive network setup. I also have to map the network:VLAN directly into neutron's configs. In $KOLLA_ANSIBLE_CODE_CHECKOUT/share/kolla-ansible/ansible/roles/neutron/templates/ml2_conf.ini.j2:
[ml2_type_vlan]
{% if enable_ironic | bool %}
network_vlan_ranges = physnet1:131:131
{% else %}
network_vlan_ranges = physnet1:131:131
{% endif %}
More network configuration is required, which I'll detail later, in the Post-Install section.
Configure a flavor for your amphora instances, in globals.yml:
octavia_amp_flavor:
  name: "amphora"
  is_public: no
  vcpus: 1
  ram: 1024
  disk: 5
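After deployment you can confirm the flavor exists - it's private to the service project, hence --all here; assumes admin credentials are sourced:

openstack flavor list --all | grep amphora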
The octavia-workers need some additional configuration to provision amphora with their certs. Create this file: /etc/kolla/config/octavia/octavia-worker.conf:
[controller_worker]
user_data_config_drive = true
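Kolla merges this drop-in into the worker's octavia.conf. After deployment, a quick way to confirm the setting landed in the running container - a sketch, assuming kolla's usual config path inside the container:

ssh root@$CONTROLLER docker exec octavia_worker grep user_data_config_drive /etc/octavia/octavia.conf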
Some code patching is required on the octavia-workers, which I'll come back to post-deployment.
I'm currently deploying Octavia after I do a full Ceph/OpenStack deployment with all my other components - I update globals.yml between deployments. I will probably bring all configs together at some point in the future, after I have my tooling worked out better, and have more confidence in the repeatability of the process.
Run this to deploy Octavia. Note the other services I'm updating, too: notably the horizon dashboard and neutron.
kolla-ansible -i $INVENTORY deploy --tags common,horizon,octavia,neutron
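Once the deploy finishes, a quick sanity check that the Octavia containers came up on every controller - a sketch, where CONTROLLERS is a placeholder for your own list of controller hostnames:

# CONTROLLERS: space-separated controller hostnames (placeholder - substitute your own)
for C in $CONTROLLERS; do
  ssh root@$C "docker ps --filter name=octavia --format '{{.Names}}: {{.Status}}'"
done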
Some code patching is required on the octavia-workers to allow the user data (certs) to be transferred to new amphorae.
This can also be done before the containers are built and stuffed into your registry, see the "Custom Containers" section of the Kolla-Ansible Container Management HOWTO for that.
First, the patch:
From d1c123ea91f0b4983826bcad447291f9e922b6bc Mon Sep 17 00:00:00 2001
From: Adrian Vladu
Date: Fri, 15 Jan 2021 14:06:18 +0200
Subject: [PATCH] Fix userdata template

---
 octavia/common/jinja/templates/user_data_config_drive.template | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/octavia/common/jinja/templates/user_data_config_drive.template b/octavia/common/jinja/templates/user_data_config_drive.template
index 4635c894..a927a78d 100644
--- a/octavia/common/jinja/templates/user_data_config_drive.template
+++ b/octavia/common/jinja/templates/user_data_config_drive.template
@@ -26,7 +26,7 @@ write_files:
 {%- for key, value in user_data.items() %}
 -   path: {{ key }}
     content: |
-        {{ value|indent(8) }}
+        {{ (value if value is string else value.decode('utf-8'))|indent(8) }}
 {%- endfor -%}
 {# restart agent now that configurations are in place #}
--
2.25.1
The easiest way to apply this patch to all running octavia-workers is to apply it to local octavia code, then copy it into all workers.
Get the code and apply the patch:
CODE_DIR=you/tell/me
mkdir -p $CODE_DIR/openstack/ && cd $CODE_DIR/openstack/ && git clone https://github.com/openstack/octavia
cd octavia && git apply $PATH_TO_PATCH/octavia-Fix-userdata-template.patch
For each controller, do this:
rsync octavia/common/jinja/templates/user_data_config_drive.template $CONTROLLER:/tmp/
ssh root@$CONTROLLER docker cp /tmp/user_data_config_drive.template \
  octavia_worker:/var/lib/kolla/venv/lib/python3.6/site-packages/octavia/common/jinja/templates/user_data_config_drive.template
ssh root@$CONTROLLER docker cp /tmp/user_data_config_drive.template \
  octavia_worker:/octavia-base-source/octavia-7.1.2.dev2/octavia/common/jinja/templates/user_data_config_drive.template
There's no need to restart the octavia-workers afterward.
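To confirm the patched template is what the worker will render, grep for the new decode handling inside the container - expect a count of 1:

ssh root@$CONTROLLER docker exec octavia_worker grep -c decode \
  /var/lib/kolla/venv/lib/python3.6/site-packages/octavia/common/jinja/templates/user_data_config_drive.template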
This was one of the most frustrating parts of the process, partly because the configurations above weren't yet known to me while I repeatedly tested building and deploying amphorae. The final process is satisfyingly simple.
I use a centos-8 docker container to do the work, which then builds a qcow image. Here's the Dockerfile:
FROM centos:8
RUN dnf -y install epel-release
RUN dnf -y install debootstrap python3 qemu-img sudo git yum-utils gdisk kpartx e4fsprogs dosfstools
RUN python3 -m pip install diskimage-builder
WORKDIR /octavia/diskimage-create
ENV CLOUD_INIT_DATASOURCES="ConfigDrive, OpenStack"
CMD ./diskimage-create.sh -a amd64 -i centos-minimal -s 3
I actually use my own custom centos:8 image, 192.168.127.220:4001/feralcoder/centos-feralcoder:8, with its yum repos pointed at my own local repos. It's my base image for everything, and it helps a lot. I recommend doing the same - see the Container HOWTO referenced above.
The diskimage-create.sh tool is part of the octavia codebase, and you'll map it into the container below:
mkdir $AMPHORA_BUILD_DIR && cd $AMPHORA_BUILD_DIR
mkdir amphora-image-amd64-docker
cp $PATH_TO_DOCKERFILE amphora-image-amd64-docker/Dockerfile.Centos.amd64
docker build amphora-image-amd64-docker -f amphora-image-amd64-docker/Dockerfile.Centos.amd64 \
  -t amphora-image-build-amd64-centos
docker run --privileged -v /dev:/dev -v /proc:/proc -v $CODE_DIR/openstack/octavia/:/octavia \
  -ti amphora-image-build-amd64-centos
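When the build container exits, the image should land in the octavia checkout you mapped in. A quick check (qemu-img only if it's installed on the host):

ls -lh $CODE_DIR/openstack/octavia/diskimage-create/amphora-x64-haproxy.qcow2
qemu-img info $CODE_DIR/openstack/octavia/diskimage-create/amphora-x64-haproxy.qcow2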
Upload the amphora image:
. /etc/kolla/admin-openrc.sh
export OS_USERNAME=octavia
export OS_PASSWORD=$(grep octavia_keystone_password /etc/kolla/passwords.yml | awk '{ print $2}')
export OS_PROJECT_NAME=service
export OS_TENANT_NAME=service
openstack image create amphora-x64-haproxy.qcow2 \
  --container-format bare \
  --disk-format qcow2 \
  --private \
  --tag amphora \
  --file $CODE_DIR/openstack/octavia/diskimage-create/amphora-x64-haproxy.qcow2
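Verify the image registered, still using the octavia credentials sourced above:

openstack image list --tag amphora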
Install the Octavia client and test:
pip3 install python-octaviaclient
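A basic smoke test - the list should come back empty but without errors, and creating a load balancer should spawn an amphora VM. 'demo-subnet' is a placeholder for one of your own tenant subnets:

openstack loadbalancer list
# Replace demo-subnet with an existing tenant subnet in your cloud
openstack loadbalancer create --name test-lb --vip-subnet-id demo-subnet
openstack loadbalancer show test-lb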
This site was very helpful as I worked through networking issues: https://leftasexercise.com/2020/05/01/openstack-octavia-architecture-and-installation