Home - HewlettPackard/sdflex-ironic-driver GitHub Wiki

Welcome to the sdflex-ironic-driver and sdflexutils wiki!

sdflex-ironic-driver is a package that provides the sdflex-redfish hardware type for OpenStack Ironic, which manages HPE Superdome Flex 280 and HPE Superdome Flex Servers in cloud environments, including OS provisioning. sdflex-ironic-driver uses the sdflexutils library to perform the required operations on the Superdome Flex through the RMC Redfish management interface. This package enables the following value-add features in addition to the features of the Ironic redfish hardware type.

OpenStack Ironic value-add features for the Superdome Flex 280 Server:

Feature | 1.5.x | 1.4.x
Support for Event subscription management | New | X
Support for Anaconda (ISO) based OS provisioning for RHEL OSes | New | X
Support for OpenStack Ironic release Zed | New | X
Support for OpenStack Ironic release Wallaby | X | New
RAID configuration support for the Tri-Mode controller Adaptec SmartRAID Ultra 3258P-32i /e, including NVMe drives | Parity | New
Hardware disk erase support for the Tri-Mode controller Adaptec SmartRAID Ultra 3258P-32i /e, including NVMe drives | Parity | New
Deploy Steps support for RAID configuration, SUM based IO firmware update, and BIOS settings | Parity | New
DHCP Less OS provisioning | Parity | Parity from 1.3.x
Support for Standalone Ironic | Parity | Parity from 1.3.x
Boot from Pre Provisioned FC Volume | Parity | Parity from 1.3.x
OS provisioning with UEFI HTTP boot configured | Parity | Parity from 1.2.x
Virtual Media based OS provisioning | Parity | Parity from 1.2.x
IO Firmware update using SUM | Parity | Parity from 1.2.x
Secure boot | Parity | Parity from 1.2.x
Directed LAN boot | Parity | Parity from 1.2.x
BIOS settings | Parity | Parity from 1.2.x
Firmware update | Parity | Parity from 1.2.x
RAID configuration | Parity | Parity from 1.2.x
Hardware disk erase | Parity | Parity from 1.2.x

OpenStack Ironic value-add features for the Superdome Flex Server:

Feature | 1.5.x | 1.4.x
Support for Event subscription management | New | X
Support for Anaconda (ISO) based OS provisioning for RHEL OSes | New | X
Support for OpenStack Ironic release Zed | New | X
Support for OpenStack Ironic release Wallaby | X | New
Deploy Steps support for RAID configuration, SUM based IO firmware update, and BIOS settings | Parity | New
Virtual Media based OS provisioning | Parity | New
DHCP Less OS provisioning | Parity | Parity from 1.3.x
Support for Standalone Ironic | Parity | Parity from 1.3.x
Boot from Pre Provisioned FC Volume | Parity | Parity from 1.3.x
OS provisioning with UEFI HTTP boot configured | Parity | Parity from 1.2.x
IO Firmware update using SUM | Parity | Parity from 1.2.x
Secure boot | Parity | Parity from 1.2.x
Directed LAN boot | Parity | Parity from 1.2.x
BIOS settings | Parity | Parity from 1.2.x
Firmware update | Parity | Parity from 1.2.x
RAID configuration | Parity | Parity from 1.2.x
Hardware disk erase | Parity | Parity from 1.2.x

Prerequisites

Install and configure OpenStack Ironic. For the procedure, refer to the section 'Installing and configuring OpenStack Ironic' in the 'HPE Superdome Flex Server OS Installation Guide' or the 'Superdome Flex 280 Server OS Installation Guide'.

sdflex-ironic-driver & sdflexutils version | OpenStack Ironic version
1.5.x | Zed
1.4.x | Wallaby

Installing sdflexutils and sdflex-ironic-driver

1. Install sdflexutils PyPI module:

 pip install sdflexutils

2. Install sdflex-ironic-driver PyPI module:

 pip install sdflex-ironic-driver

Enabling OpenStack Ironic with the sdflex-redfish hardware type

1. Add the following to the existing ironic.conf configuration.

 [DEFAULT]
 enabled_management_interfaces = redfish,sdflex-redfish 
 enabled_power_interfaces = redfish,sdflex-redfish 
 enabled_boot_interfaces = pxe,sdflex-redfish 
 enabled_bios_interfaces = no-bios,sdflex-redfish 
 enabled_hardware_types = redfish,sdflex-redfish

2. Restart the OpenStack Ironic conductor service.

3. Run the following command to ensure sdflex-redfish is listed.

 openstack baremetal driver list

Installing the Operating System using OpenStack Ironic

This procedure applies to all supported versions of RHEL, SLES, Oracle Linux, Oracle Virtual Server, Windows 2019 and Windows 2016 operating systems.
1. Enroll the ironic node using the following command.

 openstack baremetal node create --driver sdflex-redfish --driver-info redfish_address=https://<RMC-IP> --driver-info redfish_system_id=/redfish/v1/Systems/<PARTITION-ID> --driver-info redfish_username=<ADMIN-USER> --driver-info redfish_password=<PASSWORD>

2. Obtain the node UUID using the following command.

 openstack baremetal node list

3. Set redfish_verify_ca to False using the following command.

 openstack baremetal node set <NODE-UUID> --driver-info redfish_verify_ca="False"

4. Create the Ironic port.

 openstack baremetal port create --node <NODE-UUID> <MAC-ADDRESS-CHASSISMAC>

NOTE: Provide the address for the chassis NIC that is connected to the network.

For example:

 openstack baremetal port create --node e15d658a-8dcb-42a1-b1d4-dcad8be5188c 08:00:69:11:22:33

5. Create a network port.

 openstack port create --mac-address <MAC-ADDRESS-CHASSISMAC> --network <NETWORK> <PORT NAME>

NOTE: Provide the address for the chassis NIC that is connected to the network.

For example:

 openstack port create --mac-address 08:00:69:11:22:33 --network ironic VIFport

6. Associate the neutron port (VIFport) to the Ironic port.

 openstack baremetal node vif attach --port-uuid <IRONIC-PORT> <NODE-UUID> <NEUTRON-PORT>

For example:

 openstack baremetal node vif attach --port-uuid 9415296c-a25e-4d18-8f88-0d9f4d0eab34 138ded88-6d10-4d05-8a97-0e2875d7377e 734efc0c-bad2-4fa6-b1c0-644d1b63a501

7. Set the boot_mode to uefi.

 openstack baremetal node set <NODE-UUID> --property capabilities='boot_mode:uefi'

8. Set the hardware details to Ironic node properties.

 openstack baremetal node set <NODE-UUID> --property cpus=<NUMBER-CPUs> --property memory_mb=<MEMORY-SIZE> --property local_gb=<LOCAL-STORAGE-SIZE> --property cpu_arch=<CPU-ARCHITECTURE>

For example:

 openstack baremetal node set 138ded88-6d10-4d05-8a97-0e2875d7377e --property cpus=4 --property memory_mb=761408 --property local_gb=20 --property cpu_arch=x86_64

NOTE: Ironic Inspector can be configured to fetch hardware details automatically and populate the Ironic node properties. If Ironic Inspector is configured, the following command can be used.

 openstack baremetal node inspect <NODE-UUID>

9. Set the Glance IDs of deploy images (Ramdisk and kernel).

 openstack baremetal node set <NODE-UUID> --driver-info deploy_ramdisk=<RAMDISK-GLANCE-ID> --driver-info deploy_kernel=<KERNEL-GLANCE-ID>

For example:

 openstack baremetal node set 138ded88-6d10-4d05-8a97-0e2875d7377e --driver-info deploy_ramdisk=b28e2d32-806e-46ad-8c1b-1f5e1b764bad --driver-info deploy_kernel=a9a82fc6-85e7-4b23-b54c-fcb022ef5c9a

10. Set the user OS image details. Set the image_source value with the respective OS image glance ID in the instance_info of the node.

 openstack baremetal node set <NODE-UUID> --instance-info image_source=<GLANCE-ID-USER-OS-IMAGE> --instance-info root_gb=<SIZE>

For example:

 openstack baremetal node set 138ded88-6d10-4d05-8a97-0e2875d7377e --instance-info image_source=93d99e9d-80ec-4612-81b7-a53224c5584d --instance-info root_gb=25

NOTE: In case of Boot from Pre-provisioned FC Volume, this image source is still needed for validation and will not be used for provisioning.

11. Perform node validate.

 openstack baremetal node validate <NODE-UUID>

12. Move the ironic node provision state to manage.

 openstack baremetal node manage <NODE-UUID>

13. Move ironic node provision state to provide.

 openstack baremetal node provide <NODE-UUID>

NOTE: If automated_clean is set to true in /etc/ironic/ironic.conf, an automated clean will be performed.

14. Provision OS.

 openstack baremetal node deploy <NODE-UUID>

Once OS provisioning is complete, the ironic node provision state moves to active and the OS boot is initiated.

NOTE: Post OS install tasks are performed when a configdrive is configured.

Provisioning OS with secure boot enabled

1. Enroll the OpenStack Ironic node:

 openstack baremetal node create --driver sdflex-redfish --driver-info redfish_address=https://<RMC IP> --driver-info redfish_system_id=/redfish/v1/Systems/<Partition ID> --driver-info redfish_username=<admin user> --driver-info redfish_password=<admin password>

NOTE: Both IPv4 and IPv6 are supported for RMC IP.

To verify that the node is enrolled, use the following command:

 openstack baremetal node list

2. Set redfish_verify_ca to False:

 openstack baremetal node set <Node UUID> --driver-info redfish_verify_ca="False"

3. Create the OpenStack Ironic port:

 openstack baremetal port create --node <Node UUID> <MAC address of the chassis NIC connected to network>

For example:

 openstack baremetal port create --node e482148c-20c9-4362-bbbf-22247b227f9d 08:00:69:17:ac:7a

4. Create a neutron port:

 openstack port create --mac-address <MAC address of the chassis NIC connected to network> --network <network> <Port name>

For example:

 openstack port create --mac-address 08:00:69:17:ac:7a --network private VIFport

5. Associate the neutron port (VIFport) to the OpenStack Ironic port:

 openstack baremetal node vif attach --port-uuid <IRONIC-PORT> <NODE-UUID> <NEUTRON-PORT>

For example:

 openstack baremetal node vif attach --port-uuid 9415296c-a25e-4d18-8f88-0d9f4d0eab34 138ded88-6d10-4d05-8a97-0e2875d7377e 734efc0c-bad2-4fa6-b1c0-644d1b63a501

6. Set the boot_mode and boot_interface:

 openstack baremetal node set <Node UUID> --property capabilities='boot_mode:uefi'
 openstack baremetal node set <Node UUID> --boot-interface sdflex-redfish

7. Set secure_boot:

 openstack baremetal node set <Node UUID> --instance-info capabilities='{"secure_boot": "true"}'

8. Set the hardware details to Ironic node properties:

 openstack baremetal node set <Node UUID> --property cpus=<No. of CPUs> --property memory_mb=<Memory size> --property local_gb=<local storage size> --property cpu_arch=<cpu architecture>

For example:

 openstack baremetal node set e482148c-20c9-4362-bbbf-22247b227f9d --property cpus=4 --property memory_mb=2048 --property local_gb=20 --property cpu_arch=x86_64

NOTE: Ironic Inspector can be configured to fetch hardware details automatically and populate the Ironic node properties. For example:

 openstack baremetal node inspect <Node UUID>

9. Set the Glance IDs of deploy images (ramdisk and kernel).

 openstack baremetal node set <Node UUID> --driver-info deploy_ramdisk=<Glance ID of deploy ramdisk> --driver-info deploy_kernel=<Glance ID of deploy kernel>

For example:

 openstack baremetal node set 138ded88-6d10-4d05-8a97-0e2875d7377e --driver-info deploy_ramdisk=b28e2d32-806e-46ad-8c1b-1f5e1b764bad --driver-info deploy_kernel=a9a82fc6-85e7-4b23-b54c-fcb022ef5c9a

10. Set the user OS image details:

 openstack baremetal node set <Node UUID> --instance-info image_source=<Glance ID of secure bootable user OS image> --instance-info root_gb=<size>

For example:

 openstack baremetal node set 138ded88-6d10-4d05-8a97-0e2875d7377e --instance-info image_source=93d99e9d-80ec-4612-81b7-a53224c5584d --instance-info root_gb=25

11. Perform node validation:

 openstack baremetal node validate <Node UUID>

12. Move the ironic node provision state to manage:

 openstack baremetal node manage <Node UUID>

13. Move the ironic node provision state to provide:

 openstack baremetal node provide <Node UUID>

NOTE: If automated_clean is set to true in ironic.conf, an automated clean will be performed.

14. Provision the OS:

 openstack baremetal node deploy <Node UUID>

Once OS provisioning is complete, the ironic node provision state moves to active and the OS boot is initiated. The OS boots up in secure boot mode.

NOTE: Unprovisioning the node (undeploy) disables secure boot on the partition and moves the node state to available.

 openstack baremetal node undeploy <Node UUID>

Provisioning OS with Directed LAN boot configured

1. Complete steps 1 through 8 of one of the preceding OS provisioning tasks.

2. Set enable_directed_lanboot to True in the driver_info of the ironic node.

 openstack baremetal node set <node-uuid> --driver-info enable_directed_lanboot=True

For Example:

 openstack baremetal node set 138ded88-6d10-4d05-8a97-0e2875d7377e --driver-info enable_directed_lanboot=True

3. Set boot_file_path in the driver_info of the ironic node.

 openstack baremetal node set <node-uuid> --driver-info boot_file_path='{"UrlBootFile": "<tftp bootfile url>"}'

For Example:

 openstack baremetal node set 138ded88-6d10-4d05-8a97-0e2875d7377e --driver-info boot_file_path='{"UrlBootFile":"tftp://1.2.3.4/tftpboot/bootx64.efi"}'

4. Continue with the remaining steps (steps 9 through 14) of the provision task. Once provisioning is complete, the OS boots up with Directed LAN boot configured.

Managing BIOS configuration

BIOS settings can be applied to the node using a manual cleaning step. Provide the required BIOS settings through the settings argument, which contains a list of BIOS options to apply; each BIOS option is a dictionary with name and value keys.

Sample name and value pairs are as follows:

 "BootSlots": "3,5",
 "UrlBootFile": "tftp://1.2.3.4/tftp/bootx.efi",
 "UrlBootFile2": "tftp://1.2.3.5/tftp/bootx.efi"

Procedure
1. Create the ironic node and change the provision-state to manage.
2. Apply BIOS setting:
Using JSON string:

 openstack baremetal node clean <node> --clean-steps '[{"interface": "bios", "step": "apply_configuration", "args": {"settings": [{"name": "name", "value": "value"}, {"name": "name", "value": "value"}]}}]'

Using a file (prepare a file with json data):

 openstack baremetal node clean <node> --clean-steps my-clean-steps.json
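
The clean-steps file contains the same JSON structure as the inline string. As a minimal sketch, such a file can be generated programmatically; the settings below reuse the sample BootSlots and UrlBootFile values from this page and are illustrative only:

```python
import json

# Clean step payload: one "bios" interface step applying a list of
# name/value BIOS options (values are the samples shown above).
clean_steps = [{
    "interface": "bios",
    "step": "apply_configuration",
    "args": {
        "settings": [
            {"name": "BootSlots", "value": "3,5"},
            {"name": "UrlBootFile", "value": "tftp://1.2.3.4/tftp/bootx.efi"},
        ]
    }
}]

# Write the file to pass with --clean-steps.
with open("my-clean-steps.json", "w") as f:
    json.dump(clean_steps, f, indent=2)
```

The resulting my-clean-steps.json can then be passed to the openstack baremetal node clean command.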

After cleaning the Ironic node, check the BIOS configuration. For more information, see Retrieving BIOS settings.

Retrieving BIOS settings

1. Retrieve the BIOS configuration:

 openstack baremetal node bios setting list <node-uuid>

For example:

 openstack baremetal node bios setting list 138ded88-6d10-4d05-8a97-0e2875d7377e

2. To get a specified BIOS setting:

 openstack baremetal node bios setting show <node-uuid> <setting-name>

For example:

 openstack baremetal node bios setting show 138ded88-6d10-4d05-8a97-0e2875d7377e BootSlots

NOTE: If -f json is appended to the above commands, the BIOS settings are returned in JSON format.

Updating Superdome Flex Server firmware

1. Copy the following Python code into a script sdflex-firmware-update.py and save it.

 import argparse
 import logging

 from sdflexutils import client

 LOG = logging.getLogger()
 LOG.setLevel(logging.DEBUG)
 stream_handler = logging.StreamHandler()
 LOG.addHandler(stream_handler)


 def str2bool(value):
     # argparse passes option values as strings; bool('False') would be
     # True, so convert "true"/"false" strings explicitly.
     return str(value).lower() in ('true', '1', 'yes')


 parser = argparse.ArgumentParser(
     formatter_class=argparse.ArgumentDefaultsHelpFormatter)
 parser.add_argument('rmcip',
                     help='IP address of the Superdome Flex RMC')
 parser.add_argument('username',
                     help='User name of the Superdome Flex RMC')
 parser.add_argument('password',
                     help='User password of the Superdome Flex RMC')
 parser.add_argument('fw_bundle_url',
                     help='URL of the firmware bundle')
 parser.add_argument('--reinstall', type=str2bool, default=False,
                     help='Force reinstall firmware')
 parser.add_argument('--exclude_npar_fw', type=str2bool, default=False,
                     help='Exclude updating nPar firmware')
 args = parser.parse_args()

 rmc_ip = 'https://' + args.rmcip
 partition = 'redfish/v1/Systems'
 client_obj = client.SDFlexClient(rmc_ip, args.username, args.password,
                                  partition)
 client_obj.update_firmware(args.fw_bundle_url, args.reinstall,
                            args.exclude_npar_fw)

2. Run the utility sdflex-firmware-update.py with the following parameters using Python 3. This command updates the complex firmware and nPar firmware in online and offline modes. For more details on the firmware update feature of the Superdome Flex server, refer to the HPE Superdome Flex Server Administration Guide. Make sure that the sdflexutils version is 1.0.1 or later for this feature to work.

 python sdflex-firmware-update.py <RMC IP Address> <administrator user name> <administrator password> <firmware bundle url>

The following parameters are optional; the default value for both is False:

 --reinstall <True/False> and --exclude_npar_fw <True/False>

Adding components to the Service OS ramdisk

In-band clean step features such as RAID configuration, disk erase, and SUM based firmware update require the following utilities and drivers to be present in the Service OS ramdisk (deploy ramdisk). Make sure that these components are installed in the Service OS ramdisk image.

  • sdflexutils library
  • The respective RAID controller utilities and drivers from the HPE Superdome Flex I/O Service Pack (for RAID and disk erase)
  • The respective drivers and utilities for the IO cards. Refer to the release specific HPE Superdome Flex Server I/O Service Pack ISO.
  • Other required utilities/packages: lspci, unzip, strings (if not already present in the ramdisk)
Sample steps for installing these components into the Service OS ramdisk:
  1. Open the deploy ramdisk
    1. Copy the deploy ramdisk to the current directory
    2. Create a mount directory and unpack the deploy ramdisk into the mount directory
       mkdir mount
       cd mount
       sudo gunzip < <original_ramdisk_name> | sudo cpio -i --make-directories
  2. Copy and install the required components
    1. Download the sdflexutils PyPI module
    2. Copy the respective driver and utility rpms, based on the deploy ramdisk OS, from the HPE Superdome Flex Server I/O Service Pack labeled "Software—CD". Add the following:
      1. For "HPE 3154 ‐ 8e, 3162-8i /e RAID Controllers" or "Adaptec SmartRAID Ultra 3258P-32i /e RAID Controller":
        1. Linux HPE Smart Storage Administrator (HPE SSA) CLI for Linux 64-bit
        2. HPE Superdome Flex SmartRAID 3100 (64-bit) Driver
      2. For "HPE 9361 ‐ 4i RAID Controller":
        1. HPE Superdome Flex MegaRAID Storage Administrator StorCLI for Linux 64-bit
        2. HPE Superdome Flex MegaRAID SAS 9361-4i controller Driver
    3. Perform chroot, install sdflexutils and the required rpms, and then exit
       sudo chroot .
       pip3 install sdflexutils
       rpm -ivh <respective rpm>
       rpm -ivh <respective rpm>
       exit
  3. Re-pack the deploy ramdisk. The following command generates the new deploy ramdisk with the given name
     sudo find . | sudo cpio --quiet -H newc -o | sudo gzip -9 -n > ../<new_ramdisk_name>

RAID Configuration

For more information about in-band RAID functionality, see "https://docs.openstack.org/ironic/zed/admin/raid.html#raid".

Here is an example of performing clean steps for RAID creation and deletion with ssacli based RAID controllers when the OpenStack Ironic node is in the manageable state.

Create RAID (ssacli based):
1. Create a JSON file set-raidconfig.json to specify the required target RAID configuration. The following can be used as sample contents. Note that an optional parameter interface_type can also be set to sas or sata; however, to create RAID volumes with NVMe drives, exclude this parameter.

   {
     "logical_disks": [
       {
         "size_gb": 100,
         "raid_level": "1",
         "controller": "MSCC SmartRAID 3154-8e in Slot 2085",
         "physical_disks": ["CN1:1:2", "CN1:1:3"],
         "is_root_volume": true
       }
     ]
   }

2. Create a json file create-raid.json with following contents

   [{
    "interface": "raid",
    "step": "create_configuration"
   }]

3. Set target-raid-config to the node

 openstack baremetal node set --target-raid-config ./set-raidconfig.json <node UUID>

4. Perform clean step for raid creation using the command

 openstack baremetal node clean <node> --clean-steps ./create-raid.json

Delete RAID (ssacli based):
1. Create a json file delete-raid.json with following contents

   [{
    "interface": "raid",
    "step": "delete_configuration"
   }]

2. Set raid-interface to the node

 openstack baremetal node set --raid-interface agent <node UUID>

3. Perform clean step for raid deletion using the command

 openstack baremetal node clean <node> --clean-steps ./delete-raid.json

Here is an example of performing clean steps for RAID creation and deletion with storcli based RAID controllers when the OpenStack Ironic node is in the manageable state.

Create RAID (storcli based):
1. Create a JSON file set-raidconfig.json with the following contents

   {
     "logical_disks": [
       {
         "size_gb": 100,
         "raid_level": "1",
         "disk_type": "ssd",
         "interface_type": "sata",
         "controller": 0,
         "physical_disks": ["252:0", "252:1"]
       }
     ]
   }

2. Create a json file create-raid.json with following contents

   [{
    "interface": "raid",
    "step": "create_configuration"
   }]

3. Set target-raid-config to the node

 openstack baremetal node set --target-raid-config set-raidconfig.json <node UUID>

4. Perform clean step for raid creation using the command

 openstack baremetal node clean <node> --clean-steps ./create-raid.json

Delete RAID (storcli based):
1. Create a json file delete-raid.json with following contents

   [{
    "interface": "raid",
    "step": "delete_configuration"
   }]

2. Set raid-interface to the node

 openstack baremetal node set --raid-interface agent <node UUID>

3. Perform clean step for raid deletion using the command

 openstack baremetal node clean <node> --clean-steps ./delete-raid.json

NOTE:

  • Only ssacli based RAID configuration and disk erase are applicable to Superdome Flex 280 servers.
  • If the Superdome Flex system has both ssacli based and storcli based controllers, specify the controller id in the target RAID configuration. If the controller id is not specified, an ssacli based controller is picked by default for RAID creation.
  • In Superdome Flex, each chassis can have a maximum of 4 disks; hence storcli based controllers support RAID levels that need 4 or fewer disks, whereas ssacli based controllers connected to a JBOD support all RAID levels.
  • Currently, with storcli based controllers, OpenStack Ironic does not take the number of disks as user input; it derives the number of disks required and proceeds with RAID creation.

Disk Erase

The sdflex-redfish hardware type supports the in-band clean step erase_devices. It erases all disks, including the disks visible to the OS and the raw disks visible to the respective storage controller utility.

The disk erasure via shred is used to erase disks visible to the OS and its implementation is available in Ironic Python Agent. To erase disks connected to the Storage Controller, the respective Storage controller utility is used. Storage controller utilities support various erase methods. For more information on supported erase methods, see the respective controller utility documentation.

The erase_devices clean step is supported when the agent ramdisk contains the sdflexutils library. It is performed as part of automated cleaning and is disabled by default. See the Ironic documentation on in-band vs out-of-band cleaning for more information on enabling or disabling a clean step.

The erase_devices clean step can also be performed manually when the node is in the manageable state, for example:

 openstack baremetal node clean <node UUID> --clean-steps '[{"interface": "deploy", "step": "erase_devices"}]'

Provisioning OS with UEFI HTTP boot configured

1. Pre-requisites:

  • A webserver running on the default port 80.
  • Superdome Flex server firmware bundle version 3.25.46 or later.
2. Add webserver address and location in ironic.conf
 [deploy]
 http_root = <HTTP root path>
 http_url = <HTTP server URL>

3. Copy a UEFI HTTP bootable bootloader to the http root directory
4. Create a grub directory inside the http root and add grub.cfg inside it with the following content

 set default=master
 set timeout=5
 set hidden_timeout_quiet=false
 menuentry "master"  {
 configfile /$net_default_mac.conf
 }

5. Complete steps 1 through 8 of one of the preceding OS provisioning tasks.

6. Enable UEFI HTTP Boot
 openstack baremetal node set <Node UUID> --driver-info enable_uefi_httpboot=True

7. Set boot_file_path

 openstack baremetal node set <Node UUID> --driver-info boot_file_path='{"UrlBootFile":"http://<http root server>/<bootloader> "}'

For example:

 openstack baremetal node set <Node UUID> --driver-info boot_file_path='{"UrlBootFile":"http://1.2.3.4/grubnetx64-2.02-Ubuntu.efi"}'

8. Download ‘uefi_http_boot_config.template’
9. Set uefi_http_boot_config.template as pxe_template

 openstack baremetal node set --driver-info pxe_template=<template-path> <Node UUID>

For example:

 openstack baremetal node set --driver-info pxe_template='/home/ubuntu/uefi_http_boot_config.template'  138ded88-6d10-4d05-8a97-0e2875d7377e

10. Continue with the remaining steps (steps 9 through 14) of the provision task. Once provisioning is complete, the OS boots up with UEFI HTTP Boot configured.

Provisioning OS using Virtual Media

1. Pre-requisites:

  • An NFS or CIFS shared folder accessible from the OpenStack Ironic system with write permission.
  • Mount the NFS or CIFS shared folder on the OpenStack Ironic system.
  • OpenStack Ironic setup with the Ussuri release or later
2. Configuration:
  • Add sdflex-redfish-vmedia as boot interface in ironic.conf and restart the ironic conductor service
 [DEFAULT]
 enabled_boot_interfaces = iscsi,direct,sdflex-redfish,sdflex-redfish-vmedia

3. Create the Ironic node with sdflex-redfish-vmedia as the boot interface and complete steps 1 through 8 of Installing the Operating System using OpenStack Ironic
4. Update driver-info with the remote image share details.
For NFS share:

 openstack baremetal node set <NODE-UUID> --driver-info remote_image_share_type='nfs'

For CIFS share:

 openstack baremetal node set <NODE-UUID> --driver-info remote_image_share_type='cifs'

5. For the NFS remote_image_share_type, add "remote_image_server" and "remote_image_share_root" in the driver-info. For example:

 openstack baremetal node set <NODE-UUID> --driver-info remote_image_server=1.2.3.4 --driver-info remote_image_share_root='/home/ubuntu/nfsfolder'

Note: Set the NFS server IP as "remote_image_server" and the folder where the NFS share is mounted as "remote_image_share_root".

6. For the CIFS remote_image_share_type, add "remote_image_share_root", "remote_image_server", "remote_image_user_name", "remote_image_user_password", and "image_share_root" in the driver-info. For example:

 openstack baremetal node set <NODE-UUID> --driver-info remote_image_server=1.2.3.4 --driver-info remote_image_share_root='/openstack' --driver-info image_share_root='/home/ubuntu/sambafolder' --driver-info remote_image_share_type='cifs' --driver-info remote_image_user_name='guest' --driver-info remote_image_user_password='guest'

Note: Set the CIFS server IP as "remote_image_server", the CIFS share path as "remote_image_share_root", the local folder where the CIFS share is mounted on the ironic setup as "image_share_root", the username to access CIFS as "remote_image_user_name", and the password to access CIFS as "remote_image_user_password".

7. Add the UEFI bootable bootloader location in the driver info. For example:

 openstack baremetal node set <NODE-UUID> --driver-info bootloader='http://1.2.3.4/bootloader.efi'

8. Add the required kernel_append_params in the instance info. For example:

 openstack baremetal node set <NODE-UUID> --instance-info kernel_append_params='earlyprintk=ttyS0,115200 console=ttyS0,115200 erst_disable ipa-debug=1'

9. Complete steps 9 through 14 of Installing the Operating System using OpenStack Ironic. This completes the OS provisioning through virtual media, and the installed OS boots up on the system.

IO firmware update using HPE Smart Update Manager

The sdflex-redfish hardware type supports IO firmware update using HPE Smart Update Manager (HPE SUM) as the clean step "update_firmware_sum".

Pre-requisites:

1. The SUM update log files are collected and stored as tar files on the conductor node. The default location for log data is "/var/log/ironic/deploy". To modify the default, edit ironic.conf and set deploy_logs_local_path.
   [agent]
   deploy_logs_local_path = /var/log/ironic/deploy

2. Restart ironic-conductor.
3. Create a json file with following contents, say 'update.json':

   [{"interface": "management",
     "step": "update_firmware_sum",
     "args": {"url": "http://2.2.2.2/bp-SuperdomeFlex-2020-09-05-00.iso",
              "checksum": "eb5810601c0cf12eabe2ead026f9af5b0ec7dbd07f165d0ebbae728b539bf9c0"
             }
   }]

url: URL for the HPE Superdome Flex I/O Service Pack ISO.
checksum: sha256 checksum of the HPE Superdome Flex I/O Service Pack ISO.
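
The checksum value can be computed locally before writing update.json. A minimal sketch using Python's standard hashlib (the file path in the usage note is a placeholder):

```python
import hashlib


def sha256_of(path, chunk_size=1024 * 1024):
    """Return the hex sha256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in fixed-size chunks so large ISOs do not load into memory.
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

For example, sha256_of('bp-SuperdomeFlex-2020-09-05-00.iso') returns the hex digest to place in the checksum field.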

4. Perform clean step for SUM based IO firmware update using following command:

   openstack baremetal node clean <node UUID> --clean-steps ./update.json

NOTE:

  • Once the IO firmware update completes, the node moves to the manageable state.
  • Logs are available at path deploy_logs_local_path.
  • Log file name format is node_uuid_cleaning_date-timestamp.tar.gz.

Boot from pre-provisioned Fiber Channel volume

1. Pre-requisites:

  • A Superdome Flex / Superdome Flex 280 server with at least one FC card connected to FC storage.
2. Configuration:
  • Add sdflex-redfish as deploy interface in ironic.conf and restart the ironic conductor service
 [DEFAULT]
 enabled_deploy_interfaces = fake,direct,sdflex-redfish

3. Attach the FC volume (pre-provisioned with an OS) to the Superdome Flex 280 nPar from the FC storage
4. Enroll the Superdome Flex 280 nPar as a baremetal node with "sdflex-redfish" as the driver and "sdflex-redfish" as the deploy interface, and follow steps 1 through 12 of Installing the Operating System using OpenStack Ironic
5. Update the baremetal node with bfpv=true in the driver-info

 openstack baremetal node set <NODE_UUID> --driver-info bfpv='true'

6. Add the root device hint of the FC volume, that is, the serial number of the volume attached to the baremetal node. The serial number can be obtained by performing inspection and introspection on the ironic node.

 openstack baremetal node set <NODE_UUID> --property root_device='{"serial":"600c0ff0003c74492ec0865f01000000"}'

7. Complete steps 13 and 14 of Installing the Operating System using OpenStack Ironic.
This boots the OS from the pre-provisioned FC volume on the Superdome Flex / Superdome Flex 280 nPar.

DHCP less OS Provisioning

Prerequisites:

  • No DHCP service should be running in the network, including neutron-dhcp.
  • A whole-disk qcow2 image must be used for image_source in instance-info.
  • OpenStack Victoria release or later
  • The simple-init element should be included when building the centos7 deploy ramdisk. Only a centos7 deploy ramdisk is to be used for this DHCP-less provisioning.
1. Ironic Configuration:
Add "sdflex-redfish-dhcpless" as a boot interface in ironic.conf and restart the ironic conductor service
 [DEFAULT]
 enabled_boot_interfaces = pxe,sdflex-redfish,sdflex-redfish-vmedia,sdflex-redfish-dhcpless

2. Enroll the Superdome Flex nPar as a baremetal node with "sdflex-redfish" as the driver and "sdflex-redfish-dhcpless" as the boot interface, and follow steps 1 through 4 and 7 through 10 of Installing the Operating System using OpenStack Ironic

3. In step 11, make sure to use a whole-disk image only.

4. Add the network data to the baremetal node.

 openstack baremetal node set <NODE_UUID> --network-data <network-data-script.json>

Sample contents of network-data-script.json

 {
    "links":[
       {
          "id":"enp1s0",
          "type":"phy",
          "ethernet_mac_address":"1a:2b:3c:4d:5e:6f",
          "mtu":1500
       }
    ],
    "networks":[
       {
          "id":"provisioning IPv4",
          "type":"ipv4",
          "link":"enp1s0",
          "ip_address":"1.2.3.4",
          "netmask":"255.255.255.0",
          "routes":[
             {
                "network":"1.2.3.0",
                "netmask":"255.255.255.0",
                "gateway":"1.2.3.1"
             },
             {
                "network":"0.0.0.0",
                "netmask":"0.0.0.0",
                "gateway":"1.2.3.1"
             }
          ],
          "network_id":""
       }
    ],
    "services":[
       {
          "type":"dns",
          "address":"1.2.3.1"
       }
    ]
 }
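Before setting the network data on the node, it can help to sanity-check the JSON. A minimal Python sketch (with a hypothetical `validate_network_data` helper; the sample mirrors network-data-script.json above) verifies that every network references a declared link:

```python
import json

# Hypothetical sanity check: every network must reference a link declared
# in "links"; raises ValueError on a dangling reference.
def validate_network_data(data):
    link_ids = {link["id"] for link in data.get("links", [])}
    for network in data.get("networks", []):
        if network["link"] not in link_ids:
            raise ValueError(f"network {network['id']!r} references "
                             f"unknown link {network['link']!r}")
    return True

# Trimmed sample content matching network-data-script.json above.
sample = json.loads("""
{
  "links": [{"id": "enp1s0", "type": "phy",
             "ethernet_mac_address": "1a:2b:3c:4d:5e:6f", "mtu": 1500}],
  "networks": [{"id": "provisioning IPv4", "type": "ipv4", "link": "enp1s0",
                "ip_address": "1.2.3.4", "netmask": "255.255.255.0",
                "routes": [], "network_id": ""}],
  "services": [{"type": "dns", "address": "1.2.3.1"}]
}
""")

validate_network_data(sample)
```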

5. Add the UEFI-bootable bootloader location in the driver-info.
For example:

 openstack baremetal node set <NODE-UUID> --driver-info bootloader='http://1.2.3.4/bootloader.efi'

6. Add the required kernel_append_params in the instance-info.
For example:

 openstack baremetal node set <NODE-UUID> --instance-info kernel_append_params='earlyprintk=ttyS0,115200 console=ttyS0,115200 erst_disable ipa-debug=1 ipv6.disable=1'

7. Complete steps 11 to 14 of Installing the Operating System using OpenStack Ironic.

Event subscription management

Event subscription management supports the following event subscription methods:

  • create_subscription
  • delete_subscription
  • get_all_subscriptions
  • get_subscription
Usage procedure:
1. Ironic Configuration:
Add "sdflex-redfish" as a vendor interface in ironic.conf and restart the ironic conductor service:
 [DEFAULT]
 enabled_vendor_interfaces = sdflex-redfish,no-vendor

2. Enroll the Superdome Flex nPar as a baremetal node with "sdflex-redfish" as the driver and "sdflex-redfish" as the vendor interface.
3. Move the provisioning state to manageable using the command:

 baremetal node manage <NODE UUID>

4. Obtain an OpenStack authentication token:

 TOKEN=$(openstack --os-cloud devstack-admin token issue -f value -c id)

5. Create a subscription:

 curl -i -X POST -H 'X-OpenStack-Ironic-API-Version: latest' -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' "http://<IRONIC-CONDUCTOR-IP>/baremetal/v1/nodes/<NODE-UUID>/vendor_passthru?method=create_subscription" -d '{"Destination": "https://<WEB-SERVER-IP>/<LOCATION-IN-THE-WEB-SERVER>", "Context": "<context>", "args": "<args>"}'

For Example:

 curl -i -X POST -H 'X-OpenStack-Ironic-API-Version: latest' -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' "http://2.2.2.2/baremetal/v1/nodes/fccf4d02-7b86-4211-975e-95bc584e29b5/vendor_passthru?method=create_subscription" -d '{"Destination": "https://2.2.2.3/event_logger", "Context": "openstack-cli", "args": "Redfish"}'

6. Get all subscriptions:

 curl -i -X GET -H 'X-OpenStack-Ironic-API-Version: latest' -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' "http://<IRONIC-CONDUCTOR-IP>/baremetal/v1/nodes/<NODE-UUID>/vendor_passthru?method=get_all_subscriptions"

For Example:

 curl -i -X GET -H 'X-OpenStack-Ironic-API-Version: latest' -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' "http://1.1.1.1/baremetal/v1/nodes/fccf4d02-7b86-4211-975e-95bc584e29b5/vendor_passthru?method=get_all_subscriptions"

7. Get a specific subscription:

 curl -i -X GET -H 'X-OpenStack-Ironic-API-Version: latest' -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' "http://<IRONIC-CONDUCTOR-IP>/baremetal/v1/nodes/<NODE-UUID>/vendor_passthru?method=get_subscription" -d '{"id": "<SUBSCRIPTION-ID>"}'

For Example:

 curl -i -X GET -H 'X-OpenStack-Ironic-API-Version: latest' -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' "http://1.1.1.1/baremetal/v1/nodes/fccf4d02-7b86-4211-975e-95bc584e29b5/vendor_passthru?method=get_subscription" -d '{"id": "de39420d4d714ef894007377548fdd1d"}'

8. Delete a subscription:

 curl -i -X DELETE -H 'X-OpenStack-Ironic-API-Version: latest' -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' "http://<IRONIC-CONDUCTOR-IP>/baremetal/v1/nodes/<NODE-UUID>/vendor_passthru?method=delete_subscription" -d '{"id": "<SUBSCRIPTION-ID>"}'

For Example:

 curl -i -X DELETE -H 'X-OpenStack-Ironic-API-Version: latest' -H "X-Auth-Token: $TOKEN" -H 'Content-Type: application/json' -H 'Accept: application/json' "http://1.1.1.1/baremetal/v1/nodes/fccf4d02-7b86-4211-975e-95bc584e29b5/vendor_passthru?method=delete_subscription" -d '{"id": "de39420d4d714ef894007377548fdd1d"}'

See Vendor Passthru Methods for more information.
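The curl calls above can also be issued programmatically. A minimal Python sketch, with a hypothetical `build_vendor_passthru_request` helper and placeholder values, constructs the same request using only the standard library (it builds the request object but does not send it):

```python
import json
import urllib.request

# Hypothetical helper: builds (but does not send) a vendor_passthru request,
# mirroring the headers used in the curl examples above.
def build_vendor_passthru_request(conductor, node_uuid, method, token,
                                  http_method="GET", payload=None):
    url = (f"http://{conductor}/baremetal/v1/nodes/{node_uuid}"
           f"/vendor_passthru?method={method}")
    headers = {
        "X-OpenStack-Ironic-API-Version": "latest",
        "X-Auth-Token": token,
        "Content-Type": "application/json",
        "Accept": "application/json",
    }
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(url, data=data, headers=headers,
                                  method=http_method)

# Placeholder values; substitute the real conductor IP, node UUID and token.
req = build_vendor_passthru_request(
    "2.2.2.2", "fccf4d02-7b86-4211-975e-95bc584e29b5",
    "create_subscription", "dummy-token", http_method="POST",
    payload={"Destination": "https://2.2.2.3/event_logger",
             "Context": "openstack-cli", "args": "Redfish"})
```

The request can then be sent with `urllib.request.urlopen(req)`; the other methods (get_subscription, delete_subscription, and so on) are built the same way by changing `method`, `http_method` and `payload`.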

Anaconda (ISO) based OS provisioning for RHEL OSes

Deployment with the ``anaconda`` deploy interface is supported by the ``sdflex-redfish`` hardware type and works with the ``pxe`` boot interface. See Deploy with anaconda deploy interface for more information

Usage procedure:
1. Ironic Configuration:
Add "anaconda" as a deploy interface in ironic.conf and restart the ironic conductor service:

 [DEFAULT]
 enabled_deploy_interfaces = direct,anaconda

2. Enroll the Superdome Flex nPar as a baremetal node with "sdflex-redfish" as the driver and "anaconda" as the deploy interface.
3. Complete steps 1 through 8 of Installing the Operating System using OpenStack Ironic.
4. Set boot_option to kickstart:

 openstack baremetal node set --instance-info capabilities='{"boot_option": "kickstart"}' <NODE-UUID>

5. Mount the required RHEL OS ISO onto a directory in a web server location.
6. Update the instance-info details:

 openstack baremetal node set --instance-info image_source=http://<WEB-SERVER-IP-AND-PORT>/<MOUNT-POINT-OF-THE-ISO>/ <NODE-UUID>
 openstack baremetal node set --instance-info root_gb=25 <NODE-UUID>
 openstack baremetal node set --instance-info kernel=http://<WEB-SERVER-IP-AND-PORT>/<MOUNT-POINT-OF-THE-ISO>/images/pxeboot/vmlinuz --instance-info ramdisk=http://<WEB-SERVER-IP-AND-PORT>/<MOUNT-POINT-OF-THE-ISO>/images/pxeboot/initrd.img <NODE-UUID>
 openstack baremetal node set --instance-info ks_template=http://<WEB-SERVER-IP-AND-PORT>/ks.cfg.template <NODE-UUID>

For Example:

 openstack baremetal node set --instance-info image_source=http://1.1.1.1:8010/RHEL85mnt/ fccf4d02-7b86-4211-975e-95bc584e29b5
 openstack baremetal node set --instance-info root_gb=25 fccf4d02-7b86-4211-975e-95bc584e29b5
 openstack baremetal node set --instance-info kernel=http://1.1.1.1:8010/RHEL85mnt/images/pxeboot/vmlinuz --instance-info ramdisk=http://1.1.1.1:8010/RHEL85mnt/images/pxeboot/initrd.img fccf4d02-7b86-4211-975e-95bc584e29b5
 openstack baremetal node set --instance-info ks_template=http://1.1.1.1:8010/ks.cfg.template fccf4d02-7b86-4211-975e-95bc584e29b5

NOTE:

  • See the sample ks.cfg.template.
  • There is no need to provide deploy_ramdisk and deploy_kernel in the driver_info, as this is Anaconda-based OS provisioning.
7. Continue with steps 11 to 14 of Installing the Operating System using OpenStack Ironic. Once provisioning is complete, the installed OS boots up on the node.
NOTE:
  • Anaconda-based provisioning is also supported for Directed LAN boot and Secure boot. The above procedure remains the same, apart from the specific steps stated in the respective sections on Directed LAN boot and Secure boot.
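All of the instance-info URLs in the example above derive from a single web-server base and ISO mount point. A minimal Python sketch (hypothetical `anaconda_instance_info` helper; sample values from the example) composes them consistently:

```python
# Hypothetical helper composing the instance-info values used for
# Anaconda-based provisioning from one web-server base and ISO mount point.
def anaconda_instance_info(web_server, mount_point, ks_template, root_gb=25):
    base = f"http://{web_server}/{mount_point}"
    return {
        "image_source": f"{base}/",
        "root_gb": root_gb,
        "kernel": f"{base}/images/pxeboot/vmlinuz",
        "ramdisk": f"{base}/images/pxeboot/initrd.img",
        "ks_template": f"http://{web_server}/{ks_template}",
    }

# Sample values taken from the example above.
info = anaconda_instance_info("1.1.1.1:8010", "RHEL85mnt", "ks.cfg.template")
```

Each key/value pair maps to one `openstack baremetal node set --instance-info` call, as shown in the procedure.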