Motr deployment using motr_setup on a three-node VM cluster

Prerequisites:

  • The VM should have enough data disks for the cluster configuration (data+parity+spare) and one metadata disk per IO service; the lsblk check below can confirm what is available. For example:
    • For 4+2+2 on three nodes, each node needs at least 2 (IO services) × 2 data disks plus 2 (IO services) × 1 metadata disk, i.e., a minimum of 6 disks.
    • For 1+0, a minimum of 2 disks is required: 1 data disk and 1 metadata disk. It is recommended to configure 6 disks (2 data disks and 1 metadata disk per IO service).
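A quick way to list the block devices available on a node (device names such as /dev/sdc vary per VM):

lsblk -d -o NAME,SIZE,TYPE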

Run the steps below on all nodes.

Install third-party components

curl -s http://cortx-storage.colo.seagate.com/releases/cortx/third-party-deps/rpm/install-cortx-prereq.sh | bash

Create passwordless SSH login between all the nodes; every node should have access to every other node.
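A minimal sketch of setting up passwordless SSH; root login and the srvnode-2/srvnode-3 hostnames are assumptions, so substitute your actual node names:

ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa   # generate a key pair (skip if ~/.ssh/id_rsa already exists)
ssh-copy-id root@srvnode-2                 # repeat for every other node in the cluster
ssh-copy-id root@srvnode-3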

Follow the steps below to deploy the Motr and Hare components on all nodes.

Add the yum repos for Motr, Hare, and Consul; replace the URLs with the required build.

yum-config-manager --add-repo=http://cortx-storage.colo.seagate.com/releases/cortx/github/integration-custom-ci/release/centos-7.8.2003/custom-build-510/cortx_iso/
yum-config-manager --add-repo=http://cortx-storage.colo.seagate.com/releases/cortx/github/integration-custom-ci/release/centos-7.8.2003/custom-build-510/3rd_party/
yum-config-manager --add-repo=http://cortx-storage.colo.seagate.com/releases/cortx/github/integration-custom-ci/release/centos-7.8.2003/custom-build-510/3rd_party/lustre/custom/tcp/
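Optionally confirm that the new repos are visible:

yum repolist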

Install cortx-motr, cortx-hare, and the dependent RPMs; run this on all nodes.

yum install -y cortx-motr cortx-hare cortx-py-utils  --nogpgcheck

Create a machine ID (required only on a VM, not on HW); run this on all nodes.

rm -f /etc/machine-id /var/lib/dbus/machine-id   # remove any stale machine IDs
dbus-uuidgen --ensure=/etc/machine-id            # generate a fresh machine ID
dbus-uuidgen --ensure                            # sync the D-Bus machine ID with it
systemctl status network                         # confirm the network service is healthy
cat /etc/machine-id                              # note this value for TMPL_MACHINE_ID

Modify the templates with the changes below on all nodes.

post_install

  • TMPL_MACHINE_ID - Machine ID; refer to the Create a machine ID step above for details
  • TMPL_NAME - Name of the node, e.g., srvnode-1, srvnode-2
  • TMPL_DATADEVICE_00 - First data device for group 0, default: /dev/sdc
  • TMPL_DATADEVICE_01 - Second data device for group 0, default: /dev/sdd
  • TMPL_METADATADEVICE_00 - First metadata device for group 0, default: /dev/sdb
  • TMPL_DATADEVICE_10 - First data device for group 1, default: /dev/sdf
  • TMPL_DATADEVICE_11 - Second data device for group 1, default: /dev/sdg
  • TMPL_METADATADEVICE_10 - First metadata device for group 1, default: /dev/sde
  • TMPL_CVG_NR_GROUP - Number of disk groups, default: 2
  • TMPL_CLUSTER_ID - Cluster ID, default: 5c427765-ecf5-4387-bfa4-d6d53494b159
  • TMPL_POOL_DATA - Pool configuration for data units, default: 4
  • TMPL_POOL_PARITY - Pool configuration for parity units, default: 2
  • TMPL_POOL_SPARE - Pool configuration for spare units, default: 0

prepare

  • TMPL_MACHINE_ID - Machine ID; refer to the Create a machine ID step above for details
  • TMPL_NAME - Name of the node, e.g., srvnode-1, srvnode-2
  • TMPL_DATADEVICE_00 - First data device for group 0, default: /dev/sdc
  • TMPL_DATADEVICE_01 - Second data device for group 0, default: /dev/sdd
  • TMPL_METADATADEVICE_00 - First metadata device for group 0, default: /dev/sdb
  • TMPL_DATADEVICE_10 - First data device for group 1, default: /dev/sdf
  • TMPL_DATADEVICE_11 - Second data device for group 1, default: /dev/sdg
  • TMPL_METADATADEVICE_10 - First metadata device for group 1, default: /dev/sde
  • TMPL_CVG_NR_GROUP - Number of disk groups, default: 2
  • TMPL_IFACE_TYPE - Interface type (tcp, o2ib, etc.), default: tcp
  • TMPL_INTERFACE - Data network interface, default: eth1
  • TMPL_XPORT_TYPE - Transport type (lnet or libfabric), default: lnet
  • TMPL_CLUSTER_ID - Cluster ID, default: 5c427765-ecf5-4387-bfa4-d6d53494b159
  • TMPL_POOL_DATA - Pool configuration for data units, default: 4
  • TMPL_POOL_PARITY - Pool configuration for parity units, default: 2
  • TMPL_POOL_SPARE - Pool configuration for spare units, default: 0

config

  • TMPL_TYPE - Node type (HW or VM), default: VM

test

  • TMPL_HOSTNAME - Hostname of the current node (output of hostname --fqdn)

Run the mini provisioner on all nodes

/opt/seagate/cortx/motr/bin/motr_setup post_install --config yaml:///opt/seagate/cortx/motr/conf/motr.post_install.tmpl
/opt/seagate/cortx/motr/bin/motr_setup prepare --config yaml:///opt/seagate/cortx/motr/conf/motr.prepare.tmpl
/opt/seagate/cortx/motr/bin/motr_setup config --config yaml:///opt/seagate/cortx/motr/conf/motr.config.tmpl
/opt/seagate/cortx/motr/bin/motr_setup test --config yaml:///opt/seagate/cortx/motr/conf/motr.test.tmpl

Create the CDF file

  • Create a CDF file; refer to the three-node cluster definition file (threenode.yaml) below for details.
  • Bootstrap the cluster:

# Run this command only on the primary node
hctl bootstrap --mkfs /root/threenode.yaml
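After bootstrap, verify the cluster state with hctl status (provided by cortx-hare); all services should report as started:

hctl status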

Run the m0crate IO test

# Run these commands only on the primary node

dd if=/dev/urandom of=/tmp/128M bs=1M count=128
/opt/seagate/cortx/hare/libexec/m0crate-io-conf > /tmp/m0crate-io.yaml
m0crate -S /tmp/m0crate-io.yaml
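Once testing is complete, the cluster can be stopped from the primary node with hctl shutdown (also provided by cortx-hare):

hctl shutdown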

Three-node cluster definition file

Update the hostname, data_iface, and data devices to match your environment; a sketch for substituting the hostname placeholders follows the file.

[root@ssc-vm-1409 ~]# cat /root/threenode.yaml
# Cluster Description File (CDF).
nodes:
  - hostname: {HOSTNAME1} # [user@]hostname
    data_iface: eth1        # name of data network interface
    m0_servers:
      - runs_confd: true
        io_disks:
          data: []
      - io_disks:
          meta_data: /dev/vg_srvnode-1_md1/lv_raw_md1
          data:
            - /dev/sdc
            - /dev/sdd
      - io_disks:
          meta_data: /dev/vg_srvnode-1_md2/lv_raw_md2
          data:
            - /dev/sdf
            - /dev/sdg
    m0_clients:
      s3: 0         # number of S3 servers to start
      other: 2      # max quantity of other Motr clients this host may have
  - hostname: {HOSTNAME2} # [user@]hostname
    data_iface: eth1        # name of data network interface
    m0_servers:
      - runs_confd: true
        io_disks:
          data: []
      - io_disks:
          meta_data: /dev/vg_srvnode-2_md1/lv_raw_md1
          data:
            - /dev/sdc
            - /dev/sdd
      - io_disks:
          meta_data: /dev/vg_srvnode-2_md2/lv_raw_md2
          data:
            - /dev/sdf
            - /dev/sdg
    m0_clients:
      s3: 0         # number of S3 servers to start
      other: 2      # max quantity of other Motr clients this host may have
  - hostname: {HOSTNAME3} # [user@]hostname
    data_iface: eth1        # name of data network interface
    m0_servers:
      - runs_confd: true
        io_disks:
          data: []
      - io_disks:
          meta_data: /dev/vg_srvnode-3_md1/lv_raw_md1
          data:
            - /dev/sdc
            - /dev/sdd
      - io_disks:
          meta_data: /dev/vg_srvnode-3_md2/lv_raw_md2
          data:
            - /dev/sdf
            - /dev/sdg
    m0_clients:
      s3: 0         # number of S3 servers to start
      other: 2      # max quantity of other Motr clients this host may have
pools:
  - name: the pool
    type: sns  # optional; supported values: "sns" (default), "dix", "md"
    data_units: 4
    parity_units: 2
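A minimal sketch of substituting the hostname placeholders in the CDF; the FQDNs below are hypothetical and must be replaced with your actual node names:

# Hypothetical node FQDNs, replace with the output of hostname --fqdn on each node
H1=node1.example.com; H2=node2.example.com; H3=node3.example.com
sed -i "s#{HOSTNAME1}#$H1#; s#{HOSTNAME2}#$H2#; s#{HOSTNAME3}#$H3#" /root/threenode.yaml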

Recommended default values that can be substituted into the templates

MACHINEID1=`cat /etc/machine-id`
CLUSTER_ID=5c427765-ecf5-4387-bfa4-d6d53494b159
HOSTNAME1=`hostname`
nr_grp="2"
data="4"
parity="2"
spare="0"

sed -i "s#TMPL_MACHINE_ID#$MACHINEID1#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_NAME#srvnode-1#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_DATADEVICE_00#/dev/sdc#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_DATADEVICE_01#/dev/sdd#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_METADATADEVICE_00#/dev/sdb#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_DATADEVICE_10#/dev/sdf#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_DATADEVICE_11#/dev/sdg#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_METADATADEVICE_10#/dev/sde#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_CVG_NR_GROUP#'$nr_grp'#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_CLUSTER_ID#$CLUSTER_ID#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_POOL_DATA#$data#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_POOL_PARITY#$parity#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl
sed -i "s#TMPL_POOL_SPARE#$spare#" /opt/seagate/cortx/motr/conf/motr.post_install.tmpl

sed -i "s#TMPL_MACHINE_ID#$MACHINEID1#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_DATADEVICE_00#/dev/sdc#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_DATADEVICE_01#/dev/sdd#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_METADATADEVICE_00#/dev/sdb#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_DATADEVICE_10#/dev/sdf#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_DATADEVICE_11#/dev/sdg#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_CVG_NR_GROUP#'$nr_grp'#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_METADATADEVICE_10#/dev/sde#" /opt/seagate/cortx/motr/conf/motr.post_prepare.tmpl
sed -i "s#TMPL_IFACE_TYPE#tcp#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_INTERFACE#eth1#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_XPORT_TYPE#lnet#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_CLUSTER_ID#$CLUSTER_ID#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_POOL_DATA#'$data'#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_POOL_PARITY#'$parity'#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl
sed -i "s#TMPL_POOL_SPARE#'$spare'#" /opt/seagate/cortx/motr/conf/motr.prepare.tmpl


sed -i "s#TMPL_MACHINE_ID#$MACHINEID1#" /opt/seagate/cortx/motr/conf/motr.config.tmpl
sed -i "s#TMPL_TYPE#VM#" /opt/seagate/cortx/motr/conf/motr.config.tmpl

sed -i "s#TMPL_MACHINE_ID#$MACHINEID1#" /opt/seagate/cortx/motr/conf/motr.test.tmpl
sed -i "s#TMPL_HOSTNAME#$HOSTNAME1#" /opt/seagate/cortx/motr/conf/motr.test.tmpl