VMs Installation - charlesfg/TPCx-V_setup GitHub Wiki

Help on how to install, clone and manipulate the VMs

Conventions

TODO: Explain the VM naming rules that help the automation scripts. Regex: tpc-(t[1-6])?g[1-4][ab][12]?
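
A sketch of how this convention can drive the automation (a hypothetical helper; it only assumes the regex above):

#!/bin/bash
# Hypothetical helper: extract the group, tier, and instance from a VM name
# that follows the tpc-(t[1-6])?g[1-4][ab][12]? convention.
name=${1:-$(hostname)}
if [[ $name =~ ^tpc-(t[1-6])?g([1-4])([ab])([12])?$ ]]; then
    echo "group=${BASH_REMATCH[2]} tier=${BASH_REMATCH[3]} instance=${BASH_REMATCH[4]:-1}"
else
    echo "$name does not match the naming convention" >&2
    exit 1
fi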

The following command worked; I had to do the installation itself interactively:

virt-install \
    --name tpc0 \
    --memory 4096 \
    --vcpus 2 \
    --disk size=6 \
    --location /home/charles/CentOS-7-x86_64-DVD-1511.iso \
    --os-type linux \
    --os-variant centos7.0 \
    --network bridge=br10,model=virtio \
    --network bridge=br200,model=virtio \
    --graphics none \
    --extra-args 'console=ttyS0,115200n8 serial' \
    --dry-run \
    -d 

Prepare /etc/hosts and reserve the IP addresses

I've decided to use a private network for the TPCx-V cluster environment, so I reserved the IP addresses and created the following /etc/hosts in the base VM.

10.0.0.20     tpc0
10.0.0.30     tpc-drive
10.0.0.31     tpc-g1a
10.0.0.32     tpc-g1b1
10.0.0.33     tpc-g1b2
10.0.0.34     tpc-g2a
10.0.0.35     tpc-g2b1
10.0.0.36     tpc-g2b2
10.0.0.37     tpc-g3a
10.0.0.38     tpc-g3b1
10.0.0.39     tpc-g3b2
10.0.0.40     tpc-g4a
10.0.0.41     tpc-g4b1
10.0.0.42     tpc-g4b2

This approach eases all the automation when cloning the virtual machines.
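
Since every name is already in /etc/hosts, scripts can look an address up instead of hard-coding it. A minimal sketch (hypothetical helper; the loops later on this page use the same grep/awk pattern):

# Print the address reserved for a VM name in /etc/hosts
ip_of() { awk -v h="$1" '$2 == h {print $1}' /etc/hosts; }
ip_of tpc-g1b1   # prints 10.0.0.32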

Clone a VM

virt-clone --original tpc0 --name tpc-g1b1 --auto-clone

obs: Note that virt-clone copies all disks attached to the VM. The flat-files disk is the same for all machines, so there is no need to clone it; remember to detach it first.
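
For example, assuming the flat-files disk is attached to the base VM as vdb (as in the attach command further below), it can be detached before cloning with:

virsh detach-disk tpc0 vdb --persistent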

Prepare the target VM

Using [libguestfs](http://libguestfs.org/):

# Append the flatfiles

virsh attach-disk tpc-g1b1 --source /var/lib/libvirt/images/tpc-flatfiles.img --target vdb --persistent

# Set the hostname and the static IP address

/opt/tpc/libguestfs-1.34.2/run virt-customize \
--domain tpc-g1b1 \
--hostname tpc-g1b1 \
--edit /etc/sysconfig/network-scripts/ifcfg-eth0:'s/10.131.6.20/10.131.6.32/g' \
--dry-run


virt-customize \
--domain tpc-g1b1 \
--hostname tpc-g1b1 \
--edit /etc/sysconfig/network-scripts/ifcfg-eth0:'s/BOOTPROTO=dhcp/BOOTPROTO=static/g' \
--append-line '/etc/sysconfig/network-scripts/ifcfg-eth0:IPADDR=10.131.6.32' \
--append-line '/etc/sysconfig/network-scripts/ifcfg-eth0:NETMASK=255.255.255.0' \
--append-line '/etc/sysconfig/network-scripts/ifcfg-eth0:DNS1=10.131.6.10' \
--append-line '/etc/fstab:/dev/vdb1 /vgenstore ext4 defaults 0 1' \
--dry-run \
--verbose



Configure a Tier B Virtual Machine

The idea here is to set up a base Tier B VM and later clone it programmatically using the scripts left in that base VM. We decided to use a separate disk as the main storage for the databases, since each group has different capacity needs and we can automate the disk partitioning and formatting. Before this step, you should create each disk that will store the dbstore folder (check the storage page).
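
As a sketch of that disk-creation step (the image name and size here are placeholders; see the storage page for each group's real capacity):

qemu-img create -f raw /var/lib/libvirt/images/tpc-g1b1-dbstore.img 50G
/opt/tpc/libguestfs-1.34.2/run virt-format -a /var/lib/libvirt/images/tpc-g1b1-dbstore.img \
    --partition=mbr --filesystem=ext4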

Before cloning, it is better not to have the disk attached permanently in the XML configuration of the guest machine.

We will create some scripts that automate the process described in section "2.5 Configure a Tier B Virtual Machine" of the user's manual.

Setup the First VM

First, follow the steps in the user guide interactively, taking into account the notes below so that the creation of the remaining Tier B VMs can be automated.

Configure the dbstore folders

In the root folder create the script setup_dbstore_folders.sh :

#!/bin/bash

# Create the PostgreSQL data, index, and temp directories on the dbstore disk
mkdir -p /dbstore/tpcv-data
mkdir -p /dbstore/tpcv-index
mkdir -p /dbstore/tpcv-temp
chown -R postgres:postgres /dbstore
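
This assumes the dbstore disk is already mounted at /dbstore; if it is not yet in /etc/fstab, a manual mount (assuming the disk shows up as /dev/vdc1, as in the fstab entries used later on this page) would be:

mkdir -p /dbstore && mount /dev/vdc1 /dbstore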

Configure postgres

In the root folder create the script setup_postgres.sh :

#!/bin/bash -x
PGDATA=/dbstore/tpcv-data
# Guest IP on the 10.131.6.0/24 network (currently unused below)
GUEST_IP=`ifconfig | grep 10.131.6 | awk '{print $2}'`

# Override the stock unit so PostgreSQL uses the dbstore data directory
cat << EOF  >> /etc/systemd/system/postgresql-9.3.service
.include /lib/systemd/system/postgresql-9.3.service
Environment=PGDATA=/dbstore/tpcv-data
EOF

# Drop-in with the same override, so it is also applied on restarts
mkdir /etc/systemd/system/postgresql-9.3.service.d

cat << EOF  >> /etc/systemd/system/postgresql-9.3.service.d/restart.conf
[Service]
Environment=PGDATA=/dbstore/tpcv-data
EOF

systemctl enable postgresql-9.3.service
/usr/pgsql-9.3/bin/postgresql93-setup initdb
systemctl start postgresql-9.3.service

# Trust all connections and accept them from any host
sed -i 's/peer\|ident/trust/g' /dbstore/tpcv-data/pg_hba.conf
sed -i 's/127.0.0.1\/32/0.0.0.0\/0/g' /dbstore/tpcv-data/pg_hba.conf

# Listen on all interfaces and apply the tuning parameters
sed -i "s/^#listen.*/listen_addresses = '*'/g" /dbstore/tpcv-data/postgresql.conf
sed -i 's/^shared_buffers.*/shared_buffers = 1024MB/g' /dbstore/tpcv-data/postgresql.conf
sed -i 's/^#wal_sync.*/wal_sync_method = open_datasync/g' /dbstore/tpcv-data/postgresql.conf
sed -i 's/^#wal_wri.*/wal_writer_delay = 10ms/g' /dbstore/tpcv-data/postgresql.conf
sed -i 's/^#checkpoint_seg.*/checkpoint_segments = 30/g' /dbstore/tpcv-data/postgresql.conf
sed -i 's/^#checkpoint_comple.*/checkpoint_completion_target = 0.9/g' /dbstore/tpcv-data/postgresql.conf

systemctl restart postgresql-9.3.service

In the postgres home folder, create the script create_database.sh:

#!/bin/bash -x

# Character 6 of a tpc-gXbY hostname is the group digit X; limit the
# Scaling[...] range in env.sh to this group before building the database
sed -i "s/Scaling\[1-4\]/Scaling[1-$(hostname | cut -c6)]/" /opt/VDb/pgsql/scripts/linux/env.sh
cd /opt/VDb/pgsql/scripts/linux
./setup.sh
cd -

Attach the dbstore disk to the VM (non-persistently, per the note above):

virsh attach-disk tpc-g1b1 --source /var/lib/libvirt/images/tpc_g1b1-dbstore.img --target vdc


Automatically generate the other Tier B VMs

The base Tier B VM, following our convention, is tpc-g1b1, so at this stage it already has all of the content above. Use the script clone_all_tierBvms.sh on the host machine to clone all the necessary Tier B VMs; a sketch of this script is given after the disk-attach commands below.

⚠️ obs: Remember to attach the disks permanently to the base Tier B VM and update /etc/fstab:

    virsh attach-disk tpc-g1b1 --source /var/lib/libvirt/images/tpc-flatfiles.img \
        --target vdb --persistent
    virsh attach-disk tpc-g1b1 --source /var/lib/libvirt/images/tpc-g1b1-dbstore.img \
        --target vdc --persistent
    /opt/tpc/libguestfs-1.34.2/run virt-customize \
            --domain tpc-g1b1 \
            --edit /etc/fstab:'eof && do{print "$_"; print "/dev/vdb1\t/vgenstore\text4\tdefaults\t0\t1\n"}' \
            --edit /etc/fstab:'eof && do{print "$_"; print "/dev/vdc1\t/dbstore\text4\tnofail,noatime,nodiratime,nobarrier\t0\t1\n"}'
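
A minimal sketch of clone_all_tierBvms.sh, assuming the base VM is shut off and that the IP layout follows the addressing used in the virt-customize calls above (hypothetical; adjust to the real reserved addresses):

#!/bin/bash
# Hypothetical sketch: clone the base Tier B VM (tpc-g1b1) into the
# remaining tpc-g[1-4]b[12] guests and fix up hostname and IP on each clone.
set -o errexit

BASE=tpc-g1b1
for g in 1 2 3 4; do
    for b in 1 2; do
        vm="tpc-g${g}b${b}"
        [ "$vm" = "$BASE" ] && continue
        ip="10.131.6.$((28 + 3 * g + b))"   # assumed layout: gXa=.31/.34/.37/.40, bN = a+N
        virt-clone --original $BASE --name $vm --auto-clone
        /opt/tpc/libguestfs-1.34.2/run virt-customize \
            --domain $vm \
            --hostname $vm \
            --edit /etc/sysconfig/network-scripts/ifcfg-eth0:"s/10.131.6.32/$ip/g"
    done
done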

Setup the first Tier A VM

virt-clone --original tpc0 --name tpc-g1a --auto-clone

/opt/tpc/libguestfs-1.34.2/run virt-customize \
--domain tpc-g1a \
--hostname tpc-g1a \
--edit /etc/sysconfig/network-scripts/ifcfg-eth0:'s/10.131.6.20/10.131.6.31/g' \
--edit /
--dry-run

  • Follow the instructions in the user guide with the following caveat:
    • When checking that you can reach the database via ODBC, use ./traderesult instead of ./tradestatus. The latter issues an SQL Failed error without any further information even though it is able to connect to the database; I lost too much time trying to figure out what was going wrong. After running tradestatus, I found that some queries had SQL Failed errors while others had SQL Success status.

Change the Two Data Sources

Using passwordless SSH, we will set up the newly cloned VMs with the following scripts, placed under the root folder of the new VM.
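
If passwordless SSH is not in place yet, a minimal sketch (assuming the key is generated on the driver and pushed for root; adapt the user as needed):

# Generate a key once, then push it to every tpc-g* VM listed in /etc/hosts
test -f ~/.ssh/id_rsa || ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in `cat /etc/hosts | grep tpc-g | awk '{print $2}'`; do
    ssh-copy-id -o 'StrictHostKeyChecking no' root@$h
done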

change_vma_datasource.sh

#!/bin/bash

set -o verbose
set -o errexit

# Derive this group's Tier B hostnames (tpc-gXb1, tpc-gXb2) from the
# tpc-gXa hostname and point the data sources in odbc.ini at them
for i in 1 2; do
        H=$(hostname | sed "s/a/b$i/g")
        sed -i "s/tpc-g1b$i/$H/g" /etc/odbc.ini
done

tpc-gXa_connectivity_check.sh

#!/bin/bash
set -x

# For each of this group's two Tier B hosts, check direct psql connectivity
# and then the ODBC path by rebuilding traderesult against each DSN
cd
rm -fv /opt/VDb/pgsql/dml/test_programs/traderesult
sed -i 's/DSN=PSQL[1-3]/DSN=PSQL2/g' /opt/VDb/pgsql/dml/test_programs/traderesult.c
for i in 1 2;
do
    export PGHOST=$(hostname | sed "s/a/b$i/g")

    psql tpcv -c "select count(*) from sector"
    cd /opt/VDb/pgsql/dml/test_programs
    make traderesult
    ./traderesult 2>&1 | grep Success
    rm -fv traderesult
    sed -i 's/DSN=PSQL[1-3]/DSN=PSQL3/g' /opt/VDb/pgsql/dml/test_programs/traderesult.c
done

Setup the driver VM

  • Run the following commands to make SSH accept the host keys and to check connectivity:

for i in `cat /etc/hosts | grep tpc-g | awk '{print $2}' `; do ssh -o 'StrictHostKeyChecking no' $i 'hostname && whoami'; done

for i in `cat /etc/hosts | grep tpc-g | awk '{print $2}' `; do su postgres -c "ssh -o 'StrictHostKeyChecking no' $i 'hostname && whoami'"; done

Caveat about changing the vcfg.properties

  • Avoid using localhost for components of the architecture that appear to run locally. The documentation is not very clear about some parameters, and it is hard to tell whether they are only used locally or are also accessed by the vconnectors.

Problems when running runme.sh

  • You also have to copy testbed.properties to /opt/VDriver/jar/. You can find a copy of this file at:
  • For some reason the checksum step failed, so you should rsync the /opt/VDriver folder to the tpc-gXbY VMs.

Failed Runs

  • I've noticed that no Trade Requests were received:
2016-11-10 14:58:27.000 MEE-0-0 Lsnr:   Trade Requests received: 0
2016-11-10 14:58:27.000 MEE-1-0 Lsnr:   Trade Requests received: 0
2016-11-10 14:58:27.000 MEE-2-0 Lsnr:   Trade Requests received: 0
2016-11-10 14:58:27.000 MEE-3-0 Lsnr:   Trade Requests received: 0
2016-11-10 14:58:27:674 VMee-0 Main: Waiting for the command to shut
  • I've found the following suspicious logging output whose meaning I couldn't (yet) discover; maybe it can help in spotting what could be wrong.
  • It was collected from the direct output of the runme.sh script:
1: WARNING: num_found = 20, num_updated = 10 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 11 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 12 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 13 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 14 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 16 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 1 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 2 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 3 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 4 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 5 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 6 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 7 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 8 (numrows = 20)
1: WARNING: num_found = 20, num_updated = 9 (numrows = 20)

Unexpected response status received: -1012.
Unexpected response status received: -211.
Unexpected response status received: 641.
Unexpected response status received: -721.

Also, I've noticed that no transactions were sent to the VMee. I've sniffed the ports where the VMee processes were listening:


TODO:

  • Configure the clones of vg[1-4]b[12]
 for i in `cat /etc/hosts | grep 'tpc-g' | awk '{print $2}'`; do  echo $i; ssh $i 'ssh -o "StrictHostKeyChecking no" 10.131.6.30 "hostname && whoami"'; done


for i in `cat /etc/hosts | grep 'tpc-g' | awk '{print $2}'`; do
    echo $i
    rsync -avz /etc/hosts $i:/etc/hosts
done

for i in `cat /etc/hosts | grep 'tpc-g' | awk '{print $2}'`; do
    echo $i
    scp /opt/VDriver/scripts/rhel6/runme.sh postgres@$i:/opt/VDriver/scripts/rhel6/runme.sh
done

for i in *.log; do echo :::::::::: $i ::::::::::::::; grep -A 25 'Iteration: 1' $i ; done

for i in `cat /etc/hosts | grep 'tpc-g' | awk '{print $2}'`; do
    rsync -avz /opt/VDriver/ $i:/opt/VDriver/
done
