HOWTO: Debug initial setup

  • The first place to look for errors is the log files, located at /var/log/cloudbench/<USERNAME>_*. Orchestrator-side operations (e.g., vmattach) are recorded in /var/log/cloudbench/<USERNAME>_operations.log, while the output of the Virtual Application's configuration scripts is logged to /var/log/cloudbench/<USERNAME>_remotescripts.log. In case of failure, it is also useful to restart CBTOOL with the option "-v 8" (see the sketch below).
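
For instance, a quick first pass over both logs, followed by a more verbose restart, could look like the sketch below. This is a sketch only: ${USER} stands in for <USERNAME>, and combining --soft_reset with "-v 8" on the cb command line is an assumption based on the invocation shown in the example further down this page.

# Look for recent errors on the Orchestrator side and on the guest (remote scripts) side
grep -i error /var/log/cloudbench/${USER}_operations.log | tail -n 20
grep -i error /var/log/cloudbench/${USER}_remotescripts.log | tail -n 20

# Restart CBTOOL with increased verbosity (assumes the CLI lives at ~/cbtool/cb)
~/cbtool/cb --soft_reset -v 8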

  • Debugging single-VM attachment errors (e.g. vmattach tinyvm):

    • First of all, instruct CBTOOL to deploy the instances without attempting to establish contact with them, by running the command vmdev.

    • After that, just run vmattach tinyvm once. CBTOOL will output each command that would have been executed against the VM. At this point, you can just execute the commands yourself and check for error messages.

    • Once done, do not forget to disable the debug mode by issuing the command vmundev (the full cycle is recapped after the example below).

    • NOTE: There is a small utility called "cbssh" that can be used to connect directly to the VMs. To try it, just run - at a bash prompt - cd ~/cbtool; ~/cbtool/cbssh vm_1, and you should be able to log in to the instance.

    • EXAMPLE:

rdu37@repo-1:~/cbtool$ ~/cbtool/cb --soft_reset
Cbtool version is "3fed040"
Parsing "cloud definitions" file..... "/home/rdu37/cbtool/lib/auxiliary//../..//configs/rdu37_cloud_definitions.txt" opened and parsed successfully.
Checking "Object Store".....An Object Store of the kind "Redis" (shared) on node 10.37.0.1, TCP port 6379, database id "0" seems to be running.
Checking "Log Store".....A Log Store of the kind "rsyslog" (private) on node 10.37.0.1, UDP port 5138 seems to be running.
Checking "Metric Store".....A Metric Store of the kind "MongoDB" (shared) on node 10.37.0.1, TCP port 27017, database id "metrics" seems to be running.
Checking "File Store".....A File Store of the kind "rsync" (private) on node 10.37.0.1, TCP port 10039 seems to be running.
Executing "hard" reset: (killing all running toolkit processes and flushing stores) before starting the experiment......
Killing all processes... done
Flushing Object Store... done
Flushing Log Store... done

Flushing Metric Store... done
Checking for a running API service daemon.....API Service daemon was successfully started. The process id is ['16371'] (http://10.37.0.1:7037).
Checking for a running GUI service daemon.....GUI Service daemon was successfully started. The process id is ['16674', '16675'], listening on port 8037. Full url is "http://10.37.0.1:8037".
 status: VPN configuration for this cloud already generated: /home/rdu37/cbtool/lib/auxiliary//../../configs/generated/RDU37_server-cb-openvpn.conf
Checking for a running VPN daemon.....VPN daemon was successfully started. The process id is [].
 status: OpenStack connection parameters: username=admin, password=<omitted>, tenant=admin, cacert=None, insecure=False, region_name=RegionOne, access_url=http://10.22.1.38:5000/v2.0/, endpoint_type=publicURL

 OpenStack status: Checking if the ssh key pair "rdu37_default_cbtool_rsa" is created on VMC RegionOne.... done
 OpenStack status: Checking if the security group "default" is created on VMC RegionOne.... done

 OpenStack status: Checking if the floating pool "ext_net" can be found on VMC RegionOne... done
 OpenStack status: Checking if the network "private1" can be found on VMC RegionOne... done. This tenant network will be used as the default for provisioning.
 OpenStack status: Checking if the imageids associated to each "VM role" are registered on VMC RegionOne.... done
 status: WARNING Image id for VM roles "driver_tradelite,client_tradelite": "cb_tradelite" is NOT registered (attaching VMs with any of these roles will result in error).
         WARNING Image id for VM roles "windows,client_windows": "cb_windows" is NOT registered (attaching VMs with any of these roles will result in error)
 status: VMC "RegionOne" was successfully tested.

The "osk" cloud named "RDU37" was successfully attached to this experiment.
The experiment identifier is EXP-02-23-2016-07-33-30-PM-UTC

 status: Removing all VMs previously created on VMC "RegionOne" (only VM names starting with "cb-rdu37-RDU37").....
 status: Removing all VVs previously created on VMC "RegionOne" (only VV names starting with "cb-rdu37-RDU37").....
 status: Discovering hosts on VMC "RegionOne".....
 status: Starting a new Host OS performance monitor daemon (gmetad.py)......
 status: Host OS performance monitor daemon (gmetad.py) started successfully. The process id is ['18297'] (using ports 8637 and 8737).
All VMCs successfully attached to this experiment.
(RDU37) vmdev
The global object "vm_defaults" on Cloud RDU37 was modified:
|"sub-attribute" (key)                |old value                          |new value
|check_boot_complete                  |tcp_on_22                          |wait_for_0
|transfer_files                       |True                               |false
|run_generic_scripts                  |True                               |false
|debug_remote_commands                |False                              |true

(RDU37) vmattach tinyvm
 status: Starting an instance on OpenStack, using the imageid "cb_nullworkload" (<Image: cb_nullworkload> qemu) and size "m1.tiny" (<Flavor: m1.tiny>), connected to networks "private1", on VMC "RegionOne", under tenant "default" (ssh key is "rdu37_default_cbtool_rsa" and userdata is "auto")
 status: Attempting to add a floating IP to vm_1...
 status: Waiting for vm_1 (cloud-assigned uuid 244077cc-d1ae-4650-b881-e2e2d9ef454c) to start...
 status: Trying to establish network connectivity to vm_1 (cloud-assigned uuid 244077cc-d1ae-4650-b881-e2e2d9ef454c), on IP address 10.37.1.126...
 status: Checking ssh accessibility on vm_1 (ssh fedora@10.37.1.126)...
 status: This is the command that would have been executed from the orchestrator :
         ssh  -i /home/rdu37/cbtool/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l fedora 10.37.1.126 "/bin/true"
 status: Bootstrapping vm_1 (creating file cb_os_paramaters.txt in "fedora" user's home dir on 10.37.1.126)...
 status: This is the command that would have been executed from the orchestrator :
         ssh  -i /home/rdu37/cbtool/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l fedora 10.37.1.126 "mkdir -p /home/fedora/cloudbench;echo '#OSKN-redis' > /home/fedora/cb_os_parameters.txt;echo '#OSHN-10.37.0.1' >> /home/fedora/cb_os_parameters.txt;echo '#OSPN-6379' >>  /home/fedora/cb_os_parameters.txt;echo '#OSDN-0' >>  /home/fedora/cb_os_parameters.txt;echo '#OSTO-240' >>  /home/fedora/cb_os_parameters.txt;echo '#OSCN-RDU37' >>  /home/fedora/cb_os_parameters.txt;echo '#OSMO-controllable' >>  /home/fedora/cb_os_parameters.txt;echo '#OSOI-TEST_rdu37:RDU37' >>  /home/fedora/cb_os_parameters.txt;echo '#VMUUID-A92E5A88-01D8-5A9F-B85E-8939F82585CA' >>  /home/fedora/cb_os_parameters.txt;sudo chown -R fedora:fedora /home/fedora/cb_os_parameters.txt;sudo chown -R fedora:fedora  /home/fedora/cloudbench"
 status: Sending a copy of the code tree to vm_1 (10.37.1.126)...
 status: This is the command that would have been executed from the orchestrator :
         rsync -e "ssh  -i /home/rdu37/cbtool/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l fedora " --exclude-from '/home/rdu37/cbtool/lib/auxiliary//../../exclude_list.txt' -az --delete --no-o --no-g --inplace -O /home/rdu37/cbtool/lib/auxiliary//../../* 10.37.1.126:~/cloudbench/
 status: Performing generic VM post_boot configuration on vm_1 (10.37.1.126)...
 status: This is the command that would have been executed from the orchestrator :
         ssh  -i /home/rdu37/cbtool/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l fedora 10.37.1.126 "~/cloudbench/scripts/common/cb_post_boot.sh"
VM object A92E5A88-01D8-5A9F-B85E-8939F82585CA (named "vm_1") sucessfully attached to this experiment. It is ssh-accessible at the IP addresses 10.37.1.126 and 10.10.1.118 (cb-rdu37-RDU37-vm1-tinyvm).
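
To recap, the whole single-VM debugging cycle boils down to the sequence sketched below. The ssh line is only an illustration of the kind of command CBTOOL prints in debug mode; copy the exact commands from your own output.

(RDU37) vmdev                    # deploy VMs without contacting them; only print the commands
(RDU37) vmattach tinyvm          # attach one VM and collect the printed commands
# from a bash prompt, run each printed command by hand and watch for errors, e.g.:
# ssh -i ~/cbtool/credentials/cbtool_rsa -o StrictHostKeyChecking=no -o BatchMode=yes -l fedora 10.37.1.126 "/bin/true"
~/cbtool/cbssh vm_1              # optionally, log in to the instance directly
(RDU37) vmundev                  # once done, disable the debug mode again
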
  • Debugging single-VApp attachment errors (e.g. aiattach nullworkload):

    • First of all, instruct CBTOOL to deploy the Virtual Application instances without attempting to run the configuration scripts on their VMs, by running the command appdev.

    • After that, just run aiattach nullworkload once. CBTOOL will output each command that would have been executed against each of the VMs that compose the VApp. At this point, you can execute the commands yourself and check what could be wrong. IMPORTANT: Please note that some commands should be executed in parallel (or at least in a non-sequential manner) and therefore might require multiple prompts for overlapping execution (see the sketch after the example below).

    • Once done, do not forget to disable the debug mode by issuing the command appundev.

    • EXAMPLE:

(TESTSIMCLOUD) aiattach cassandra_ycsb
 status: Waiting for vm_41 (cloud-assigned uuid 102D975B-114B-508F-8B46-557A6220DB37) to start...
 status: Waiting for vm_42 (cloud-assigned uuid DFD0163F-405D-5CF3-92E4-791B8DA5854D) to start...
 status: Waiting for vm_40 (cloud-assigned uuid C574CE93-7EF3-503C-BFE5-E7AF19D25B9F) to start...
 status: Trying to establish network connectivity to vm_41 (cloud-assigned uuid 102D975B-114B-508F-8B46-557A6220DB37), on IP address 158.77.154.174...
 status: Trying to establish network connectivity to vm_42 (cloud-assigned uuid DFD0163F-405D-5CF3-92E4-791B8DA5854D), on IP address 27.128.131.80...
 status: Trying to establish network connectivity to vm_40 (cloud-assigned uuid C574CE93-7EF3-503C-BFE5-E7AF19D25B9F), on IP address 247.238.144.48...
 status: Checking ssh accessibility on vm_42 (ssh cbuser@27.128.131.80)...
 status: Bootstrapping vm_42 (creating file cb_os_paramaters.txt in "cbuser" user's home dir on 27.128.131.80)...
 status: Checking ssh accessibility on vm_41 (ssh cbuser@158.77.154.174)...
 status: Sending a copy of the code tree to vm_42 (27.128.131.80)...
 status: Bootstrapping vm_41 (creating file cb_os_paramaters.txt in "cbuser" user's home dir on 158.77.154.174)...
 status: Sending a copy of the code tree to vm_41 (158.77.154.174)...
 status: Checking ssh accessibility on vm_40 (ssh cbuser@247.238.144.48)...
 status: Bootstrapping vm_40 (creating file cb_os_paramaters.txt in "cbuser" user's home dir on 247.238.144.48)...
 status: Sending a copy of the code tree to vm_40 (247.238.144.48)...
 status: Starting a new "performance emitter" for ai_11
 status: Performing generic application instance post_boot configuration on all VMs belonging to ai_11...
 status: Running application-specific "setup" configuration on all VMs belonging to ai_11...
 status: QEMU Scraper will NOT be automatically started during the deployment of ai_11...
 status: This is the command that would have been executed from the orchestrator on STEP 1 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 247.238.144.48 "/bin/true"
 status: This is the command that would have been executed from the orchestrator on STEP 1 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 158.77.154.174 "~/cb_restart_seed.sh"
 status: This is the command that would have been executed from the orchestrator on STEP 1 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 27.128.131.80 "~/cb_restart_seed.sh"
 status: This is the command that would have been executed from the orchestrator on STEP 2 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 247.238.144.48 "~/cb_setup_ycsb.sh"
 status: This is the command that would have been executed from the orchestrator on STEP 2 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 158.77.154.174 "/bin/true"
 status: This is the command that would have been executed from the orchestrator on STEP 2 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 27.128.131.80 "/bin/true"
 status: This is the command that would have been executed from the orchestrator on STEP 3 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 247.238.144.48 "/bin/true"
 status: This is the command that would have been executed from the orchestrator on STEP 3 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 158.77.154.174 "/bin/true"
 status: This is the command that would have been executed from the orchestrator on STEP 3 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 27.128.131.80 "/bin/true"
 status: This is the command that would have been executed from the orchestrator on STEP 4 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 247.238.144.48 "~/cb_start_load_manager.sh"
 status: This is the command that would have been executed from the orchestrator on STEP 4 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 158.77.154.174 "/bin/true"
 status: This is the command that would have been executed from the orchestrator on STEP 4 :
         ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 27.128.131.80 "/bin/true"
AI object 2E3844CE-7FE4-5794-8240-8BBE7066C2ED (named "ai_11") sucessfully attached to this experiment. It is ssh-accessible at the IP address 223.17.156.219 (C574CE93-7EF3-503C-BFE5-E7AF19D25B9F.simcloud.com).
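
As noted above, the commands within each STEP are meant to run concurrently across the VApp's VMs. One way to reproduce that from a single bash prompt is to background them and wait, as in the sketch below. The ssh lines are copied verbatim from the STEP 1 output above; substitute the commands printed in your own run.

# Run all "STEP 1" commands in parallel, then block until all of them have finished
ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 247.238.144.48 "/bin/true" &
ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 158.77.154.174 "~/cb_restart_seed.sh" &
ssh  -p 22  -i /home/msilva/cloudbench/lib/auxiliary//../../credentials/cbtool_rsa  -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null  -o BatchMode=yes  -l cbuser 27.128.131.80 "~/cb_restart_seed.sh" &
wait
# Repeat the same pattern for the STEP 2, STEP 3 and STEP 4 groups, one group at a time
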
  • Debugging the absence of performance metrics after a single-VApp attachment (e.g. aiattach nullworkload):

    • First of all, instruct CBTOOL to fully deploy the Virtual Application instance, but without starting the actual load generation, by running the command appnoload.

    • After that, just run aiattach nullworkload once. At the very end, CBTOOL will output a message such as "Load Manager will NOT be automatically started on VM NAME during the deployment of VAPP NAME...". After the Virtual Application attachment completes, move to another terminal, log in to that VM with cd ~/cbtool; ~/cbtool/cbssh <VM NAME>, and then run /usr/local/bin/cbloadman debug.

    • Once done, do not forget to disable the debug mode by issuing the command appload (the full cycle is recapped below).
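
A minimal sketch of this third cycle, combining the CLI commands above with the manual Load Manager run. The cloud prompt and VM name are illustrative; use the VM name reported by CBTOOL in the "Load Manager will NOT be automatically started..." message.

(RDU37) appnoload                 # fully deploy VApps, but do not start the load generation
(RDU37) aiattach nullworkload     # note which VM the Load Manager will NOT be started on
# from another terminal, log in to that VM and run the Load Manager in debug mode:
cd ~/cbtool; ~/cbtool/cbssh vm_2  # "vm_2" is illustrative
/usr/local/bin/cbloadman debug
(RDU37) appload                   # once done, disable the debug mode again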
