VM Enabled Pods

A VM-enabled pod is a PadoGrid pod created in a VM-enabled workspace. Clusters created with any of the VM-enabled pods automatically inherit the workspace VM configuration. This means any cluster you create in a VM-enabled workspace, with or without pod association, is automatically wired to run in a PadoGrid VM environment, allowing you to manage it from the host OS or from any of the guest OS VMs.

In this section, we show how to create a VM-enabled pod and attach it to a cluster.

Required Software

  • PadoGrid 0.9.7+
  • Vagrant
  • VirtualBox
  • JDK (Linux)
  • IMDG (Linux)

Installation Steps

  1. Download and install the required Linux software products, i.e., JDK and IMDG, in the host OS file system. For our example, we assume they are installed as follows.
| Product | Host OS File System Path |
| ------- | ------------------------ |
| JDK | /Users/dpark/Padogrid/products/linux/java-se-8u41-ri |
| IMDG | /Users/dpark/Padogrid/products/linux/apache-geode-1.13.1 |
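For reference, here is a minimal staging sketch, assuming the JDK and Geode tarballs have already been downloaded (the archive file names are illustrative and may differ from your downloads):

# Stage the Linux products in the host OS file system.
mkdir -p /Users/dpark/Padogrid/products/linux
cd /Users/dpark/Padogrid/products/linux
tar xzf ~/Downloads/apache-geode-1.13.1.tgz
tar xzf ~/Downloads/java-se-8u41-ri-linux-x64.tar.gz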
  2. Create a workspace with VM enabled. Specify the guest OS paths using the -vm options. By convention, PadoGrid should be in /home/vagrant/ and all other products in /home/vagrant/products/. Also note that -vm-user must be set to vagrant, the default user of all Vagrant VMs. All product installations in the Vagrant VMs are done in the vagrant user's home directory, i.e., /home/vagrant.
| Option | Description |
| ------ | ----------- |
| -name | Workspace name |
| -product | Absolute path of the IMDG home directory in the host OS file system |
| -vm | Enables VM. Takes no value. |
| -vm-java | Guest OS JDK absolute path, i.e., JAVA_HOME |
| -vm-product | Guest OS IMDG absolute path |
| -vm-padogrid | Guest OS PadoGrid absolute path |
| -vm-user | Guest OS user. The user name is always 'vagrant'. |
# Create workspace named 'ws-geode'
create_workspace -quiet \
  -name ws-geode \
  -product /Users/dpark/Padogrid/products/apache-geode-1.13.1 \
  -vm \
  -vm-java /home/vagrant/products/java-se-8u41-ri \
  -vm-product /home/vagrant/products/apache-geode-1.13.1 \
  -vm-padogrid /home/vagrant/padogrid_0.9.7-SNAPSHOT \
  -vm-user vagrant

# Switch into 'ws-geode'
switch_workspace ws-geode
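
Optionally, you can confirm that the workspace was created; list_workspaces lists all workspaces in the current PadoGrid environment.

# Optionally confirm that 'ws-geode' was created.
list_workspaces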
  3. Create the pod.
# Directory specified by '-dir' is the host OS directory where the JDK and IMDG are installed.
# Configure 1024 MiB for primary node and 2048 MiB for data nodes. Enable Avahi to allow hostname
# lookup of *.local hostnames via mDNS, i.e., pnode.local, node-01.local, etc.
create_pod -quiet \
  -ip 192.168.56.10 \
  -avahi \
  -pod mypod \
  -pm 1024 \
  -nm 2048 \
  -count 2 \
  -box hashicorp/bionic64 \
  -dir /Users/dpark/Padogrid/products/linux

The create_pod command above creates a pod as follows. Note that you can replace the default node names using the -primary and -prefix options. Run create_pod -? to see the usage.

| Item | Value |
| ---- | ----- |
| First node IP address | 192.168.56.10 |
| Avahi Enabled | true |
| Pod Name | mypod |
| VM Names | mypod.pnode, mypod.node-01, mypod.node-02 |
| Primary Node | pnode.local |
| Primary Node Memory | 1024 MiB |
| Data Nodes | node-01.local, node-02.local |
| Data Node Memory | 2048 MiB |
| Vagrant Box | hashicorp/bionic64 |
| Host OS Product Directory | /Users/dpark/Padogrid/products/linux |

❗️ If Avahi is disabled (the default), then you must use the IP addresses instead, as sketched below.
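
For example, the pod created above assigns the node IP addresses sequentially from the first node IP specified by -ip (an assumption based on the options above):

# Without Avahi, use IP addresses in place of the *.local host names.
#   pnode   -> 192.168.56.10
#   node-01 -> 192.168.56.11
#   node-02 -> 192.168.56.12
ssh vagrant@192.168.56.10   # primary node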

  4. Build and start the pod (Vagrant VMs).
# Build and start pod. This downloads and installs the Vagrant box.
build_pod -pod mypod

# Show pod status
show_pod -pod mypod -long
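
You can also inspect the VMs directly with Vagrant. A sketch, assuming cd_pod changes to the pod directory, where the generated Vagrantfile resides:

# Check the VM states with Vagrant from the pod directory.
cd_pod mypod
vagrant status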
  5. Create and start the cluster.
# Create cluster with 'mypod' associated
create_cluster -cluster mycluster -pod mypod

# Switch to the cluster
switch_cluster mycluster

# Run the cluster
start_cluster

You can also manage the cluster from any of the VMs (primary and data nodes).

# Primary Node
ssh vagrant@pnode.local
switch_cluster mycluster
show_cluster

# Data Node
ssh vagrant@node-01.local
switch_cluster mycluster
show_cluster
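
When you are done, you can tear everything down from the host OS. A sketch, assuming the stop_cluster, stop_pod, and remove_pod commands:

# Stop the cluster and halt the pod VMs.
stop_cluster
stop_pod -pod mypod

# Destroy the VMs entirely (optional).
# remove_pod -pod mypod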

Running Apps

Your apps can connect to the cluster from either the host OS or any of the guest OS VMs. For our cluster example, we can create and run the perf_test app as follows.

create_app
cd_app perf_test
vi etc/client-cache.xml

Set the locator host to pnode.local in the etc/client-cache.xml file as shown below.

<client-cache>
...
    <pool name="serverPool">
         <locator host="pnode.local" port="10334" />
    </pool>
...
</client-cache>
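
Note that pnode.local is resolvable from the host OS only because Avahi was enabled when the pod was created. If Avahi is disabled, set the locator host to the primary node's IP address, 192.168.56.10, instead.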

Run the test_ingestion script to ingest data into the cluster.

cd bin_sh
./test_ingestion -run
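
Optionally, you can verify the ingested data with gfsh, Geode's command-line tool. A sketch; the actual region names depend on the perf_test configuration:

# Connect to the locator and list the regions populated by test_ingestion.
gfsh -e "connect --locator=pnode.local[10334]" -e "list regions"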

Tips

  1. When I start the cluster, I'm getting the following identification error.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:uY6GIjFdI9qTC4QYb980QRk+WblJF9cd5glr3SmmL+w.
Please contact your system administrator.
Add correct host key in /Users/dpark/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /Users/dpark/.ssh/known_hosts:40
ECDSA host key for pnode.local has changed and you have requested strict checking.
Host key verification failed.

Answer: This is due to conflicting public keys in the ~/.ssh/known_hosts file, typically left over from a previous pod build. Edit this file and remove all occurrences of the host names that match the pod VMs. In our example, they are pnode.local, node-01.local, and node-02.local. Alternatively, remove them with ssh-keygen as shown below.
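
# Remove the stale host keys ('-R' deletes all keys belonging to a host).
ssh-keygen -R pnode.local
ssh-keygen -R node-01.local
ssh-keygen -R node-02.local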


