Development VM - RENCI-NRIG/orca5 GitHub Wiki

This VM contains an ORCA source tree (based on the aydan-recovery branch), an installation of ORCA configured to run in emulation, and an Eclipse development environment set up for working with the checked-out ORCA source.

What follows are directions for getting it running, and a description of its contents.

First, you will need VirtualBox installed, if you do not already have it; you can get it from: https://www.virtualbox.org/wiki/Downloads

Once it is installed, download the VM image, in OVA format, from: http://geni-images.renci.org/images/vjo/geni-dev.ova

The image is approximately 2.7 GB in size, so plan your download accordingly. The VM that it creates has a dynamically-sized hard disk image that can grow to 25 GB, but currently consumes only 8 GB. The VM is configured with 4 GB of RAM, in order to ensure that the ORCA actors have enough space to run in, and that Eclipse’s performance is adequate.

Once the OVA has been downloaded, you can simply double-click it to begin the import. Click Next (after optionally choosing whether to re-initialize the MAC address of the virtual NIC, which is unnecessary), and then agree to the license (Eclipse/GENI public); VirtualBox will then proceed to import the VM.

Once the VM is imported, start it up. It will boot to a GNOME login screen, with the “geni-orca” user selected.

Log in as the geni-orca user, using the password:

idreamofgeni

Once you are logged in, you will be presented with a GNOME desktop environment that has a link to Eclipse on the desktop. The geni-orca user has full sudo rights, without requiring a password.

The root user shares the same password as the geni-orca user, in case you need that.

The ORCA source is checked out in: ~/workspace-orca/orca

If you double-click the Eclipse link, Eclipse will run and prompt you for the workspace to use; the correct workspace should already be present.

You can now proceed with development. ;)

Running in emulation:

The VM has been configured to be able to run ORCA in emulation. It does so in a framework that greatly resembles an ExoGENI head node.

In order to start all the actors, in a clean set of containers, run (as root, or via sudo):

/etc/init.d/orca_am+broker-12080 clean-restart
/etc/init.d/orca_sm-14080 clean-restart
/etc/init.d/orca_controller-11080 restart
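Since the same three init scripts receive near-identical actions at several points in these directions, a small wrapper can cut the repetition. This is only a sketch: the orca_all helper and the DRY_RUN switch are illustrative conveniences, not part of the VM; note that the controller script gets a plain restart rather than clean-restart.

```shell
# Sketch only: orca_all and DRY_RUN are illustrative, not part of the VM.
# With DRY_RUN=1 the commands are printed instead of executed; set DRY_RUN=
# (empty) on the VM, and run as root or via sudo, to actually execute them.
DRY_RUN=1

run() { ${DRY_RUN:+echo} "$@"; }

# Apply action $1 to the AM+broker and SM containers, and $2 to the controller.
orca_all() {
    run /etc/init.d/orca_am+broker-12080 "$1"
    run /etc/init.d/orca_sm-14080 "$1"
    run /etc/init.d/orca_controller-11080 "$2"
}

orca_all clean-restart restart   # clean start of all the actors
```

The same helper covers the stop step later in these directions (orca_all stop stop).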

All ORCA configuration is under /etc/orca for these containers, and there are symlinks to the log directories for each container in this directory structure as well.

Deploying your own code in emulation:

Let’s presume you’ve implemented $NEW_FEATURE in your copy of the VM, and you want to test it in emulation, before checking it in. How do you install and test your new code?

First, stop whatever containers you have running (as root, or via sudo):

/etc/init.d/orca_am+broker-12080 stop
/etc/init.d/orca_sm-14080 stop
/etc/init.d/orca_controller-11080 stop

Next, you’ll need to build and install RPMs of ORCA. This can be accomplished by a script that has been checked into the source. Simply open a terminal window, and change directories to: ~/workspace-orca/orca

Then execute: ./redhat/buildrpm.sh -i

Issuing this command will automatically build ORCA RPMs from your source, and install them for you in your VM. The RPMs are built and stored underneath the following directory structure: ~/orca-build

The buildrpm.sh script takes 3 optional arguments, which may be combined (e.g. ./redhat/buildrpm.sh -f -c -i):

-i = Install the RPMs. By default, buildrpm.sh will merely build the RPMs, but not install them.

-f = Delete the Maven repository that is used for building the RPMs. The Maven repository used to build the RPMs is kept under ~/orca-build as well. This is done so that you can manage the Maven repository that Eclipse uses in ~/.m2 as you choose (different settings.xml, adding your own jars/poms before putting them in Nexus, etc.). When you execute buildrpm.sh with -f as one of the options, you ensure that ORCA is built fresh - with only jars/poms that it can download from our Nexus at the time of build. Specifying this option causes the build to take longer, because you’re having to fetch all artifacts anew from Nexus.

-c = Clean all RPM build artifacts. By default, buildrpm.sh will build RPMs - but not delete them; it retains all RPMs that were previously built (but overwrites those built on the same day, with the same Subversion globalrev). If you specify -c as one of the options to buildrpm.sh, all previously built RPMs will be cleaned out (so that you do not waste precious VM disk space storing them).

Once your RPMs have been successfully built and installed, simply “clean-restart” the containers, as described above, and you will be able to test your code changes in emulation.
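The whole stop / build / clean-restart cycle above can be sketched as a single script. The run() wrapper and DRY_RUN switch are illustrative conveniences (set DRY_RUN to empty on the VM, and run as root or via sudo, to actually execute the steps); the paths and service names are those given in these directions.

```shell
# Sketch of the edit-build-test cycle; run() and DRY_RUN are illustrative only.
DRY_RUN=1   # set DRY_RUN= (empty) to execute instead of printing

run() { ${DRY_RUN:+echo} "$@"; }

# 1. Stop the running containers.
run /etc/init.d/orca_am+broker-12080 stop
run /etc/init.d/orca_sm-14080 stop
run /etc/init.d/orca_controller-11080 stop

# 2. Build RPMs from your modified source and install them.
run cd ~/workspace-orca/orca
run ./redhat/buildrpm.sh -i

# 3. Clean-restart the containers to pick up the new code.
run /etc/init.d/orca_am+broker-12080 clean-restart
run /etc/init.d/orca_sm-14080 clean-restart
run /etc/init.d/orca_controller-11080 restart
```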

Please let me know if any of these directions are unclear. It is my intent to convert these off-the-cuff directions into a wiki page that we can publish, along with the VM image, as an artifact of the ExoGENI project - so, any suggestion I implement to clarify, simplify, or improve is welcomed. ;)