Metashape Pro v1.7.3 on UA HPC (updated 2021-06-28) - cyverse-gis/suas-metadata GitHub Wiki

On your local computer

Install Metashape

# Linux standalone Metashape Pro (GUI)
wget https://s3-eu-west-1.amazonaws.com/download.agisoft.com/metashape-pro_1_8_4_amd64.tar.gz

# Metashape Python 3 module (this wheel is the macOS universal build; pick the wheel matching your OS and Python version)
wget https://s3-eu-west-1.amazonaws.com/download.agisoft.com/Metashape-1.8.4-cp35.cp36.cp37.cp38-abi3-macosx_11_0_universal2.macosx_10_13_x86_64.whl
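The wheel above is the macOS universal build; Agisoft publishes separate wheels per OS. A quick way to check what your machine needs (standard tooling, nothing Metashape-specific):

```shell
# OS and architecture (matches the platform tag in the wheel filename)
uname -sm
# local Python version (the wheel above supports CPython 3.5-3.8)
python3 --version
```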

Agisoft Metashape Floating Licenses

Using Agisoft Metashape v1.7.3 for Linux and Mac OS X.

Agisoft requires a valid license for their product. You can use single-seat licenses, or do what we're showing here: use a server.lic file to access floating licenses that you check out every time you start the program on a computer. A single license lets you run multiple instances of the Metashape GUI or network configuration on one computer.

We have 30 floating licenses which require you to be on the UA VPN or physically on the UA campus internal network.

Start the Metashape GUI on your local computer

Download the latest version of Agisoft Metashape v1.7.3 for your OS from the official website.

Note: these instructions will likely not work on Windows 10. The HPC uses a Linux OS with Linux file systems and permissions. It may be possible to use this workflow under Windows Subsystem for Linux 2.

You will need to add the server.lic file to the metashape-pro installation directory. Contact [email protected] for the server.lic file.
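Once you have server.lic, place it next to metashape.sh with owner-only permissions. A minimal sketch, using temporary directories to stand in for your download and install locations (the license contents here are a placeholder):

```shell
workdir="$(mktemp -d)"       # stands in for your download directory
installdir="$(mktemp -d)"    # stands in for the metashape-pro install directory
echo "placeholder" > "$workdir/server.lic"   # placeholder contents
# copy into place with owner-only read/write permissions
install -m 600 "$workdir/server.lic" "$installdir/server.lic"
ls -l "$installdir/server.lic"
```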

Enable Networking Mode

In Tools > Preferences > Network, check the "Enable network processing" option.

In the following steps, you will start up a second machine (HPC / Server / VM) running Metashape from the command line mode to use as your jobs server.

Start the job Monitor

You can run the Metashape Monitor and connect it to your server to watch jobs graphically.

# start the monitor from a terminal
~/metashape-pro/monitor.sh

The monitor should open in a new window.

Connect the monitor to the IP address or DNS name of the host machine running the Metashape server.
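Before connecting, you can check that the host is reachable from your machine. A quick sketch; the hostname is a placeholder, and 5840 is assumed to be Metashape's default network port:

```shell
host="metashape-host.example.org"   # placeholder: your server's IP or DNS name
port=5840                           # Metashape's default network port
if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "server reachable"
else
    echo "server not reachable"
fi
```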

On a remote server or VM

Start Metashape Server mode

To use network processing you need to run a copy of Metashape as a server.

You can use the same computer that runs the GUI for this step, but we suggest running the server on a remote machine. The server must have uninterrupted network access to all of the processing nodes for the duration of batch processing in Network Mode; if your laptop goes to sleep, or loses power or internet, the job is lost.

# use screen or tmux to keep the terminal alive if you lose your connection
tmux
# (detach with Ctrl-b d; reattach later with `tmux attach`)

# start Metashape in command-line mode as a server
~/metashape-pro/metashape.sh --server --host <your IP address> --root /xdisk --platform offscreen

Once the server is running, it will wait for connections from processing nodes.

If you have the Monitor running, its console will show the same logs that are echoed in the terminal.

Move images and project files to the HPC file system

This can be accomplished via a couple of different methods, documented by UA HPC:

https://public.confluence.arizona.edu/display/UAHPC/Transferring+Files

The files for processing must be on the HPC file system to conduct the network processing.
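For example, an rsync push from your workstation (shown as a comment because the endpoint and paths are placeholders; see the UA HPC page above for the supported transfer hosts):

```shell
# push a local image directory to your /xdisk space (placeholders throughout)
# rsync -avP ./images/ <netid>@<hpc-transfer-host>:/xdisk/<netid>/project/images/
```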

Mount your /xdisk space to your Metashape server

We're going to do a little bit of magic here to make our project files and images visible across the different machines.

The UA HPC has a shared file system, on which you can create your own /xdisk scratch space.

Specifically, we want to put all of our images and our Metashape project file into the /xdisk/<username> directory so that they can be seen and accessed by the processing nodes and by Metashape on our local computers.
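The layout itself is simple; what matters is that every machine sees the same /xdisk paths. A sketch, run here under a temporary directory (substitute /xdisk/<username> on the HPC; the project and folder names are placeholders):

```shell
base="$(mktemp -d)"   # stands in for /xdisk/<username> on the HPC
mkdir -p "$base/project/images" "$base/project/products"
touch "$base/project/project.psx"   # placeholder Metashape project file
find "$base" -mindepth 1 | sort
```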

Install sshfs on Linux

sudo apt-get update 
sudo apt-get install -y sshfs

Install sshfs on Mac OS X

brew install sshfs

You will have to add permissions to the sshfs program in your Mac OS X Security & Privacy configuration before this will work.

sshfs (a FUSE file system) mounts your HPC /xdisk/<username> space onto a matching local /xdisk/<username> directory:

# create the local mount point first
sudo mkdir -p /xdisk/$USER && sudo chown $USER /xdisk/$USER
sshfs -o sshfs_debug [email protected]:/xdisk/$USER /xdisk/$USER

If you have DUO two-factor authentication, it will ask for a push or text-message confirmation after you've entered your UA NetID password.
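To confirm the mount succeeded before you start processing (`mountpoint` is part of util-linux; the unmount commands are the standard FUSE ones):

```shell
# check whether the local path is an active mount point
mountpoint -q /xdisk/$USER && echo "mounted" || echo "not mounted"

# when you're finished, unmount:
#   fusermount -u /xdisk/$USER    # Linux
#   umount /xdisk/$USER           # macOS
```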

Request processing nodes on the HPC

Prerequisite: You must have an account for your University ID on the UA HPC.

ssh [email protected]
#select puma, ocelote, or elgato
puma

For Puma with Slurm:

srun --nodes=1 --ntasks-per-node=2 --mem-per-cpu=4GB --time=01:00:00 --job-name=metashape-worker1 --account=<accountID> --partition=standard --gres=gpu:1 --pty bash -i

For Ocelote with PBS:

#Request GPU interactive node (with display)
qsub -X -I -N metashape-singularity -m bea -W group_list=tswetnam -q standard -l select=1:ncpus=28:mem=224gb:ngpus=1 -l cput=672:0:0 -l walltime=24:0:0

For El Gato with PBS:

#Request GPU interactive node (with display)
qsub -X -I -N metashape-singularity -m bea -W group_list=tswetnam -q standard -l select=1:ncpus=16:mem=224gb:ngpus=1 -l cput=384:0:0 -l walltime=24:0:0
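Instead of an interactive session, you can submit each worker as a batch job so it survives your SSH session. A sketch of a Slurm batch script for Puma under the same resource assumptions (the account ID, host IP, and image path are placeholders):

```shell
#!/bin/bash
#SBATCH --job-name=metashape-worker1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=2
#SBATCH --mem-per-cpu=4GB
#SBATCH --time=01:00:00
#SBATCH --account=<accountID>
#SBATCH --partition=standard
#SBATCH --gres=gpu:1

module load singularity
module load cuda11

singularity exec --nv agisoft-metashape.sif \
  /opt/metashape-pro/metashape.sh --node --host <host-server-IP> \
  --capability any --root /xdisk/$USER --platform offscreen
```

Submit it with `sbatch` and watch the queue with `squeue -u $USER`.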

Wait a minute (or ten) until the worker node becomes available and your prompt changes to show that you're off the login node and on the processing node.

Using Singularity

In a terminal window on the HPC, load Singularity (v3.7) and CUDA (v11):

module load singularity
module load cuda11

Remember, running the host (server) from the HPC is not suggested, because the job could be preempted and interrupt the whole network. If you do choose to run the host on the HPC anyway:


singularity exec --nv agisoft-metashape.sif /opt/metashape-pro/metashape.sh --server --host ${HOSTNAME} --root /xdisk/$USER

Start Singularity as a worker node on HPC or cloud


singularity exec --nv agisoft-metashape.sif /opt/metashape-pro/metashape.sh --node --host <host-server-IP> --capability any --root /xdisk/$USER --platform offscreen