Set up VS Code on OpenMind compute node (Method 02)

This page describes an alternative method for connecting to a compute node on OpenMind without having to note down the node name every time, as described here. With this method, you don't even have to log on to OpenMind first; everything can be done from your local machine.

Prerequisites:

  • https://github.mit.edu/MGHPCC/OpenMind/wiki/Configuring-password-less-SSH-login

One Time

  1. On your local machine, create a script named tunnel.sh (or whatever name you prefer) with the following contents (feel free to adjust the requested resources to your needs):
#!/bin/bash
#SBATCH --job-name="tunnel"
#SBATCH --partition=gablab
#SBATCH --nodes=1
#SBATCH --cpus-per-task=1
#SBATCH --gres=gpu:1
#SBATCH --mem=4G
#SBATCH --time=00:15:00     # walltime
#SBATCH --output="tunnel.out"
#SBATCH --error="tunnel.err"
#SBATCH --open-mode=append
#SBATCH [email protected]
#SBATCH --mail-type=BEGIN

# Ask the OS for a free port by binding to port 0, then release it
PORT=$(python -c 'import socket; s=socket.socket(); s.bind(("", 0)); print(s.getsockname()[1]); s.close()')

# Publish the port in the job's Comment field so squeue can report it
scontrol update JobId="$SLURM_JOB_ID" Comment="$PORT"

echo "********************************************************************"
echo "Starting sshd in Slurm"
echo "Date:" $(date)
echo "Allocated node:" $(hostname)
echo "Listening on:" $PORT
echo "********************************************************************"

# Run sshd in the foreground on the chosen port, using your key as the host key
/usr/sbin/sshd -D -p ${PORT} -f /dev/null -h ${HOME}/.ssh/id_rsa

NOTE: Replace id_rsa with the name of your private key file if you named it differently on your machine.
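Before wiring up your SSH config, you can optionally sanity-check the tunnel from the OpenMind login node once the job is running. A minimal sketch (substitute the node and port that the first command prints):

# Print the allocated node and the port stored in the job's Comment field
squeue --me --name=tunnel --states=R -h -O NodeList,Comment

# Connect through the tunnel directly (replace {NODE} and {PORT} with the values above)
ssh -p {PORT} {NODE} hostname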

  2. Open the ${HOME}/.ssh/config file on your local machine and add the following lines:
Host openmind-node   # name it whatever you want
    ProxyCommand ssh openmind "nc \$(squeue --me --name=tunnel --states=R -h -O NodeList,Comment)"  # see note below
    StrictHostKeyChecking no
    User hgazula  # of course, please change this to your username
    IdentitiesOnly yes
    PreferredAuthentications publickey
    PasswordAuthentication no
    IdentityFile ~/.ssh/id_rsa  # replace id_rsa with your private key file name

NOTE (Important):

  • The first part of the ProxyCommand should be the same command you use to connect to OpenMind. For some of you, it may be ssh {username}@openmind.mit.edu, or however you configured it in your config file. In my case, it is ssh openmind.
  • Note that --name=tunnel matches the job name specified in tunnel.sh.
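To see what the ProxyCommand actually resolves to, you can run its inner command by hand. The output below is purely illustrative (your node name and port will differ):

$ ssh openmind "squeue --me --name=tunnel --states=R -h -O NodeList,Comment"
node042             50123

nc then connects to node042 on port 50123, and your SSH session rides through that connection to the compute node.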

Every Time (whenever you want to connect to OpenMind)

  1. On your local machine, open a terminal (in VS Code or wherever), navigate to the folder containing your tunnel.sh, and run the following command:
scp tunnel.sh openmind:/destination/folder && ssh openmind "sbatch /destination/folder/tunnel.sh"

If I want to copy the file to my home directory, the command would look like scp tunnel.sh openmind:~ && ssh openmind "sbatch tunnel.sh". (There are ways to open a single SSH connection and run both the copy and sbatch commands; see the sketch below.) Jot down the JOBID (just in case), as it may come in handy when you finish your work.
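As an aside, here is one way to do the copy and the submission over a single SSH connection. This is just a sketch (it streams the script over stdin and writes it to your remote home directory), not part of the recipe above:

# Write tunnel.sh to the remote home directory and submit it, in one connection
ssh openmind 'cat > tunnel.sh && sbatch tunnel.sh' < tunnel.sh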

  2. Now go to VS Code and connect to openmind-node; you should land on the compute node. As a quick test (if you requested a GPU), run nvidia-smi in the terminal to confirm the GPU is visible. You can also test the connection from a plain terminal first, as sketched below.
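If VS Code has trouble connecting, it can help to test the same Host entry from a plain terminal first. A minimal check (nvidia-smi -L simply lists the GPUs visible on the node):

# Should print the compute node's hostname, then the allocated GPU(s)
ssh openmind-node "hostname && nvidia-smi -L"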

Final notes:

  1. Note that you may not have access to the compute node as soon as you run the scp ... && ssh ... command, since the job may sit in the queue for a while. Hence, I added the following lines to tunnel.sh to notify me when the job begins (i.e., when the node is available), so step 2 doesn't fail. Of course, please use your own e-mail address so I don't get your notifications. An e-mail-free alternative is sketched after this note.
#SBATCH [email protected]
#SBATCH --mail-type=BEGIN
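If you prefer not to rely on e-mail, one alternative (a sketch, not part of the original setup) is to poll from your local machine until the job reaches the RUNNING state:

# Block until the tunnel job is running, then proceed to VS Code
until ssh openmind "squeue --me --name=tunnel --states=R -h -O NodeList" | grep -q .; do
    sleep 5
done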
  2. You only have to re-run scp tunnel.sh openmind:~ when you change the requested resources. Otherwise, simply call ssh openmind "sbatch /path/to/tunnel.sh" and move on to VS Code (a convenience wrapper is sketched below).
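If you find yourself typing these commands often, a small shell function on your local machine can wrap both cases. This is a hypothetical helper (the name tunnel-up and the --sync flag are made up for illustration), assuming tunnel.sh lives in your remote home directory:

# Submit the tunnel job; pass --sync to re-copy the script first
tunnel-up() {
    if [ "$1" = "--sync" ]; then
        scp tunnel.sh openmind:~ || return 1
    fi
    ssh openmind "sbatch tunnel.sh"
}

After changing the requested resources, run tunnel-up --sync; otherwise plain tunnel-up is enough.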

  3. When you are done with your work, you don't need the sshd listening anymore. Simply run `ssh openmind "scancel {JOBID}"` to cancel the Slurm job (if you lost the JOBID, see the sketch below). This is important because releasing the resources is what a responsible user would do. :)
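If you did not jot down the JOBID, you can also cancel by job name. A sketch, assuming you kept the job name tunnel from tunnel.sh (single quotes so $USER expands on OpenMind):

# Cancel all of your jobs named "tunnel"
ssh openmind 'scancel --user=$USER --name=tunnel'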