# Using Bede
Bede is the N8's Tier 2 POWER- and GPU-based high-performance computing (HPC) platform.

Each GPU node has:

- 512GB DDR4 RAM
- 2x IBM POWER9 CPUs (and two NUMA nodes), with:
  - 4x NVIDIA V100 GPUs (2 per CPU)
  - each CPU connected to its two GPUs via high-bandwidth, low-latency NVLink interconnects (helpful if you need to move lots of data to/from GPU memory); see the topology check below

The system also includes:

- Inference nodes equipped with NVIDIA T4 GPUs
- 2x visualisation nodes
- 2x login nodes
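Once you have a session on a GPU node, you can inspect how the GPUs, CPUs, and NVLink connections are laid out with the standard NVIDIA tool (this is not Bede-specific):

```bash
# Show the GPU/CPU interconnect topology and NUMA affinity on the current node
nvidia-smi topo -m
```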
The filestore layout is similar to that of the ARC system. In addition to the locations below, there is also a 20GB project folder that persists.
- `/home`: 4.9TB shared (Lustre) drive for all users.
- `/nobackup`: 2PB shared (Lustre) drive for all users.
- `/tmp`: temporary local node SSD storage.
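As a quick sanity check, you can see how much space is free on the shared filesystems with standard tools:

```bash
# Check available space on the filesystems holding /home and /nobackup
df -h /home /nobackup
```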
Log in with SSH:

```bash
ssh -l <username> bede.dur.ac.uk
```
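If you connect often, an entry in the SSH config on your own machine can shorten the command (the `bede` alias below is just an example, not an official name):

```bash
# Append a convenience alias to your local SSH config; replace <username> with yours
cat >> ~/.ssh/config <<'EOF'
Host bede
    HostName bede.dur.ac.uk
    User <username>
EOF
```

After that, `ssh bede` is enough.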
Add the following line to your `.bashrc` file:

```bash
export cemac='<your project name>'
```
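Reload your shell configuration and check that the variable is set:

```bash
# Pick up the new variable in the current shell
source ~/.bashrc
echo "$cemac"
```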
To request an interactive session with one GPU (here for 20 minutes):

```bash
srun -A $cemac --job-name="prototyping" --gres=gpu:1 --time=00:20:00 --pty bash
```
Information on options and batch submission scripts can be found here.
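As a rough sketch (not an official template; check the Bede documentation for the correct partition and options for your workload), a batch job might look like this:

```bash
#!/bin/bash
# example.sbatch -- minimal illustrative batch script; file and script names are placeholders
#SBATCH --account=<your project name>   # same value you put in $cemac
#SBATCH --job-name=example
#SBATCH --gres=gpu:1                    # request one V100
#SBATCH --time=00:20:00                 # hh:mm:ss

module load cuda/10.2.89
python my_script.py                     # my_script.py is a placeholder
```

Submit it with `sbatch example.sbatch` and check its state with `squeue -u $USER`.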
Download and install Miniconda for ppc64le:

```bash
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-ppc64le.sh
bash Miniconda3-latest-Linux-ppc64le.sh
```
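The installer offers to initialise conda for your shell; if you skipped that step, you can load it manually (the path assumes the default install prefix of `~/miniconda3`):

```bash
# Make the conda command available in the current shell
source ~/miniconda3/etc/profile.d/conda.sh
```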
Load the CUDA and LLVM modules, create a working directory in your project space, and build a conda environment there:

```bash
# Load CUDA and LLVM
module load cuda/10.2.89
module load llvm/11.0.0

# Create a working directory in the project space and move into it
mkdir /nobackup/projects/$cemac/$USER
cd $_

# Create and activate a conda environment stored in that directory
conda create --prefix ./gpuenv python=3.7 --yes
conda activate /nobackup/projects/$cemac/$USER/gpuenv
conda install ipython

# Register Anaconda's main channel as a default and put IBM's ppc64le
# channel first in the search order
conda config --add default_channels https://repo.anaconda.com/pkgs/main
conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda/
```
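You can confirm the channel order took effect with:

```bash
# List the configured channels; the IBM channel should appear first
conda config --show channels
```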
You can check which linux-ppc64le Python packages are available by visiting here.
For example, to install TensorFlow with GPU support:

```bash
conda install -c conda-forge bazel
conda install tensorflow-gpu
conda install tensorflow-estimator --no-deps
```
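A quick way to check that the install can see the node's GPUs (assuming a TensorFlow 2.x build; run this inside an interactive GPU session):

```bash
# Should list the V100 devices visible to TensorFlow
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```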
This also works for non-Python packages, such as the AWS command-line interface:

```bash
conda install -c conda-forge awscli
```
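Once installed, the usual CLI entry point is on your path:

```bash
# Confirm the AWS CLI is available in the environment
aws --version
```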
To install GPflow and its TensorFlow dependencies:

```bash
conda install -c conda-forge bazel
conda install tensorflow-probability
conda install tensorflow-gpu
conda install tensorflow-estimator --no-deps
pip install gpflow --use-deprecated=legacy-resolver
```
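To verify the install, importing the package and printing its version is usually enough:

```bash
# Should print the installed GPflow version without errors
python -c "import gpflow; print(gpflow.__version__)"
```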