conda environment for pnl‐predict - pnlbwh/old-pnlpipe GitHub Wiki
pnlpipe/environment36_gpu.yml has been our universal GPU environment so far. However, it is incompatible with the newer NVIDIA driver and A6000 GPUs:
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.116.04    Driver Version: 525.116.04    CUDA Version: 12.0   |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  NVIDIA RTX A6000    Off  | 00000000:31:00.0 Off |                  Off |
| 33%   62C    P2   123W / 300W |  31784MiB / 49140MiB |     99%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  NVIDIA RTX A6000    Off  | 00000000:4B:00.0 Off |                  Off |
| 30%   62C    P2   152W / 300W |   1982MiB / 49140MiB |     72%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  NVIDIA RTX A2000    Off  | 00000000:CA:00.0 Off |                  Off |
| 30%   42C    P8    11W /  70W |    203MiB /  6138MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
```
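When deciding which environment a machine needs, the driver and CUDA versions in the `nvidia-smi` header are the relevant facts. The following is a minimal, hypothetical helper (not part of the pipeline) that extracts them from the header text shown above:

```python
import re

def parse_nvidia_smi(text):
    """Extract driver and CUDA versions from nvidia-smi header text."""
    driver = re.search(r"Driver Version:\s*([\d.]+)", text)
    cuda = re.search(r"CUDA Version:\s*([\d.]+)", text)
    return (driver.group(1) if driver else None,
            cuda.group(1) if cuda else None)

# Example with the header shown above:
header = "| NVIDIA-SMI 525.116.04    Driver Version: 525.116.04    CUDA Version: 12.0   |"
print(parse_nvidia_smi(header))  # ('525.116.04', '12.0')
```

In practice you would feed it the output of `nvidia-smi` captured via `subprocess`.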
So we tried to build the above environment on pnl-predict, but conda failed to resolve all dependencies of the recipe. We therefore followed a hybrid approach: install whatever resolves as part of the recipe, and install the rest outside the recipe via pip. @RyanZurrin discovered that CNN-Diffusion-MRIBrain-Segmentation/environment_gpu.yml can be used as the new base recipe and that PyTorch for HD-BET can be installed on top of it via pip. Tashrif adopted Ryan's idea as follows:
Use the following recipe:

```yaml
name: pnlpipe3-cuda10
channels:
  - conda-forge
dependencies:
  - python==3.6
  - tensorflow-gpu==1.12.0
  - cudatoolkit==9.0
  - cudnn==7.6.0
  - keras==2.2.4
  - nibabel>=2.2.1
  - pip
  - pip:
    - ipython
    - prompt-toolkit==2.0.1
    - gputil
    - scikit-image==0.16.2
    - git+https://github.com/pnlbwh/conversion.git
```
```bash
conda env create -f above_recipe.yaml
conda activate pnlpipe3-cuda10
```
Install PyTorch and the other requirements:
```bash
# Ryan discovered the use of -f and the following link
pip install torch==1.10.2+cu111 torchvision==0.11.3+cu111 -f https://download.pytorch.org/whl/torch_stable.html

# these are pnlNipype's requirements
pip install SimpleITK dipy==1.2.0 luigi==3.0.3 plumbum sqlalchemy psutil
```
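The `+cu111` suffix pins wheels built against CUDA 11.1; because NVIDIA drivers are backward compatible, a driver reporting CUDA 12.0 (as on pnl-predict above) can run them. A small illustrative sketch (helper names are ours, not part of any library) of that compatibility check:

```python
def wheel_cuda_version(spec):
    """Return the CUDA version a pip spec like 'torch==1.10.2+cu111' targets."""
    _, _, local = spec.partition("+")
    if local.startswith("cu"):
        digits = local[2:]
        return float(digits[:-1] + "." + digits[-1])  # 'cu111' -> 11.1
    return None

def driver_supports(wheel_cuda, driver_cuda):
    """Drivers run wheels built for a CUDA version <= their own."""
    return wheel_cuda is not None and wheel_cuda <= driver_cuda

print(wheel_cuda_version("torch==1.10.2+cu111"))  # 11.1
print(driver_supports(11.1, 12.0))                # True
```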
After a successful build, add `conda activate pnlpipe3-cuda10` to the pnlpipe3/bashrc3* pertinent to pnl-predict. Log out, log back in, and source this new bashrc3*. With the above environment sourced, run the following three tests to confirm everything is working:
- Test eddy_cuda:

```bash
cd /path/to/sub-12345/ses-1/dwi/
eddy_cuda10.2 \
    --imain=sub-12345_ses-1_acq-AP_dir-107_desc-Xc_dwi.nii.gz \
    --mask=sub-12345_ses-1_acq-AP_dir-107_desc-dwiXcUnCNNQc_mask.nii.gz \
    --bvecs=sub-12345_ses-1_acq-AP_dir-107_desc-Xc_dwi.bvec \
    --bvals=sub-12345_ses-1_acq-AP_dir-107_desc-Xc_dwi.bval \
    --out=/tmp/eddy_corrected_output \
    --acqp=/path/to/acqp.txt \
    --index=/path/to/index.txt \
    --data_is_shelled \
    --repol \
    --verbose
```
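eddy's `--index` file must contain one entry per DWI volume, each pointing at a row of the `--acqp` file (all `1` when every volume shares one acquisition). A hedged sketch, assuming a single acqp row, that derives the volume count from the `.bval` file (the function name is ours):

```python
def write_eddy_index(bval_path, index_path, acqp_row=1):
    """Write eddy's --index file: one acqp row number per DWI volume.
    The volume count is taken from the number of b-values."""
    with open(bval_path) as f:
        n_volumes = len(f.read().split())
    with open(index_path, "w") as f:
        f.write(" ".join(str(acqp_row) for _ in range(n_volumes)) + "\n")
    return n_volumes
```

For the 107-direction data above this would write 107 (plus b=0) entries of `1`.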
- Test HD-BET:

```bash
cd /path/to/sub-12345/ses-1/anat
hd-bet -i sub-12345_ses-1_desc-Xc_T1w.nii.gz -o hd-bet -mode fast
```
- Test dwi_masking:

```bash
cd /path/to/sub-12345/ses-1/dwi
dwi_masking.py -i caselist.txt -f /path/to/pnlpipe3/CNN-*/model_folder/
```
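The `caselist.txt` passed to `dwi_masking.py` lists one input image path per line (that format is our assumption; check the script's `--help`). A hypothetical stdlib helper for generating it from a directory:

```python
from pathlib import Path

def write_caselist(dwi_dir, caselist_path, pattern="*_dwi.nii.gz"):
    """Write one matching DWI image path per line; returns the case count.
    The one-path-per-line format is an assumption about dwi_masking.py."""
    cases = sorted(Path(dwi_dir).glob(pattern))
    Path(caselist_path).write_text("\n".join(str(c) for c in cases) + "\n")
    return len(cases)
```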
"sub-12345" and "ses-1" are placeholders; you should replace them with actual subject and session identifiers.
Please replace /path/to/ with the appropriate directory path for your setup.
We verified that the new environment also works on the older GPU machine pnl-oracle.