Segmentation of microscopy images
- Download the Python scripts from this GitHub repository, for example by using the git command:

  ```bash
  git clone https://github.com/NRC-Lund/multipark-aiml.git
  ```
- Create a Python environment, for example by using Anaconda:

  ```bash
  conda create -n image-segmentation python=3.11
  ```

  This will create a new environment called "image-segmentation" using Python version 3.11.
- Activate the environment and install the necessary packages:

  ```bash
  conda activate image-segmentation
  pip install --upgrade pip
  pip install -r ./multipark-aiml/image-segmentation/requirements.txt
  ```
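After the installation you can do a quick sanity check of the environment. This is a minimal sketch, assuming the requirements file pulls in PyTorch (which the YOLO training and inference scripts rely on); adjust the import if your installation differs:

```bash
# Quick check that the environment works, assuming PyTorch is installed
# via requirements.txt (used by the YOLO training/inference scripts).
python -c "import torch; print(torch.__version__); print('CUDA available:', torch.cuda.is_available())"
```

If CUDA is reported as unavailable, training will typically run on the CPU only, which is too slow for anything but small tests.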
- Please follow this guide to get access to COSMOS.
- Download the Python scripts from this GitHub repository, for example by using the git command:

  ```bash
  git clone https://github.com/NRC-Lund/multipark-aiml.git
  ```
- Create a conda environment and install the necessary packages:

  ```bash
  module load Anaconda3
  source config_conda.sh
  conda create -n image-segmentation python=3.11
  conda activate image-segmentation
  pip install --upgrade pip
  pip install -r ./multipark-aiml/image-segmentation/requirements.txt
  ```
- Create a shell script `mytask.sh` with the following content (see the LUNARC documentation for explanations):

  ```bash
  #!/bin/bash

  # wall time
  #SBATCH -t 00:50:00

  # output and error file
  #SBATCH -o result_%j.out
  #SBATCH -e result_%j.err

  #SBATCH -p gpua100

  cat $0

  # module load statements
  module load Anaconda3
  source config_conda.sh

  # python environment
  conda activate image-segmentation

  # run
  python ~/multipark-aiml/image-segmentation/train_yolo.py \
      --dataset-dir './th-cells/datasets/th-stained-dopamine-neurons-3' \
      --model '/lunarc/nobackup/projects/multipark-aiml/ImageSegmentation/models/th-stained-dopamine-neurons-v3-medium-exported.pt'
  ```
- Submit the task to the COSMOS queuing system:

  ```bash
  sbatch mytask.sh
  ```
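Once submitted, you can follow the job from the command line. The commands below are standard SLURM tools; the job ID shown is just a placeholder:

```bash
# Show your queued and running jobs (the job ID is printed by sbatch).
squeue -u $USER

# When the job has started or finished, inspect the output and error files
# defined in mytask.sh (replace 123456 with your actual job ID).
cat result_123456.out
cat result_123456.err
```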
The necessary environment is already set up on the Linux server Miman (alap700.srv.lu.se), which is managed by Pär Halje. This server has no GPUs, so it is not suitable for training, but it works fine for inference.
- E-mail Pär Halje to get a login to the server.
- Using a terminal, log in to the server:

  ```bash
  ssh alap700.srv.lu.se
  ```
- Go to the correct folder:

  ```bash
  cd /srv/data/Analysis/ImageSegmentation
  ```
- Activate the Python environment:

  ```bash
  source /srv/data/Resources/Python/anaconda3/bin/activate
  conda activate multipark
  ```
Below is an example showing how to use a model trained to identify TH-stained cells in substantia nigra:

```bash
python ./toolbox/infer_yolo.py \
    --model './models/th-stained-dopamine-neurons-v3-medium.pt' \
    --input-path './th-cells/images' \
    --output-path './th-cells/inference1' \
    --sliding-window --no-boxes --no-colors --no-display \
    --save-image --save-geojson \
    --conf '0.5'
```
The option `--input-path './th-cells/images'` tells the script to process all images in the folder `/srv/data/Analysis/ImageSegmentation/th-cells/images`. The option `--output-path './th-cells/inference1'` tells the script to save the results to the folder `/srv/data/Analysis/ImageSegmentation/th-cells/inference1`.
You can type `python ./toolbox/infer_yolo.py --help` to get more information on what the other options do.
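Because the example above uses `--save-geojson`, each processed image should produce a GeoJSON file in the output folder. The snippet below is a hedged sketch of how you might inspect those results; the file name is a placeholder, and it assumes the script writes a standard GeoJSON FeatureCollection with one feature per detected cell:

```bash
# List what infer_yolo.py wrote to the output folder.
ls ./th-cells/inference1

# Count the features (assumed to be one per detected cell) in one of the
# GeoJSON files; replace example.geojson with an actual file name.
python -c "import json; d = json.load(open('./th-cells/inference1/example.geojson')); print(len(d['features']))"
```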