Training DiSTNet2D
This page is under construction. For improvements, please add your comments to this discussion.
See this page to test DiSTNet2D on test datasets. Source code.
To train DiSTNet2D, a GPU is required (more than 16GB of memory is advised), along with Docker installed with GPU pass-through.
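A quick way to check that GPU pass-through works is to run nvidia-smi inside a CUDA-enabled container. Below is a minimal sketch in Python that shells out to Docker; the CUDA image tag is only an example (any CUDA base image works), so treat this as a convenience check rather than an official procedure.

```python
import subprocess

# Minimal GPU pass-through check: run nvidia-smi inside a CUDA container.
# The image tag is an example (assumption); any CUDA base image will do.
result = subprocess.run(
    ["docker", "run", "--rm", "--gpus", "all",
     "nvidia/cuda:12.2.0-base-ubuntu22.04", "nvidia-smi"],
    capture_output=True, text=True,
)
print(result.stdout if result.returncode == 0 else result.stderr)
```

If the GPU table is printed, Docker can see the GPU and training should be able to use it.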
Generate a curated training set
Here are a few resources:
- data visualization
- data curation
- create a selection of the parent object class (the viewfield, as in the DiSTNet examples, or microchannels in the case of microfluidic devices) that contains only curated objects. For tracking applications such as DiSTNet2D, try to include long contiguous parent tracks.
Configure Training
Once the data is curated, open the training tab from the run menu, choose a working directory in the right panel, and select DiSTNet2DTraining as the method on the left. The right panel contains a section to export a training set as an HDF5 file. In the left panel, select the previously exported dataset file in the dataset list.
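If you want to sanity-check the exported file, its contents can be listed with h5py. The sketch below makes no assumption about the internal layout (which is defined by bacmman's exporter); it simply walks the HDF5 tree and prints every dataset it finds, and the file name is a placeholder.

```python
import h5py

# Walk the exported training set and print every dataset with its shape
# and dtype. "training_set.h5" is a placeholder for the exported file.
with h5py.File("training_set.h5", "r") as f:
    f.visititems(
        lambda name, obj: print(name, obj.shape, obj.dtype)
        if isinstance(obj, h5py.Dataset)
        else None
    )
```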
Configure the training parameters (more detailed documentation is to come):
- set up data augmentation (it can be tested with the dedicated button)
- model architecture parameters, in particular the number of downsampling operations: choose it so that objects in the downsampled image are as small as possible while still occupying a few pixels (the number of downsampling operations is 2 for the bacterial example and 3 for the eukaryotic cell example; see the sketch after this list)
- number of epochs and steps, learning rate.
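To make the downsampling guideline concrete: each downsampling operation halves the spatial resolution, so an object of diameter d pixels spans about d / 2^n pixels after n operations. The sketch below just runs this arithmetic; the typical-diameter value is a placeholder, not a measurement from the DiSTNet examples.

```python
# Each downsampling operation halves the resolution, so an object of
# diameter d pixels spans about d / 2**n pixels after n operations.
# Pick the largest n that leaves the smallest objects a few pixels wide.
typical_diameter_px = 12  # placeholder: measure your smallest objects
for n in range(1, 5):
    print(f"{n} downsampling operation(s): "
          f"~{typical_diameter_px / 2**n:.1f} px per object")
```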
Train and export
If your machine has several GPUs, set the GPU used for training in the Docker Options
panel on the right.
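If you are unsure which index corresponds to which GPU, the mapping can be listed from Python, assuming TensorFlow is installed locally (the indices normally match the order reported by nvidia-smi; that the Docker Options panel expects such an index is an assumption):

```python
import tensorflow as tf

# List visible GPUs with their indices, e.g. "0 /physical_device:GPU:0".
for i, gpu in enumerate(tf.config.list_physical_devices("GPU")):
    print(i, gpu.name)
```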
Start training by clicking the corresponding button.
When training is finished, the model needs to be exported. Currently, the TensorFlow 2.7.1 image must be used for export, so that the saved model can be used for predictions in bacmman.
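For reference, the snippet below is a generic illustration of what exporting a Keras model to the TensorFlow SavedModel format looks like under TF 2.x; it is not bacmman's actual export code, and the model and path are placeholders.

```python
import tensorflow as tf

# Generic illustration only (not bacmman's export code): a trained Keras
# model is written to the SavedModel format by calling save() with a
# directory path (no file extension).
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.save("exported_model")  # writes a SavedModel directory
```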