# Train and use PixMClass
## PixMClass: a multiclass pixel classifier
PixMClass is a segmentation method based on a neural network that classifies pixels into background/foreground/contour categories, with very low training set requirements.
It has been described and used in the work *Near-infrared co-illumination of fluorescent proteins reduces photobleaching and phototoxicity*. Please cite this work when using this method.
It is similar to Ilastik's pixel classification procedure, but uses a deep neural network.
Make sure Docker is installed.
## Training Set
For cell segmentation, one typically defines 3 categories of pixels: inside cells, outside cells, and cell outer edges (the third category allows a better separation of neighboring cells).
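To make the role of the three classes concrete, here is a minimal sketch of how a per-pixel class map could be turned into separated cell labels. This is only an illustration, not BACMMAN's own post-processing (which is handled internally by the segmentation module); the array shape, channel order and function name are assumptions.

```python
import numpy as np
from scipy import ndimage

def labels_from_classes(prob):
    """prob: (H, W, 3) class probabilities, assumed channel order
    background / foreground / contour. Because contour pixels are not
    foreground, they separate touching cells before labeling."""
    classes = np.argmax(prob, axis=-1)            # winning class per pixel
    foreground = classes == 1                     # keep only 'inside cells' pixels
    labels, n_cells = ndimage.label(foreground)   # connected components = cells
    return labels, n_cells
```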
Generate a training set using the BACMMAN software:

- Create a new dataset from the online library: `username = jeanollion`, `folder = Seg3Class`, `configuration = TrainingSet`. For help see the doc.
- Import images:
    - from the configuration tab, set the import method that corresponds to your images
    - run the command `import/re-link images` from the `Run` menu: positions should appear in the position list
- Generate the training set by drawing areas of the `Background` and `Contour` object classes:
    - from the `data browsing` tab, right-click on a position and choose `open hyperstack`
    - choose the object class to edit by pressing `i`
    - use the selection brush tool to draw areas for each object class. Contours should be closed, with a line width of typically 1-3 pixels (double-click on the tool to set the size). See the doc on manual edition. Important: contours must not be drawn on foreground areas. Between cells in contact, draw a contour 1 pixel wide.
    - use `Shift + A` to display all object classes, with one color per object class
    - foreground can be generated by automatically filling the contours (illustrated by the sketch after this list): from the `home` tab, select the positions that have been edited, choose `Segment and Track` in the task panel and `Filled Contours` in the object panel, then from the `Run` menu choose `Run selected Tasks`.
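The `Filled Contours` task builds the foreground class from the drawn contours. As a rough illustration of the idea (not the actual BACMMAN implementation), closed contours can be filled with a standard hole-filling operation; the masks and names below are hypothetical.

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

def foreground_from_contours(contour_mask, background_mask):
    """Illustration of the 'filled contours' idea: every pixel enclosed by a
    closed contour, and not already labeled contour or background, becomes
    foreground. This only works if the drawn contours are actually closed."""
    filled = binary_fill_holes(contour_mask)              # contour + enclosed interior
    return filled & ~contour_mask & ~background_mask      # interior only
```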
## Model Training
- Make sure Docker is started.
- From the menu `Run`, choose `Train Neural Network`.
- Set a `Working Directory` that will be used to train the neural network (preferably an empty directory).
- In the `Configuration` panel, right-click on `Method` and choose `PixMClass`.
- Export the previously generated training set (from the `Extract Dataset` panel):
    - select the channel image(s) to export
    - select the 3 object classes of the training set
    - to export all the images that contain at least one segmented object, leave the `Selection` parameter to `NEW` with no position selected; otherwise specific positions or Selections can be chosen
    - click on the `Extract` button
- Set the training parameters:
    - set a name for the trained model (`Model Name` parameter of the `Training` section)
    - select the previously exported file (`File Path` in the first element of the `Dataset List` section)
    - refer to the help of the other parameters to tune them. In particular, the scaling parameter of the `Dataset List` > `Dataset` > `Data Augmentation` section should be adjusted to the image type / pixel distribution (see the sketch after this list)
    - click on the `Set + Write` button to save the parameters
- Start the training by clicking on the `Start` button of the `Training` panel.
- If training is stopped before the end, the model can be exported by clicking on the `Save Model` button.
- When training stops, click on `Export to Library` to export the trained model to your online library.
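Regarding the scaling parameter mentioned above: intensity scaling brings raw pixel values into a range the network can learn from, and it should match the pixel distribution of your images. The snippet below shows one common strategy (percentile-based normalization) purely as an illustration; it is not necessarily the exact option implemented by PixMClass, and the percentile values are placeholder defaults.

```python
import numpy as np

def percentile_scale(img, low_pct=0.1, high_pct=99.9):
    """Map the [low_pct, high_pct] intensity percentiles to [0, 1] so that
    images acquired with different settings reach the network in a comparable
    range. Percentile values are placeholders; choose them from your own
    pixel distribution."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    return np.clip((img.astype(np.float32) - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
```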
## Prediction
- This step currently requires installing Tensorflow, and can be performed with or without a GPU.
- Create a new dataset from the online library: `username = jeanollion`, `folder = Seg3Class`, `configuration = Prediction`.
- Import some images. Samples can be downloaded from the menu `Import/Export > Sample Datasets > Segmentation > Neutrophils`.
- From the `Configuration Test` tab, choose `Processing` in the `Step` panel and right-click on `Tensorflow Model` to choose `Configure from Library`, as in the screenshot. The library will open; choose the previously uploaded model and click on `Configure Parameter`.
- To test the prediction, right-click on `Segmentation algorithm` and choose `Test Segmenter` (a sketch of standalone prediction outside BACMMAN follows below).
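For reference, a trained model can in principle also be run outside BACMMAN. The sketch below assumes the exported model is loadable with Keras/TensorFlow and maps a scaled `(1, H, W, 1)` image to `(1, H, W, 3)` class probabilities; the path, input shape and channel order are assumptions, not guarantees about BACMMAN's export format.

```python
import numpy as np
import tensorflow as tf

# Hypothetical standalone prediction with an exported PixMClass model.
# Path, expected input shape and output channel order are assumptions.
model = tf.keras.models.load_model("exported_pixmclass_model", compile=False)

img = np.random.rand(256, 256).astype(np.float32)   # stand-in for a scaled input image
prob = model.predict(img[None, ..., None])           # -> (1, 256, 256, 3) probabilities
classes = np.argmax(prob[0], axis=-1)                # 0=background, 1=foreground, 2=contour (assumed)
```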