Train and use PixMClass - jeanollion/bacmman GitHub Wiki

PixMClass : a multiclass pixel classifier

PixMClass is a segmentation method based on a neural network that classifies pixels into background / foreground / contour categories, with very low training-set requirements.

It has been described and used in the work Near-infrared co-illumination of fluorescent proteins reduces photobleaching and phototoxicity. Please cite this work when using this method.

It is similar to the Ilastik pixel-classification procedure, but uses a deep neural network.

Make sure Docker is installed.

Training Set

For cell segmentation, one typically defines 3 categories of pixels: inside cells, outside cells, and cell outer edges (the third category allows better separation of neighboring cells).
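
The role of the contour class can be illustrated with a tiny connected-component sketch (pure Python for illustration, not BACMMAN's actual implementation): two foreground regions that touch only through contour pixels remain separate objects.

```python
def label_objects(classes):
    """classes[y][x] in {0: background, 1: foreground, 2: contour}.
    Connected foreground pixels form one object; contour pixels
    separate touching objects (the reason for the third class)."""
    h, w = len(classes), len(classes[0])
    labels = [[0] * w for _ in range(h)]
    next_id = 0
    for y in range(h):
        for x in range(w):
            if classes[y][x] == 1 and labels[y][x] == 0:
                next_id += 1
                labels[y][x] = next_id
                stack = [(y, x)]
                while stack:  # depth-first fill of one object
                    cy, cx = stack.pop()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and classes[ny][nx] == 1 and labels[ny][nx] == 0:
                            labels[ny][nx] = next_id
                            stack.append((ny, nx))
    return next_id, labels

# Two foreground runs separated by a single contour pixel -> two objects.
n, lab = label_objects([[1, 1, 2, 1, 1]])
print(n)  # 2
```

Without the contour class, the two cells would merge into a single connected component.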

Generate a training set using BACMMAN software:

  • create a new dataset from the online library, username = jeanollion, folder = Seg3Class, configuration = TrainingSet. For help see the doc.
  • import images:
    • from the configuration tab, set the import method that corresponds to your images;
    • run the command import/re-link images from the Run menu: the positions should appear in the position list
  • generate the training set by drawing areas for the Background and Contour object classes
    • from the data browsing tab, right-click on a position and choose open hyperstack
    • choose the object class to edit by pressing i
    • use the selection brush tool to draw areas for each object class. Contours should be closed, with a line width of typically 1-3 pixels (double-click on the tool to set the size). See the doc on manual editing. Important: contours must not be drawn on foreground areas. Between cells in contact, draw a 1-pixel-wide contour.
    • use Shift + A to display all object classes, with one color per object class.
    • foreground can be generated by filling the contours automatically: from the home tab, select the positions that have been edited, choose Segment and Track in the task panel and Filled Contours in the object panel, then choose Run selected Tasks from the Run menu.
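
The Filled Contours task derives the foreground class automatically: pixels enclosed by a closed contour, and not on the contour itself, become foreground. A minimal flood-fill sketch of that idea (pure Python; BACMMAN's own implementation may differ):

```python
from collections import deque

def fill_contours(contour, h, w):
    """Label pixels as 0=background, 1=foreground, 2=contour.
    `contour` is a set of (y, x) pixels forming closed outlines."""
    # Flood-fill the background from every border pixel not on a contour.
    queue = deque((y, x) for y in range(h) for x in range(w)
                  if (y in (0, h - 1) or x in (0, w - 1)) and (y, x) not in contour)
    outside = set(queue)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
            if 0 <= ny < h and 0 <= nx < w \
                    and (ny, nx) not in outside and (ny, nx) not in contour:
                outside.add((ny, nx))
                queue.append((ny, nx))
    # Anything not reached from the border and not a contour is foreground.
    return [[2 if (y, x) in contour else (0 if (y, x) in outside else 1)
             for x in range(w)] for y in range(h)]

# 5x5 image with a closed square contour; the single enclosed pixel becomes foreground.
square = {(1,1), (1,2), (1,3), (2,1), (2,3), (3,1), (3,2), (3,3)}
labels = fill_contours(square, 5, 5)
print(labels[2][2])  # 1 (foreground)
```

This is why the drawn contours must be closed: any gap lets the background flood fill leak inside, and no foreground is generated.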

Model Training

  • Make sure Docker is started.
  • From the menu Run choose Train Neural Network
  • Set a Working Directory that will be used to train the neural network (preferably an empty directory)
  • In the Configuration panel right-click on Method and choose PixMClass.
  • Export the previously generated training set (from the Extract Dataset panel):
    • Select the channel image(s) to export
    • Select the 3 object classes of the training set
    • To export all the images that contain at least one segmented object, leave the Selection parameter set to NEW with no position selected; otherwise a specific position or Selections can be chosen.
    • Click on the Extract button
  • Set the training parameters:
    • set a name for the trained model (Model Name parameter of the Training section)
    • select the previously exported file (File Path in the first element of the Dataset List section)
    • refer to the help of the other parameters to tune them. In particular, the scaling parameter of the Dataset List > Dataset > Data Augmentation section should be adjusted to the image type / pixel distribution
    • click on Set + Write button to save the parameters
  • Start the training by clicking on the Start button of the Training panel
  • If training is stopped before the end, the model can be exported by clicking on the Save Model button.
  • When training stops, click on Export to Library to export the trained model to your online library
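
Regarding the scaling parameter mentioned above: scaling typically normalizes intensities so the network sees comparable value ranges across images. A common choice is percentile-based min-max scaling (shown here as an illustrative sketch; check the parameter's built-in help for the options PixMClass actually offers):

```python
def percentile_scale(pixels, low_pct=1.0, high_pct=99.0):
    """Scale pixel values to [0, 1] using robust percentiles,
    so a few hot pixels do not dominate the dynamic range."""
    s = sorted(pixels)
    n = len(s)
    lo = s[int((n - 1) * low_pct / 100)]
    hi = s[int((n - 1) * high_pct / 100)]
    span = (hi - lo) or 1.0  # avoid division by zero on flat images
    return [min(1.0, max(0.0, (p - lo) / span)) for p in pixels]

raw = [10, 12, 11, 500, 13, 12, 11, 10, 12, 11]  # 500 is a hot pixel
scaled = percentile_scale(raw)
print(scaled[3])  # 1.0 (clipped, instead of compressing all other values)
```

Choosing a scaling mode that matches the pixel distribution matters because the same normalization is applied at prediction time: a mismatch between training and prediction ranges degrades results.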

Prediction

  • This step currently requires TensorFlow to be installed, and can be performed with or without a GPU.
  • Create a new dataset from the online library, username = jeanollion, folder = Seg3Class, configuration = Prediction.
  • Import some images. Samples can be downloaded from the menu Import/Export > Sample Datasets > Segmentation > Neutrophils
  • From the Configuration Test tab, choose Processing in the Step panel and right-click on Tensorflow Model to choose Configure from Library, as in the screenshot. The library will open; choose the previously uploaded model and click on Configure Parameter
  • To test the prediction, right-click on Segmentation algorithm and choose Test Segmenter
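
Conceptually, prediction assigns each pixel the class with the highest of the three network probabilities (argmax); the Segmenter then builds objects from the resulting label image. A minimal sketch of the argmax step, on hypothetical probabilities:

```python
def argmax_labels(prob_map):
    """prob_map[y][x] = (p_background, p_foreground, p_contour).
    Returns per-pixel class index: 0=background, 1=foreground, 2=contour."""
    return [[max(range(3), key=lambda c: probs[c]) for probs in row]
            for row in prob_map]

# Hypothetical 1x3 probability map produced by the network:
probs = [[(0.8, 0.1, 0.1), (0.1, 0.7, 0.2), (0.2, 0.3, 0.5)]]
print(argmax_labels(probs))  # [[0, 1, 2]]
```

The Test Segmenter view lets you inspect intermediate images of this pipeline, which helps diagnose whether errors come from the model's probabilities or from the post-processing.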