Home
ZeroCostDL4Mic is a toolbox for training and applying common deep learning approaches to microscopy imaging. It exploits the ease of use and the GPU access provided by Google Colab.
Training data can be uploaded to Google Drive, from where it can be used to train models in a web browser using the provided Colab notebooks. Inference (prediction) on unseen data can then be performed within the same notebook, so no local hardware or software set-up is required.
Video resources: Running a ZeroCostDL4Mic notebook, Example data in ZeroCostDL4Mic, Romain's talk @ Aurox conference, Talk @ SPAOM.
ZeroCostDL4Mic provides fully annotated, Colab-optimised Jupyter notebooks for popular pre-existing networks. These cover a range of important image analysis tasks (e.g. segmentation, denoising, restoration, label-free prediction). There are three types of implemented networks:
- Fully supported - considered mature and extensively tested by our team.
- Under beta-testing - early prototypes that may not be stable yet.
- Contributed - networks that follow the ZeroCostDL4Mic guidelines and were contributed by community members. Although the core ZeroCostDL4Mic team does not maintain these networks, we work with their developers with the goal of providing researchers with a similar workflow experience and quality control.
Both the fully supported and the beta-testing notebooks can be opened directly from GitHub into Colab by clicking the respective links in the tables below. You will need to save a copy to your Google Drive in order to modify and keep the notebooks. Once a notebook is open in Colab, follow its instructions to install the relevant packages, load the training dataset, train, run quality control on test data, and perform inference (predictions) on unseen data.
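Several of the notebooks that use paired training data (e.g. CARE, fnet, pix2pix) expect two folders whose files share identical names. As a sketch of preparing and sanity-checking such a dataset before uploading it to Google Drive (folder and file names below are illustrative; check your chosen notebook for the exact layout it expects):

```python
import tempfile
from pathlib import Path

# Build a toy dataset in a paired-folder layout: a source folder and a
# target folder whose image files share the same names. The folder names
# "Training_source"/"Training_target" are illustrative.
root = Path(tempfile.mkdtemp())
source = root / "Training_source"
target = root / "Training_target"
source.mkdir()
target.mkdir()
for name in ["img_001.tif", "img_002.tif", "img_003.tif"]:
    (source / name).touch()   # e.g. low-SNR input images
    (target / name).touch()   # matching high-SNR ground truth

# Check the pairing: every source image must have a target image
# with the identical filename.
source_names = sorted(p.name for p in source.iterdir())
target_names = sorted(p.name for p in target.iterdir())
missing = set(source_names) - set(target_names)
assert not missing, f"unpaired source images: {missing}"
print(f"{len(source_names)} matched image pairs")
```

Running a check like this locally catches mismatched filenames before a training run fails in Colab.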
With the exception of the U-Net training data, we provide training and test datasets generated by our labs; these can be downloaded from Zenodo using the links below. The U-Net data was obtained from the ISBI segmentation challenge.
!! The ISBI website hosting the U-Net example dataset currently appears to be broken, so please use the alternative link provided (more info on the dedicated U-Net page) !!
Network | Paper(s) | Task | Link to example training and test dataset | Direct link to notebook in Colab |
---|---|---|---|---|
U-Net (2D) | here and here | Segmentation | ISBI challenge or here | |
U-Net (3D) | here | Segmentation | EPFL dataset | |
StarDist (2D) | here and here | Nuclei segmentation | here | |
StarDist (3D) | here and here | Nuclei segmentation | from the StarDist GitHub | |
Noise2Void (2D) | here | Denoising | here | |
Noise2Void (3D) | here | Denoising | here | |
CARE (2D) | here | Denoising | here | |
CARE (3D) | here | Denoising | here | |
Label-free prediction (fnet) 2D | here | Artificial labelling | here | |
Label-free prediction (fnet) 3D | here | Artificial labelling | here | |
Deep-STORM | here | Single Molecule Localization Microscopy (SMLM) image reconstruction from high-density emitter data | Training data simulated in the notebook or available from here | |
CycleGAN | here | Unpaired Image-to-Image Translation | here | |
pix2pix | here | Paired Image-to-Image Translation | here | |
YOLOv2 | here | Object detection (bounding boxes) | here | |
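Deep-STORM's training data can be simulated within the notebook itself. The underlying idea can be sketched in plain numpy as follows (the image size, emitter density and PSF width below are illustrative choices, not the notebook's actual simulation parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_frame(size=64, n_emitters=30, sigma=1.5):
    """Render one synthetic high-density SMLM frame: random emitter
    positions blurred by an illustrative Gaussian PSF of width `sigma` px."""
    frame = np.zeros((size, size))
    xs = rng.uniform(0, size, n_emitters)
    ys = rng.uniform(0, size, n_emitters)
    yy, xx = np.mgrid[0:size, 0:size]
    for x, y in zip(xs, ys):
        frame += np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    # Return the rendered frame and the ground-truth emitter positions,
    # which together form one (input, target) training example.
    return frame, np.column_stack([xs, ys])

frame, positions = simulate_frame()
print(frame.shape, len(positions))
```

Because both the dense frame and the exact emitter positions are known, simulation yields unlimited paired training data without any experimental ground truth.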
Networks currently under beta-testing:
Network | Paper(s) | Task | Link to example training and test dataset | Direct link to notebook in Colab |
---|---|---|---|---|
DenoiSeg | here | Joint denoising and segmentation | Available soon | |
3D-RCAN | here | Denoising | Available soon | |
SplineDist | here | Instance segmentation | Coming soon! | |
Detectron2 | here | Object detection (bounding boxes) | here | |
RetinaNet | here | Object detection (bounding boxes) | here | |
DRMIME | here | Affine or perspective image registration | Coming soon! | |
Cellpose (2D) | here | Cells or Nuclei segmentation | Coming soon! | |
DecoNoising (2D) | here | Denoising | here | |
Interactive Segmentation - Kaibu | here | Interactive instance segmentation | Coming soon! | |
MaskRCNN | here | Instance segmentation | Coming soon! | |
U-Net (2D) multilabel | here and here | Semantic segmentation | here | |
Networks that are compatible with BioImage.IO and can be used in ImageJ via deepImageJ.
Network | Paper(s) | Task | Link to example training and test dataset | Direct link to notebook in Colab |
---|---|---|---|---|
StarDist (2D) with DeepImageJ export | StarDist: here and here, and DeepImageJ | Nuclei segmentation | here | |
Deep-STORM with DeepImageJ export | Deep-STORM and DeepImageJ | Single Molecule Localization Microscopy (SMLM) image reconstruction from high-density emitter data | Training data simulated in the notebook or available from here | |
U-Net (2D) with DeepImageJ export | U-Net and DeepImageJ | Segmentation | ISBI challenge or here | |
U-Net (3D) with DeepImageJ export | 3D U-Net and DeepImageJ | Segmentation | EPFL dataset | |
Additional utility notebooks:
Network | Paper(s) | Task | Link to example training and test dataset | Direct link to the notebook in Colab |
---|---|---|---|---|
Augmentor | here | Image augmentation | None | |
Quality Control | Available soon | Error mapping and quality metrics estimation | None | |
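The Quality Control notebook estimates quality metrics by comparing model predictions against ground-truth images. Two standard such metrics, NRMSE and PSNR, can be sketched in plain numpy (these are the textbook formulas, not code taken from the notebook):

```python
import numpy as np

def nrmse(gt, pred):
    """Root-mean-square error normalised by the ground-truth intensity range."""
    rmse = np.sqrt(np.mean((gt.astype(float) - pred.astype(float)) ** 2))
    return rmse / (gt.max() - gt.min())

def psnr(gt, pred, data_range=255.0):
    """Peak signal-to-noise ratio in dB for images on a 0..data_range scale."""
    mse = np.mean((gt.astype(float) - pred.astype(float)) ** 2)
    return np.inf if mse == 0 else 10 * np.log10(data_range ** 2 / mse)

# Compare a synthetic ground-truth image against a noisy "prediction".
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, (32, 32))
noisy = np.clip(gt + rng.normal(0, 10, gt.shape), 0, 255)
print(f"NRMSE: {nrmse(gt, noisy):.3f}, PSNR: {psnr(gt, noisy):.1f} dB")
```

Lower NRMSE and higher PSNR indicate predictions closer to the ground truth; a per-pixel squared-error map from the same quantities gives the kind of error mapping the table refers to.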
We welcome network contributions from the research community. If you wish to contribute, please read our guidelines first. A list of contributed networks will be available soon.
The figure below shows some representative datasets that we provide as examples for use with the notebooks. How to acquire similar datasets is described in the pages listed in the sidebar of this page, as well as in the Supplementary Information of our paper here.
If you want the latest fully supported and beta-testing releases of all the notebooks as a set of individual files, you can download them from here as a single compressed folder.
Contributors:
- Lucas von Chamier
- Johanna Jukkala
- Christoph Spahn
- Martina Lerche
- Sara Hernández-Pérez
- Pieta K. Mattila
- Eleni Karinou
- Seamus Holden
- Ahmet Can Solak
- Alexander Krull
- Tim-Oliver Buchholz
- Florian Jug
- Loïc A Royer
- Mike Heilemann
- Romain F. Laine
- Guillaume Jacquemet
- Ricardo Henriques