CycleGAN

Unpaired image-to-image translation using CycleGAN

CycleGAN is a method that can capture the characteristics of one image domain and learn how these characteristics can be translated into another image domain, all in the absence of any paired training examples (e.g. transforming a horse into a zebra, or apples into oranges). While CycleGAN can potentially be used for any type of image-to-image translation, we illustrate that it can be used to predict what a fluorescent label would look like when imaged using another imaging modality.
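
The trick that makes unpaired training possible is the cycle-consistency loss: an image translated to the other domain and back should match the original. Below is a minimal sketch of that idea in PyTorch (the framework used by the implementation cited below); the two generators are toy placeholder layers rather than the real ResNet-based generators, and all names are illustrative.

```python
import torch
import torch.nn as nn

# Toy placeholder generators, just to illustrate the loss structure:
# G translates domain A -> B, F translates domain B -> A.
# The actual implementation uses ResNet-based generators instead.
G = nn.Conv2d(3, 3, kernel_size=3, padding=1)
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)

l1 = nn.L1Loss()

real_A = torch.rand(1, 3, 256, 256)  # unpaired image from domain A
real_B = torch.rand(1, 3, 256, 256)  # unpaired image from domain B

# Forward cycle: A -> B -> A, and backward cycle: B -> A -> B.
rec_A = F(G(real_A))
rec_B = G(F(real_B))

# Cycle-consistency loss: each image should survive a round trip.
# This constraint replaces the pixel-wise supervision that a paired
# dataset would normally provide.
cycle_loss = l1(rec_A, real_A) + l1(rec_B, real_B)
print(cycle_loss.item())
```

During real training this term is combined with the adversarial losses of the two discriminators; the sketch above isolates only the cycle-consistency part.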

Important disclaimer

Our CycleGAN notebook is based on the following paper:

Zhu, J.-Y., Park, T., Isola, P. and Efros, A. A. "Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks", ICCV 2017 (https://arxiv.org/abs/1703.10593)

The source code of the CycleGAN PyTorch implementation can be found at: https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix

Please also cite this original paper when using or developing our notebook.

Data required to train CycleGAN

To train CycleGAN, you only need two folders containing PNG images. The images do not need to be paired. The provided training dataset is already split into two folders, called Training_source and Training_target.

  • While you do not need paired images to train CycleGAN, if possible, we strongly recommend that you generate a paired dataset. This means that the same image needs to be acquired in the two conditions. These images can be used to assess the quality of your trained model (Quality control dataset). The quality control assessment can be done directly in the notebook.

  • Please note that you can currently only use .PNG files! (A quick sanity check for this is sketched after this list.)
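
Before training, it can save time to verify that both folders really contain only .PNG files. The snippet below is an illustrative sketch only: the root path is hypothetical (point it at your own data, e.g. your mounted Google Drive in Colab), while the folder names match the provided training dataset.

```python
from pathlib import Path

# Hypothetical dataset location - adjust to where your data actually lives.
root = Path("/content/gdrive/MyDrive/CycleGAN_dataset")

for folder in ("Training_source", "Training_target"):
    path = root / folder
    if not path.is_dir():
        print(f"{folder}: folder not found under {root}")
        continue
    files = [f for f in path.iterdir() if f.is_file()]
    pngs = [f for f in files if f.suffix.lower() == ".png"]
    print(f"{folder}: {len(pngs)} PNG files out of {len(files)} files")
    # Any non-PNG files will not be usable by the notebook.
    for f in files:
        if f.suffix.lower() != ".png":
            print(f"  not a PNG: {f.name}")
```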

Sample preparation and image acquisition

Coming soon...

Training CycleGAN in Google Colab

| Network  | Link to example training and test dataset | Direct link to notebook in Colab |
| -------- | ----------------------------------------- | -------------------------------- |
| CycleGAN | here                                       | Open In Colab                    |

or:

To train CycleGAN in Google Colab: