pix2pix

Paired image-to-image translation using pix2pix

pix2pix is a deep-learning method that translates one type of image into another. While pix2pix can in principle perform any kind of image-to-image translation, here we demonstrate its use for predicting one fluorescent image from another fluorescent image.
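For context, the training objective from the original pix2pix paper combines a conditional GAN loss with an L1 reconstruction term. The sketch below reproduces that objective, where G is the generator, D the discriminator, x the input image, y the target image, z a noise vector, and λ a weighting hyperparameter:

```latex
\mathcal{L}_{cGAN}(G,D) = \mathbb{E}_{x,y}[\log D(x,y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x,z)))]
\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\big[\lVert y - G(x,z) \rVert_1\big]
G^{*} = \arg\min_{G}\max_{D}\; \mathcal{L}_{cGAN}(G,D) + \lambda\,\mathcal{L}_{L1}(G)
```

The L1 term encourages the generated image to stay close to the ground-truth target, while the adversarial term pushes it to look realistic.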

Important disclaimer

Our pix2pix notebook is based on the following paper:

Image-to-Image Translation with Conditional Adversarial Networks, by Phillip Isola, Jun-Yan Zhu, Tinghui Zhou and Alexei A. Efros, published in the proceedings of CVPR 2017 (arXiv:1611.07004)

Please also cite the original paper when using or adapting our notebook.

Data required to train pix2pix

For pix2pix to train, it needs access to a paired training dataset: the same image must be acquired under both conditions, and the correspondence between the two versions must be indicated. The data structure therefore matters. All input images must be in one folder and all target images in a separate folder. The provided training dataset is already split into two folders, called Training_source and Training_target.

  • We strongly recommend that you generate extra paired images. These can be used to assess the quality of your trained model (the quality control dataset), and this quality control assessment can be performed directly in the pix2pix notebook.

  • Corresponding input and target files must have the same file name.

  • Please note that currently only RGB .png files are supported. A short sanity-check script covering these requirements is sketched after this list.
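As a quick sanity check before training, the following minimal Python sketch (an illustration, not part of the official notebook) verifies the requirements above: matching folder layout, matching file names, and RGB .png images. The folder names follow the provided training dataset; adjust the paths to your own data.

```python
from pathlib import Path
from PIL import Image  # pip install pillow

# Folder names follow the provided training dataset; adjust as needed.
source_dir = Path("Training_source")
target_dir = Path("Training_target")

def png_names(folder: Path) -> set[str]:
    """Collect the names of all .png files in a folder (case-insensitive)."""
    return {p.name for p in folder.iterdir() if p.suffix.lower() == ".png"}

source_files = png_names(source_dir)
target_files = png_names(target_dir)

# Requirement: every input image needs a target image with the same name.
for name in sorted(source_files ^ target_files):
    print(f"Unpaired file: {name}")

# Requirement: images must be RGB .png files.
for name in sorted(source_files & target_files):
    for folder in (source_dir, target_dir):
        with Image.open(folder / name) as img:
            if img.mode != "RGB":
                print(f"{folder / name} is mode {img.mode}, expected RGB")
```

If the script prints nothing, the dataset layout matches what the notebook expects.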

Sample preparation and image acquisition

Coming soon...

Training pix2pix in Google Colab

| Network | Link to example training and test dataset | Direct link to notebook in Colab |
| ------- | ------------------------------------------ | -------------------------------- |
| pix2pix | here | Open In Colab |

Alternatively, to train pix2pix in Google Colab: