lucpaul edited this page Mar 27, 2020 · 14 revisions

Restore images using CARE:

Content-aware image restoration (CARE) is a deep-learning method for restoring corrupted bio-images. The network can denoise and improve the resolution of 2D and 3D images, and is trained in a supervised manner. What the network learns is determined entirely by the set of images provided in the training dataset: for instance, if noisy images are provided as inputs and high signal-to-noise ratio images as targets, the network will learn to denoise.

This page contains information to help you train CARE networks in Google Colab using your own images.

Important disclaimer

CARE was described in 2018 by Weigert et al. in Nature Methods

The original CARE code and documentation are freely available on GitHub.

Please also cite the original paper when training CARE with our notebooks.

Data required to train CARE

To train a CARE network, you need a dataset containing matching image pairs, for instance low signal-to-noise ratio (SNR) images and high SNR images (see example below).
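As a toy illustration of what "matching image pairs" means in practice, the NumPy sketch below simulates a high SNR target and a noisier low SNR version of the same structure, then cuts spatially aligned patch pairs from them. The synthetic data and the `paired_patches` helper are purely illustrative and are not part of the CARE codebase:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "high SNR" target image (stand-in for a real acquisition)
high_snr = rng.poisson(100, size=(256, 256)).astype(np.float32)

# Synthetic "low SNR" input: same structure, lower signal, more shot noise
low_snr = rng.poisson(high_snr / 20).astype(np.float32)

def paired_patches(source, target, patch=64, n=8, seed=1):
    """Extract n spatially aligned patch pairs from a matched image pair."""
    assert source.shape == target.shape, "pairs must be pixel-aligned"
    r = np.random.default_rng(seed)
    h, w = source.shape
    pairs = []
    for _ in range(n):
        y = int(r.integers(0, h - patch))
        x = int(r.integers(0, w - patch))
        pairs.append((source[y:y + patch, x:x + patch],
                      target[y:y + patch, x:x + patch]))
    return pairs

pairs = paired_patches(low_snr, high_snr)
print(len(pairs), pairs[0][0].shape)
```

The key point is that each low SNR patch is cut at exactly the same (y, x) position as its high SNR partner, which is why the images must be pixel-aligned before training.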

Sample preparation and image acquisition

The dataset provided as an example with our notebooks was generated to denoise live-cell structured illumination microscopy (SIM) imaging data. Briefly, DCIS.COM lifeact-RFP cells (Jacquemet et al, 2017) were plated on high tolerance glass-bottom dishes (MatTek Corporation, coverslip #1.7) pre-coated with Poly-L-lysine (10 mg/ml, 1 h at 37 °C) and allowed to reach confluence. Cells were then fixed and permeabilized simultaneously using a solution of 4% (wt/vol) paraformaldehyde and 0.25% (vol/vol) Triton X-100 for 10 min. Cells were then washed with PBS, quenched using a solution of 1 M glycine for 30 min, and incubated with phalloidin-488 (1/200 in PBS; Cat number: A12379; Thermo Fisher Scientific) at 4 °C until imaging (overnight). Just before imaging using SIM, samples were washed three times in PBS and mounted in Vectashield (Vector Laboratories).

The SIM system used was a DeltaVision OMX v4 (GE Healthcare Life Sciences) fitted with a 60x Plan-Apochromat objective lens, 1.42 NA (immersion oil RI of 1.516), used in SIM illumination mode (5 phases, 3 rotations). Emitted light was collected on a front-illuminated pco.edge sCMOS camera (pixel size 6.5 µm, readout speed 95 MHz; PCO AG) controlled by SoftWorx.

In the provided dataset, the high signal-to-noise ratio images were acquired from the phalloidin-488 staining using acquisition parameters optimized to obtain the best SIM images possible (in this case, 50 ms exposure time, 10% laser power). In contrast, the low signal-to-noise ratio images were acquired from the lifeact-RFP channel using acquisition parameters more suitable for live-cell imaging (in this case, 100 ms exposure time, 1% laser power). The dataset provided with the 2D CARE notebooks consists of maximum intensity projections of the collected data.
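A maximum intensity projection simply keeps, for each (y, x) position, the brightest value along the axial (z) axis of the stack. A minimal NumPy sketch (the toy stack below stands in for a real acquired volume):

```python
import numpy as np

# Toy 3D stack with axes (z, y, x); values increase with z,
# so the projection equals the last z-slice here
stack = np.arange(2 * 3 * 3, dtype=np.float32).reshape(2, 3, 3)

# Maximum intensity projection along z collapses the stack to a 2D image
mip = stack.max(axis=0)

print(mip.shape)  # → (3, 3)
```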

Training CARE in Google Colab

To train CARE in Google Colab:

| Network | Link to example training and test dataset | Direct link to notebook in Colab |
| --- | --- | --- |
| CARE (2D) | here | Open In Colab |
| CARE (3D) | here | Open In Colab |

or:
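However you open the notebooks, the training source and target folders must contain image pairs with matching filenames. A small sanity check for this, written with the Python standard library (the folder layout, `.tif` extension, and `check_pairing` helper are illustrative assumptions, not part of the notebooks themselves):

```python
from pathlib import Path

def check_pairing(source_dir, target_dir):
    """Return filenames present in both folders and report unmatched ones.

    Assumes one .tif file per image, named identically in the two folders;
    adapt the pattern to your own file format if it differs.
    """
    src = {p.name for p in Path(source_dir).glob("*.tif")}
    tgt = {p.name for p in Path(target_dir).glob("*.tif")}
    unmatched = src ^ tgt  # symmetric difference: files missing a partner
    if unmatched:
        print("Unpaired files:", sorted(unmatched))
    return sorted(src & tgt)
```

Running this before training makes it easy to spot a source image whose target was accidentally renamed or left out.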