Architecture Research
Architectures
ChestX-ray8
This paper presents the ChestX-ray8 dataset and compares different models that were pretrained on ImageNet.
The AlexNet, GoogLeNet, VGGNet, and ResNet models were modified by removing the classification layer and appending a transition layer (which transforms the activations of the pretrained model to a uniform dimension), a global average pooling (GAP) layer, and a final prediction layer.
The model based on ResNet-50 gave the best results in their experiments.
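A rough sketch of this kind of modification is shown below, assuming PyTorch and torchvision (the paper does not prescribe a framework): the pretrained classifier is dropped and replaced by a 1x1 transition convolution, global average pooling, and a multi-label prediction layer. Layer dimensions and the class count are illustrative.

```python
# Hypothetical sketch: adapting a pretrained ResNet-50 in the way the
# ChestX-ray8 paper describes -- drop the ImageNet classifier, add a 1x1
# "transition" conv to a uniform dimension, global average pooling, and a
# multi-label prediction layer.
import torch
import torch.nn as nn
from torchvision import models


class ChestXray8Model(nn.Module):
    def __init__(self, num_classes: int = 8, transition_dim: int = 1024):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V1")
        # keep everything up to (but not including) the avgpool/fc layers
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        # transition layer: map backbone activations to a uniform dimension
        self.transition = nn.Conv2d(2048, transition_dim, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool2d(1)           # global average pooling
        self.classifier = nn.Linear(transition_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.transition(x)
        x = self.pool(x).flatten(1)
        return torch.sigmoid(self.classifier(x))      # multi-label probabilities


probs = ChestXray8Model()(torch.randn(2, 3, 224, 224))  # -> shape (2, 8)
```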
CheXNet
CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning (25-12-2017)
CheXNet was the first DenseNet [1] used to perform pneumonia detection on chest X-rays.
The model consists of a total of 121 layers, and its weights were initialized from a model pretrained on ImageNet. The final fully connected layer was replaced with a single output layer.
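A minimal sketch of this setup, assuming PyTorch and torchvision (not specified in the text): a pretrained DenseNet-121 whose classifier is swapped for a single sigmoid output.

```python
# Minimal sketch of the CheXNet setup: DenseNet-121 pretrained on ImageNet,
# with the classifier replaced by a single sigmoid output for pneumonia detection.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights="IMAGENET1K_V1")
num_features = model.classifier.in_features            # 1024 for DenseNet-121
model.classifier = nn.Sequential(
    nn.Linear(num_features, 1),                         # single output unit
    nn.Sigmoid(),                                       # probability of pneumonia
)

prob = model(torch.randn(1, 3, 224, 224))               # -> shape (1, 1)
```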
Exploiting dependencies among labels
Learning to diagnose from scratch by exploiting dependencies among labels (01-02-2018)
This paper used a densely connected convolutional network, similar to DenseNet, as an image encoder (modified for medical applications, e.g. by using a higher input image resolution) together with an LSTM decoder. The input image is fed into the encoder, which encodes it into a vector. This vector captures higher-order semantics that are used in the decoding task.
The model was trained from scratch to better capture application-specific features.
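A much-simplified sketch of the encoder/decoder idea, assuming PyTorch and torchvision: the paper's decoder conditions each step on the previously predicted labels, whereas here only the recurrent state carries the label dependency, and all layer sizes are illustrative.

```python
# Simplified sketch: a DenseNet-style encoder produces an image vector,
# and an LSTM decodes one label per step so that later predictions can
# depend (via the recurrent state) on earlier ones.
import torch
import torch.nn as nn
from torchvision import models


class EncoderDecoder(nn.Module):
    def __init__(self, num_labels: int = 14, hidden_dim: int = 256):
        super().__init__()
        densenet = models.densenet121(weights=None)    # trained from scratch in the paper
        self.encoder = nn.Sequential(densenet.features, nn.AdaptiveAvgPool2d(1))
        self.project = nn.Linear(1024, hidden_dim)     # image vector -> LSTM input
        self.decoder = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.output = nn.Linear(hidden_dim, 1)         # one binary decision per step
        self.num_labels = num_labels

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        vec = self.encoder(images).flatten(1)          # (B, 1024) image encoding
        step = self.project(vec).unsqueeze(1)          # (B, 1, hidden_dim)
        # feed the same encoding at every step; the LSTM state carries label dependencies
        seq = step.repeat(1, self.num_labels, 1)
        hidden, _ = self.decoder(seq)
        return torch.sigmoid(self.output(hidden)).squeeze(-1)   # (B, num_labels)


preds = EncoderDecoder()(torch.randn(2, 3, 224, 224))           # -> shape (2, 14)
```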
CheXNeXt
This paper also uses a 121-layer DenseNet architecture and aims to improve on the CheXNet network.
CheXpert
CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison (21-01-2019)
This paper introduced a new dataset and experimented with several convolutional neural network architectures, including ResNet152, DenseNet121, Inception-v4, and SEResNeXt101, and found that the DenseNet121 architecture produced the best results.
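An illustrative sketch of how such a backbone comparison could be set up with a shared multi-label head, assuming PyTorch and torchvision: only the backbones available in torchvision are shown here, since Inception-v4 and SEResNeXt101 would need a third-party library such as timm.

```python
# Illustrative sketch: build several ImageNet-pretrained backbones with the
# same number of multi-label outputs, so they can be trained and compared
# under identical conditions.
import torch.nn as nn
from torchvision import models

NUM_OBSERVATIONS = 14   # CheXpert labels


def build_candidate(name: str) -> nn.Module:
    if name == "resnet152":
        net = models.resnet152(weights="IMAGENET1K_V1")
        net.fc = nn.Linear(net.fc.in_features, NUM_OBSERVATIONS)
    elif name == "densenet121":
        net = models.densenet121(weights="IMAGENET1K_V1")
        net.classifier = nn.Linear(net.classifier.in_features, NUM_OBSERVATIONS)
    else:
        raise ValueError(f"unsupported backbone: {name}")
    return net


candidates = {name: build_candidate(name) for name in ("resnet152", "densenet121")}
```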
Multi-task Learning
Multi-task Learning for Chest X-ray Abnormality Classification on Noisy Labels (05-05-2019)
The architecture in this paper was inspired by the DenseNet architecture; it uses 5 dense blocks and a total of 121 layers. The DenseNet performs the encoding (classification net) and is followed by global upsampling, from which the masks are predicted using a connected decoder.
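A hypothetical sketch of this multi-task layout, assuming PyTorch and torchvision: a DenseNet-121 encoder shared between a classification head and a small upsampling decoder that predicts masks. The decoder depth and channel sizes are illustrative and not taken from the paper.

```python
# Hypothetical sketch: shared DenseNet encoder, one head for multi-label
# classification and one upsampling decoder for coarse segmentation masks.
import torch
import torch.nn as nn
from torchvision import models


class MultiTaskChestNet(nn.Module):
    def __init__(self, num_classes: int = 14, num_masks: int = 1):
        super().__init__()
        self.encoder = models.densenet121(weights="IMAGENET1K_V1").features
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(1024, num_classes)
        # decoder: upsample the 1024-channel encoding back towards input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(1024, 256, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(256, 64, kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, num_masks, kernel_size=1),
        )

    def forward(self, x: torch.Tensor):
        feats = self.encoder(x)                            # (B, 1024, H/32, W/32)
        logits = self.classifier(self.pool(feats).flatten(1))
        masks = self.decoder(feats)                        # coarse masks, (B, num_masks, H/8, W/8)
        return logits, masks


cls_logits, mask_logits = MultiTaskChestNet()(torch.randn(1, 3, 256, 256))
```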