Learnings & Challenges
- Putting in the effort and extra time at the beginning to get Docker working saved us time later in the project: when we moved to another server, we spent no time reconfiguring the environment (see the Dockerfile sketch at the end of this page).
- Putting in the effort and extra time to create a benchmark and experiment class made extensive training easier later: we could always be sure our results were comparable and that every member of the team was training with the same configuration (see the experiment-configuration sketch at the end of this page).
- While the ready-to-use models as well as functionality such as `.fit()` and `.evaluate()` in TensorFlow are nice and can give you a head start (see the transfer-learning sketch at the end of this page), that does not justify the many, many issues we had with the framework down the line whenever we added anything custom. Next time, just use PyTorch!
- It is hard to reproduce results from papers, even if the authors provide their code.
- Datasets can have flaws even if they come from Stanford: really noisy labels, no 'Fracture' cases in the test set, and heavy class imbalance (see the label sanity-check sketch at the end of this page).
- Effective usage of the GPU should be a priority right from the start (see the input-pipeline sketch at the end of this page).
- We should have split off a small test set from the public "train" dataset, which we only divided into train and validation. The "test" set provided by the CheXpert team was completely out of distribution for most of the classes (see the patient-level split sketch at the end of this page).
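For the Docker point: a minimal sketch of what such an image definition can look like. The base image tag, `requirements.txt`, and entry point are illustrative assumptions, not our actual setup.

```dockerfile
# Illustrative Dockerfile sketch (not the project's actual one): a GPU-enabled
# TensorFlow base image with the Python dependencies baked in, so any machine
# with nvidia-docker reproduces the same training environment.
FROM tensorflow/tensorflow:2.3.0-gpu

WORKDIR /workspace

# Installing pinned dependencies first keeps this layer cached as long as
# requirements.txt does not change.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the repository into the image.
COPY . .

# Hypothetical entry point; replace with the actual training/test command.
CMD ["python", "main.py"]
```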
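For the benchmark/experiment point: a minimal sketch of the idea, with hypothetical names and fields rather than the project's actual classes. Pinning every training setting in one serializable object is what keeps runs comparable across team members.

```python
# Minimal sketch with hypothetical names: one object holds every setting a run
# was trained with and is written next to the results for reproducibility.
from dataclasses import dataclass, asdict
import json
import pathlib


@dataclass
class ExperimentConfig:
    name: str
    architecture: str = "DenseNet121"
    image_size: int = 256
    batch_size: int = 32
    epochs: int = 10
    learning_rate: float = 1e-4
    classes: tuple = ("Cardiomegaly", "Edema", "Consolidation",
                      "Atelectasis", "Pleural Effusion")

    def save(self, directory: str) -> None:
        """Store the exact configuration alongside the run's results."""
        path = pathlib.Path(directory) / f"{self.name}.json"
        path.write_text(json.dumps(asdict(self), indent=2))


if __name__ == "__main__":
    config = ExperimentConfig(name="densenet121_baseline")
    config.save(".")  # every run records the settings it was trained with
```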
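For the TensorFlow point: this is roughly the "head start" Keras gives you, a pretrained backbone plus `.fit()`/`.evaluate()` in a few lines (class count, image size, and hyperparameters here are illustrative). The friction we describe starts once custom losses, metrics, or training loops have to be layered on top.

```python
# Sketch of a Keras transfer-learning baseline; values are illustrative.
import tensorflow as tf

NUM_CLASSES = 5               # CheXpert is multi-label, one sigmoid per pathology
IMAGE_SHAPE = (224, 224, 3)

backbone = tf.keras.applications.DenseNet121(
    include_top=False, weights="imagenet", input_shape=IMAGE_SHAPE, pooling="avg")

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(NUM_CLASSES, activation="sigmoid"),
])

model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(multi_label=True)])

# train_ds / valid_ds / test_ds would be tf.data.Dataset objects of (image, label) batches:
# model.fit(train_ds, validation_data=valid_ds, epochs=10)
# model.evaluate(test_ds)
```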
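For the dataset-flaws point: a small sanity-check sketch. The CSV path and label columns follow the public CheXpert layout but should be verified against the actual files; counting positive and uncertain labels per pathology before training makes problems like a missing class or heavy imbalance visible early.

```python
# Count positive / uncertain labels per pathology in a CheXpert-style CSV.
import pandas as pd

LABELS = ["Atelectasis", "Cardiomegaly", "Consolidation", "Edema",
          "Pleural Effusion", "Fracture"]

df = pd.read_csv("CheXpert-v1.0-small/train.csv")  # assumed path

for label in LABELS:
    counts = df[label].value_counts(dropna=False)
    positives = counts.get(1.0, 0)
    uncertain = counts.get(-1.0, 0)
    print(f"{label:20s} positives={positives:7d} uncertain={uncertain:7d} "
          f"prevalence={positives / len(df):.3f}")
```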
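For the GPU point: a sketch of the kind of `tf.data` input pipeline that keeps the GPU busy, with decoding and resizing parallelized on the CPU and upcoming batches prefetched while the current one trains. Paths, image size, and batch size are illustrative.

```python
# Input-pipeline sketch: parallel preprocessing + prefetching so the GPU is
# not starved waiting for images to be decoded.
import tensorflow as tf

print("GPUs visible:", tf.config.list_physical_devices("GPU"))

AUTOTUNE = tf.data.experimental.AUTOTUNE


def load_image(path, label):
    image = tf.io.read_file(path)
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize(image, (224, 224)) / 255.0
    return image, label


def make_dataset(paths, labels, batch_size=32):
    ds = tf.data.Dataset.from_tensor_slices((paths, labels))
    ds = ds.shuffle(buffer_size=1024)
    ds = ds.map(load_image, num_parallel_calls=AUTOTUNE)
    ds = ds.batch(batch_size)
    return ds.prefetch(AUTOTUNE)  # overlap preprocessing with training
```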
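For the last point: a sketch of how a held-out test set can be carved out of the public training data, grouped by patient so that no patient appears in more than one split. It assumes the public CheXpert CSV layout, where the patient ID is the third component of the `Path` column, and uses scikit-learn's `GroupShuffleSplit`.

```python
# Patient-level train/validation/test split of a CheXpert-style train.csv.
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

df = pd.read_csv("CheXpert-v1.0-small/train.csv")  # assumed path
# Paths look like 'CheXpert-v1.0-small/train/patient00001/study1/view1_frontal.jpg'
patients = df["Path"].str.split("/").str[2]

# First carve out ~10% of patients as an in-distribution test set ...
splitter = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=42)
train_val_idx, test_idx = next(splitter.split(df, groups=patients))

# ... then split the remaining patients into train and validation.
rest = df.iloc[train_val_idx]
rest_patients = patients.iloc[train_val_idx]
train_idx, val_idx = next(
    GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=42)
    .split(rest, groups=rest_patients))

print(f"train={len(train_idx)}  val={len(val_idx)}  test={len(test_idx)}")
```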