Brain tumor identification is crucial to prevent long-term disabilities; severe cases such as high-grade glioma may be fatal. Magnetic Resonance Imaging (MRI) is a powerful non-invasive tool for obtaining brain scans, which can provide key information such as the location, shape, size, and growth stage of a brain tumor. Performing medical image analysis with deep learning techniques requires a sufficient volume of data with variability. However, traditional image augmentation methods such as scaling, rotation, and cropping produce highly correlated images that fail to capture the underlying features of the source images, and they may alter patterns useful for diagnosis. Class imbalance is another reason to apply augmentation. Generative Adversarial Network (GAN) models have shown promising results in generating synthetic data that generalizes well to large datasets.
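To illustrate the limitation above, here is a minimal sketch of traditional augmentation, assuming torchvision; the specific transforms and parameters are illustrative, not taken from the paper:

```python
# Illustrative only: these transforms and parameters are assumptions, not
# the paper's setup. Each output is a geometric variant of the same slice,
# so the augmented copies stay highly correlated with the source, and
# operations like flips may alter diagnostically relevant patterns.
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                      # rotate
    transforms.RandomResizedCrop(size=240, scale=(0.8, 1.0)),   # scale + crop
    transforms.RandomHorizontalFlip(p=0.5),                     # flip
])

mri_slice = torch.rand(1, 240, 240)   # stand-in for a 240x240 MRI slice
augmented = augment(mri_slice)        # a correlated variant, not a new sample
```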
In this work, we use the Aggregation GAN (AGGrGAN) model to capture both the unique features and localized information of a source image via style transfer, and the shared information among the different latent representations of multiple images. We then perform an ablation study to quantitatively evaluate the generated images (using PSNR and SSIM scores) and to study the impact of aggregation followed by style transfer. For a qualitative analysis, we train a classification network on real images alone and on a mixture of real and generated images to study the effectiveness of the images produced by our models. All our experiments are performed on the BraTS 2020 dataset.
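A minimal sketch of how the quantitative evaluation could be computed, assuming scikit-image; `real_slice` and `fake_slice` are placeholder arrays, not data from the paper:

```python
# Compare a real MRI slice with a generated one using PSNR and SSIM.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

real_slice = np.random.rand(240, 240)  # stand-in for a real slice
fake_slice = np.random.rand(240, 240)  # stand-in for a generated slice

psnr = peak_signal_noise_ratio(real_slice, fake_slice, data_range=1.0)
ssim = structural_similarity(real_slice, fake_slice, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```

Higher PSNR and SSIM values indicate that a generated image is closer to its real counterpart, which is what the ablation study tracks across model variants.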
AGGrGAN uses an aggregation logic to merge the outputs of three base GAN models: DCGAN, WGAN, and U-Net GAN. A style transfer technique is then applied to improve the similarity of the generated MRI images to the real ones. We also perform a qualitative and quantitative analysis of the results.
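A sketch of one plausible aggregation rule is a pixel-wise weighted average of the three base-GAN outputs; the function name and the equal weights below are assumptions, and the full report specifies the actual aggregation logic:

```python
import torch

def aggregate(dcgan_out, wgan_out, unet_gan_out, weights=(1/3, 1/3, 1/3)):
    """Merge three generated slices of identical shape into one image.

    The equal weights are an assumption for illustration; any convex
    combination of the three outputs follows the same pattern.
    """
    w1, w2, w3 = weights
    return w1 * dcgan_out + w2 * wgan_out + w3 * unet_gan_out

# Stand-ins for the three generators' outputs on the same input.
g1, g2, g3 = (torch.rand(1, 240, 240) for _ in range(3))
merged = aggregate(g1, g2, g3)
```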
For more details on the internal workings of AGGrGAN and the subsequent post-processing steps, please read the full report.