Robustness of AirNET to training library for sparse-view CT
Diamond, Zachary Michael
Advances in artificial intelligence and deep learning applied to medical physics are giving rise to numerous applications, ranging from improvements in clinical workflow to computer-aided diagnosis in preliminary patient screenings. One such advance is the reconstruction of sparsely sampled medical images, whereby a sufficiently trained convolutional neural network can recreate a full image from undersampled data. AirNET is a neural network that reconstructs sparse-view CT images by referencing CT-SIM training libraries for a given case. To test the robustness of AirNET (the ability of the model to reproduce a correct image given any input), patient libraries of prostate, lung, and abdominal cancer cases were created, trained, and tested to quantify how accurately the model predicted each sparse-view image. These tests were performed by running AirNET with different training libraries and different model hyperparameters. The absolute differences between predicted and ground-truth images were computed and shown to be fairly small. Anatomical images were further analyzed on a pixel-by-pixel basis for minute differences in pixel intensity. Image comparison metrics, along with their time dependencies, were obtained for each test. The maxima and minima of these metrics were found to depend on both the training library used and the model hyperparameters.
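The pixel-by-pixel absolute-difference comparison described above can be sketched as follows. This is a minimal illustration assuming the predicted and ground-truth CT slices are available as 2-D intensity arrays; the function name, the use of NumPy, and the choice of summary metrics (mean absolute error and root-mean-square error) are assumptions for illustration, not AirNET's actual evaluation code.

```python
import numpy as np

def compare_slices(predicted, ground_truth):
    """Pixel-wise comparison of a predicted CT slice against ground truth.

    Both inputs are 2-D arrays of pixel intensities. Returns the
    absolute-difference map plus two scalar summary metrics.
    """
    predicted = np.asarray(predicted, dtype=np.float64)
    ground_truth = np.asarray(ground_truth, dtype=np.float64)

    diff = np.abs(predicted - ground_truth)  # per-pixel absolute difference
    mae = diff.mean()                        # mean absolute error
    rmse = np.sqrt(((predicted - ground_truth) ** 2).mean())  # root-mean-square error
    return diff, mae, rmse

# Hypothetical 2x2 example slices for demonstration only.
truth = np.array([[0.0, 100.0], [50.0, 25.0]])
pred = np.array([[0.0, 101.0], [48.0, 25.0]])
diff_map, mae, rmse = compare_slices(pred, truth)
```

Minima and maxima of such metrics across a test set can then be tracked per training library and hyperparameter setting, as done in the experiments described above.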