A Study of Fine-Tuning CNN Models Based on Thermal Imaging for Breast Cancer Classification

Bibliographic Details
Published in: 2019 IEEE International Conference on Cybernetics and Computational Intelligence (CyberneticsCom), pp. 77 - 81
Main Authors: Roslidar, Roslidar; Saddami, Khairun; Arnia, Fitri; Syukri, Maimun; Munadi, Khairul
Format: Conference Proceeding
Language: English
Published: IEEE, 01.08.2019

Summary: This paper presents initial experiments on fine-tuning the convolutional neural network (CNN) models ResNet101, DenseNet, MobileNetV2, and ShuffleNetV2 for breast cancer detection. These models were pretrained on the ImageNet database and have shown high performance in image classification. Here, the dataset used to train the models consists of thermal breast images downloaded from the Database for Mastology Research (DMR), with only two classification classes: cancer and healthy. During training, we used three epoch settings: 10, 20, and 30. For this preliminary study, we set the learning rate to 0.001, the momentum to 0.9, the learning rate factors for weight and bias each to 10, and the minibatch size to 10. The training results showed that ResNet101 and DenseNet, with their deeper architectures, reached 100% accuracy in only 10 epochs, whereas MobileNetV2 and ShuffleNetV2 needed 20 and 30 epochs of training, respectively, to achieve 100% accuracy. Finally, we evaluated the performance of each pretrained model on the testing results. DenseNet classified the entire testing dataset correctly. ResNet101 and MobileNetV2 performed identically, classifying the static dataset correctly while slightly missing on the dynamic dataset, with an accuracy of 0.996. ShuffleNetV2 performed slightly lower, with an accuracy of only 0.98. In terms of training time, ShuffleNetV2 trained in a short duration, but MobileNetV2, with a competitive elapsed time, matched the performance of ResNet101.
DOI: 10.1109/CYBERNETICSCOM.2019.8875661
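
The summary describes a standard transfer-learning recipe: take an ImageNet-pretrained backbone, replace its final layer with a two-class head, and fine-tune with SGD (learning rate 0.001, momentum 0.9, minibatch size 10, and 10/20/30 epochs). The sketch below is a minimal, illustrative PyTorch version of that setup, not the authors' implementation: the exact DenseNet variant, the "data/train" folder layout, the 224x224 input size, and the interpretation of the weight/bias learning-rate factor of 10 as a 10x learning rate on the new head are all assumptions made here for the example.

```python
# Minimal fine-tuning sketch (assumptions noted in the lead-in above).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms


def build_model(name: str) -> nn.Module:
    """Load an ImageNet-pretrained backbone and replace its head with a 2-class layer."""
    if name == "resnet101":
        model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, 2)
    elif name == "densenet201":  # the paper says "DenseNet"; the 201 variant is an assumption
        model = models.densenet201(weights=models.DenseNet201_Weights.DEFAULT)
        model.classifier = nn.Linear(model.classifier.in_features, 2)
    elif name == "mobilenet_v2":
        model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT)
        model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)
    else:  # shufflenet_v2
        model = models.shufflenet_v2_x1_0(weights=models.ShuffleNet_V2_X1_0_Weights.DEFAULT)
        model.fc = nn.Linear(model.fc.in_features, 2)
    return model


def fine_tune(model: nn.Module, epochs: int) -> None:
    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    # Hypothetical folder layout: data/train/cancer and data/train/healthy
    train_set = datasets.ImageFolder("data/train", transform=transform)
    loader = DataLoader(train_set, batch_size=10, shuffle=True)  # minibatch size 10

    # lr = 0.001 and momentum = 0.9 as reported; the freshly initialised head gets a
    # 10x learning rate, mirroring the weight/bias learning-rate factor of 10.
    head = [p for n, p in model.named_parameters() if n.startswith(("fc", "classifier"))]
    backbone = [p for n, p in model.named_parameters() if not n.startswith(("fc", "classifier"))]
    optimizer = torch.optim.SGD(
        [{"params": backbone, "lr": 0.001}, {"params": head, "lr": 0.01}],
        momentum=0.9,
    )
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):  # the paper trains for 10, 20, or 30 epochs
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()


if __name__ == "__main__":
    net = build_model("mobilenet_v2")
    fine_tune(net, epochs=20)
```

The same `fine_tune` routine can be run for each of the four backbones with the three epoch settings to reproduce the kind of comparison the paper reports; evaluation on the static and dynamic DMR test sets would be added separately.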