An experimental comparison of the widely used pre-trained deep neural networks for image classification tasks towards revealing the promise of transfer-learning


Date

2022

Journal Title

Journal ISSN

Volume Title

Publisher

Wiley

Access Rights

info:eu-repo/semantics/closedAccess

Abstract

The easiest way to build a solution based on deep neural networks is to use pre-trained models through the transfer-learning technique. Deep learning platforms provide various pre-trained deep neural networks that can be readily applied to image classification tasks. So the question "Which pre-trained model provides the best performance for image classification tasks?" instinctively comes to mind and should be shed light on by the research community. To this end, we propose an experimental comparison of six popular pre-trained deep neural networks, namely (i) VGG19, (ii) ResNet50, (iii) DenseNet201, (iv) MobileNetV2, (v) InceptionV3, and (vi) Xception, employed through the transfer-learning technique. The proposed benchmark models were then trained and evaluated under the same configurations on two gold-standard datasets, namely (i) CIFAR-10 and (ii) Stanford Dogs. Three evaluation metrics were used to measure the performance differences between the pre-trained models: (i) accuracy, (ii) training duration, and (iii) inference time. The key findings obtained through a wide variety of experiments are discussed.
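The transfer-learning setup the abstract describes can be sketched as follows. This is a minimal illustration, not the authors' exact configuration: it assumes TensorFlow/Keras (named in the article's keywords), uses MobileNetV2 as one of the six compared backbones, and the head architecture and hyperparameters are placeholders.

```python
import tensorflow as tf


def build_transfer_model(num_classes, input_shape=(224, 224, 3), weights="imagenet"):
    """Build a classifier on top of a frozen pre-trained backbone (hypothetical sketch)."""
    # Load the backbone without its ImageNet classification head;
    # weights="imagenet" downloads the pre-trained weights in practice.
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights=weights, input_shape=input_shape
    )
    base.trainable = False  # freeze pre-trained features (transfer-learning)

    # Attach a new task-specific classification head (e.g., 10 classes for CIFAR-10,
    # 120 for Stanford Dogs).
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

Swapping `MobileNetV2` for `VGG19`, `ResNet50`, `DenseNet201`, `InceptionV3`, or `Xception` (all available under `tf.keras.applications`) yields the other five benchmark models under the same head and training configuration.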

Description

Keywords

Convolutional Neural Network; Deep Learning; Deep Neural Network; Keras; Tensorflow; Transfer-Learning

Source

Concurrency and Computation-Practice & Experience

WoS Q Value

Q3

Scopus Q Value

Q3

Volume

34

Issue

24

Citation