Performance Analysis of Different Optimizers, Batch Sizes, and Epochs on Convolutional Neural Network for Image Classification
Finding appropriate hyper-parameters is an important task in deep learning for image classification. The main objective of this study is to investigate the performance of various hyper-parameters in a convolutional neural network model on an image classification problem...
Published in | Journal of Agriculture & Life Science Vol. 55; no. 2; pp. 99 - 107 |
---|---|
Main Authors | Sihalath, Thavisack; Basak, Jayanta Kumar; Bhujel, Anil; Arulmozhi, Elanchezhian; Moon, Byeong-Eun; Kim, Na-Eun; Lee, Doeg-Hyun; Kim, Hyeon-Tae |
Format | Journal Article |
Language | English |
Published | Institute of Agriculture & Life Science, Gyeongsang National University (경상국립대학교 농업생명과학연구원), 30.04.2021 |
Subjects | Natural Sciences (General) |
ISSN | 1598-5504 2383-8272 |
DOI | 10.14397/jals.2021.55.2.99 |
Abstract | Finding appropriate hyper-parameters is an important task in deep learning for image classification. The main objective of this study is to investigate the performance of various hyper-parameters in a convolutional neural network model on an image classification problem. The dataset was obtained from Kaggle, and the experiment was conducted with different hyper-parameters: the Stochastic Gradient Descent without momentum (SGD), Adaptive Moment Estimation (Adam), Adagrad, and Adamax optimizers; batch sizes of 16, 32, 64, and 120; and 50, 100, and 150 epochs were considered to determine the loss and accuracy of the model. The Binary Cross-entropy Loss Function (BCLF) was used to evaluate model performance, and the VGG16 convolutional neural network was used for image classification. Empirical results demonstrated that the model had the minimum loss with the Adagrad optimizer at a batch size of 16 and 50 epochs. SGD at a batch size of 32 with 150 epochs and Adam at a batch size of 64 with 50 epochs performed best in terms of the loss value during training. Interestingly, accuracy was higher with the Adagrad and Adamax optimizers at a batch size of 120 and 150 epochs; among the optimizers studied, Adagrad at a batch size of 120 and 150 epochs performed slightly better. Furthermore, increasing the number of epochs can improve accuracy. These findings can broaden the scope for further experiments on several datasets to identify suitable hyper-parameters for convolutional neural networks. Dataset: https://www.kaggle.com/c/dogs-vs-cats/data |
---|---|
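The abstract describes a 4 × 4 × 3 hyper-parameter grid (optimizer × batch size × epochs) evaluated with binary cross-entropy loss. As a minimal sketch in plain Python (not the authors' code; the VGG16 training itself is omitted), the grid enumeration and the BCE formula look like this:

```python
import itertools
import math

# Hyper-parameter grid from the study: 4 optimizers x 4 batch sizes x 3 epoch counts.
optimizers = ["SGD", "Adam", "Adagrad", "Adamax"]
batch_sizes = [16, 32, 64, 120]
epoch_counts = [50, 100, 150]
grid = list(itertools.product(optimizers, batch_sizes, epoch_counts))
print(len(grid))  # 48 configurations to train and compare

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """BCLF: -mean( y*log(p) + (1-y)*log(1-p) ) over all samples."""
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / len(y_true)

# Example on four binary labels (e.g., dog=1, cat=0) and model probabilities:
print(round(binary_cross_entropy([1, 0, 1, 0], [0.9, 0.1, 0.8, 0.2]), 4))  # 0.1643
```

In practice each of the 48 configurations would train the VGG16 model (e.g., via a framework's fit routine with the given optimizer, batch size, and epoch count) and the configuration with the lowest BCE loss or highest accuracy would be selected, as the study does.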
KCI Citation Count | 0 |
Author | Moon, Byeong-Eun; Kim, Na-Eun; Kim, Hyeon-Tae; Sihalath, Thavisack; Arulmozhi, Elanchezhian; Lee, Doeg-Hyun; Bhujel, Anil; Basak, Jayanta Kumar |
BackLink | https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002709138 (Access content in National Research Foundation of Korea (NRF)) |
ContentType | Journal Article |
DOI | 10.14397/jals.2021.55.2.99 |
DatabaseName | CrossRef, KoreaScholar (코리아스칼라), Korean Citation Index |
EISSN | 2383-8272 |
EndPage | 107 |
ISSN | 1598-5504 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 2 |
Keywords | Image Classification Convolutional Neural Network Epoch Optimizer Batch Size |
Language | English |
PageCount | 9 |
PublicationDate | 2021-04-30 |
PublicationTitle | Journal of Agriculture & Life Science |
PublicationYear | 2021 |
Publisher | Institute of Agriculture & Life Science, Gyeongsang National University (경상국립대학교 농업생명과학연구원) |
StartPage | 99 |
SubjectTerms | Natural Sciences (General) (자연과학일반) |
TableOfContents | Abstract; Introduction; Materials and Methods (1. Experiment; 2. Batch Size and Epoch; 3. Loss function evaluation criteria); Results and Discussion (1. Loss evaluation; 2. Accuracy evaluation; 3. Performance of increasing batch size and epoch); References |
Title | Performance Analysis of Different Optimizers, Batch Sizes, and Epochs on Convolutional Neural Network for Image Classification |
URI | http://db.koreascholar.com/Article/Detail/406667 https://www.kci.go.kr/kciportal/ci/sereArticleSearch/ciSereArtiView.kci?sereArticleSearchBean.artiId=ART002709138 |
Volume | 55 |
ispartofPNX | Journal of Agriculture & Life Science (농업생명과학연구), 2021, 55(2), pp. 99-107 |