Learning Transferable Architectures for Scalable Image Recognition
Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an archi...
Published in | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition pp. 8697 - 8710 |
---|---|
Main Authors | Barret Zoph, Vijay Vasudevan, Jonathon Shlens, Quoc V. Le |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.06.2018 |
Subjects | Aerospace electronics; Computational modeling; Computer architecture; Convolution; Microprocessors; Search methods |
Abstract | Developing neural network image classification models often requires significant architecture engineering. In this paper, we study a method to learn the model architectures directly on the dataset of interest. As this approach is expensive when the dataset is large, we propose to search for an architectural building block on a small dataset and then transfer the block to a larger dataset. The key contribution of this work is the design of a new search space (which we call the "NASNet search space") which enables transferability. In our experiments, we search for the best convolutional layer (or "cell") on the CIFAR-10 dataset and then apply this cell to the ImageNet dataset by stacking together more copies of this cell, each with their own parameters to design a convolutional architecture, which we name a "NASNet architecture". We also introduce a new regularization technique called ScheduledDropPath that significantly improves generalization in the NASNet models. On CIFAR-10 itself, a NASNet found by our method achieves 2.4% error rate, which is state-of-the-art. Although the cell is not searched for directly on ImageNet, a NASNet constructed from the best cell achieves, among the published works, state-of-the-art accuracy of 82.7% top-1 and 96.2% top-5 on ImageNet. Our model is 1.2% better in top-1 accuracy than the best human-invented architectures while having 9 billion fewer FLOPS - a reduction of 28% in computational demand from the previous state-of-the-art model. When evaluated at different levels of computational cost, accuracies of NASNets exceed those of the state-of-the-art human-designed models. For instance, a small version of NASNet also achieves 74% top-1 accuracy, which is 3.1% better than equivalently-sized, state-of-the-art models for mobile platforms. Finally, the image features learned from image classification are generically useful and can be transferred to other computer vision problems. 
On the task of object detection, the learned features by NASNet used with the Faster-RCNN framework surpass state-of-the-art by 4.0% achieving 43.1% mAP on the COCO dataset. |
---|---|
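The abstract describes ScheduledDropPath, a regularization scheme in which paths inside a NASNet cell are dropped with a probability that increases linearly over the course of training. A minimal sketch of that idea, assuming a list-of-floats path output and hypothetical parameter names (`final_drop_prob`, `total_steps` are illustrative, not from the paper):

```python
import random

def scheduled_drop_path(path_output, step, total_steps,
                        final_drop_prob=0.3, training=True):
    """Sketch of ScheduledDropPath: a path within a cell is zeroed out
    with probability ramping linearly from 0 to final_drop_prob over
    training; a surviving path is rescaled by 1/keep_prob so its
    expected magnitude is preserved."""
    drop_prob = final_drop_prob * (step / total_steps)
    if not training or drop_prob == 0.0:
        return list(path_output)                 # no-op at eval time / step 0
    if random.random() < drop_prob:
        return [0.0] * len(path_output)          # drop this path entirely
    keep_prob = 1.0 - drop_prob
    return [x / keep_prob for x in path_output]  # rescale the kept path
```

In the paper's setting this would be applied independently to each path in each cell during training only; the linear schedule is what distinguishes it from plain DropPath.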
Author | Le, Quoc V.; Zoph, Barret; Vasudevan, Vijay; Shlens, Jonathon |
Author_xml | – sequence: 1 givenname: Barret surname: Zoph fullname: Zoph, Barret – sequence: 2 givenname: Vijay surname: Vasudevan fullname: Vasudevan, Vijay – sequence: 3 givenname: Jonathon surname: Shlens fullname: Shlens, Jonathon – sequence: 4 givenname: Quoc V. surname: Le fullname: Le, Quoc V. |
CODEN | IEEPAD |
ContentType | Conference Proceeding |
DOI | 10.1109/CVPR.2018.00907 |
DatabaseName | IEEE Electronic Library (IEL) Conference Proceedings IEEE Proceedings Order Plan (POP) 1998-present by volume IEEE Xplore All Conference Proceedings IEEE Electronic Library (IEL) - NZ IEEE Proceedings Order Plans (POP) 1998-present |
Discipline | Applied Sciences |
EISBN | 9781538664209 1538664208 |
EISSN | 1063-6919 |
EndPage | 8710 |
ExternalDocumentID | 8579005 |
Genre | orig-research |
IsPeerReviewed | false |
IsScholarly | true |
PageCount | 14 |
PublicationDate | 2018-06 |
PublicationTitle | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition |
PublicationTitleAbbrev | CVPR |
PublicationYear | 2018 |
Publisher | IEEE |
StartPage | 8697 |
SubjectTerms | Aerospace electronics Computational modeling Computer architecture Convolution Microprocessors Search methods |
URI | https://ieeexplore.ieee.org/document/8579005 |