LRNAS: Differentiable Searching for Adversarially Robust Lightweight Neural Architecture
Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 3, pp. 5629-5643
Main Authors | Feng, Yuqi; Lv, Zeqiong; Chen, Hongyang; Gao, Shangce; An, Fengping; Sun, Yanan
Format | Journal Article
Language | English
Published | United States: IEEE (Institute of Electrical and Electronics Engineers), 01.03.2025
ISSN | 2162-237X (print); 2162-2388 (electronic)
DOI | 10.1109/TNNLS.2024.3382724
Abstract | Adversarial robustness is critical to deep neural networks (DNNs) in deployment. However, improving adversarial robustness often requires compromising on network size. Existing approaches to this problem mainly combine model compression with adversarial training, but their performance relies heavily on the neural architecture, which is typically designed manually with extensive expertise. In this article, we propose a lightweight and robust neural architecture search (LRNAS) method to automatically search for adversarially robust lightweight neural architectures. Specifically, we propose a novel search strategy that quantifies the contributions of the components in the search space, based on which the beneficial components can be determined. In addition, we propose an architecture selection method based on a greedy strategy, which keeps the model size small while retaining sufficient beneficial components. Owing to these designs, the architectures searched by LRNAS jointly attain lightness, natural accuracy, and adversarial robustness. We conduct extensive experiments on various benchmark datasets against state-of-the-art methods. The results demonstrate that LRNAS is superior at finding lightweight neural architectures that are both accurate and adversarially robust under popular adversarial attacks. Moreover, ablation studies confirm the validity of the individual components designed in LRNAS and their positive effects on the overall performance.
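The greedy, size-aware architecture selection described in the abstract can be illustrated with a minimal sketch: given per-component contribution scores (however those are estimated), pick the most beneficial components that still fit within a parameter budget. The component names, contribution values, and budget below are hypothetical placeholders; this is an assumed illustration of greedy budgeted selection, not the authors' LRNAS implementation.

```python
# Minimal sketch of greedy, size-constrained component selection.
# All names, scores, and the budget are illustrative assumptions,
# not the search space or numbers used in the LRNAS paper.

from dataclasses import dataclass


@dataclass
class Component:
    name: str            # candidate operation in the search space (assumed name)
    contribution: float  # estimated contribution to robust accuracy (assumed value)
    params: int          # parameter count the component would add


def greedy_select(components, param_budget):
    """Greedily keep high-contribution components while staying under the budget."""
    selected, used = [], 0
    # Consider the most beneficial components first.
    for comp in sorted(components, key=lambda c: c.contribution, reverse=True):
        if comp.contribution <= 0:
            break  # only components judged beneficial are eligible
        if used + comp.params <= param_budget:
            selected.append(comp)
            used += comp.params
    return selected, used


if __name__ == "__main__":
    candidates = [
        Component("sep_conv_3x3", 0.42, 60_000),
        Component("dil_conv_5x5", 0.31, 90_000),
        Component("skip_connect", 0.18, 0),
        Component("max_pool_3x3", -0.05, 0),
    ]
    chosen, size = greedy_select(candidates, param_budget=120_000)
    print([c.name for c in chosen], size)  # -> ['sep_conv_3x3', 'skip_connect'] 60000
```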
Author | An, Fengping; Feng, Yuqi; Chen, Hongyang; Gao, Shangce; Sun, Yanan; Lv, Zeqiong
Author_xml | – sequence: 1; givenname: Yuqi; surname: Feng; fullname: Feng, Yuqi; email: feng770623@gmail.com; organization: College of Computer Science, Sichuan University, Chengdu, China
– sequence: 2; givenname: Zeqiong; orcidid: 0000-0002-1276-6711; surname: Lv; fullname: Lv, Zeqiong; email: zq_lv@stu.scu.edu.cn; organization: College of Computer Science, Sichuan University, Chengdu, China
– sequence: 3; givenname: Hongyang; orcidid: 0000-0002-7626-0162; surname: Chen; fullname: Chen, Hongyang; email: dr.h.chen@ieee.org; organization: Research Center for Graph Computing, Zhejiang Laboratory, Hangzhou, China
– sequence: 4; givenname: Shangce; orcidid: 0000-0001-5042-3261; surname: Gao; fullname: Gao, Shangce; email: gaosc@eng.u-toyama.ac.jp; organization: Faculty of Engineering, University of Toyama, Toyama, Japan
– sequence: 5; givenname: Fengping; surname: An; fullname: An, Fengping; email: anfengpingwxl@gmail.com; organization: School of Automation and Software Engineering, Shanxi University, Taiyuan, China
– sequence: 6; givenname: Yanan; orcidid: 0000-0001-6374-1429; surname: Sun; fullname: Sun, Yanan; email: ysun@scu.edu.cn; organization: College of Computer Science, Sichuan University, Chengdu, China
BackLink | https://cir.nii.ac.jp/crid/1872553967968247296 (View record in CiNii); https://www.ncbi.nlm.nih.gov/pubmed/38568761 (View this record in MEDLINE/PubMed)
CODEN | ITNNAL |
ContentType | Journal Article |
DOI | 10.1109/TNNLS.2024.3382724 |
Discipline | Computer Science |
EISSN | 2162-2388 |
EndPage | 5643 |
ExternalDocumentID | 38568761 10_1109_TNNLS_2024_3382724 10490151 |
Genre | orig-research Journal Article |
GrantInformation_xml | – fundername: National Key Research and Development Program of China grantid: 2022YFB4500300 – fundername: National Natural Science Foundation of China grantid: 62276175; 62375102; 62271452 funderid: 10.13039/501100001809 – fundername: Open Fund of the Key Laboratory of the Ministry of Culture and Tourism of Zhejiang Conservatory of Music grantid: X002A4772234 – fundername: Research and Develop Program of West China Hospital of Stomatology Sichuan University grantid: RD-03-202403 |
ISSN | 2162-237X 2162-2388 |
Issue | 3 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-5042-3261 0000-0002-7626-0162 0000-0002-1276-6711 0000-0001-6374-1429 |
PMID | 38568761 |
PageCount | 15 |
PublicationCentury | 2000 |
PublicationDate | 2025-Mar |
PublicationDateYYYYMMDD | 2025-03-01 |
PublicationDate_xml | – month: 03 year: 2025 text: 2025-Mar |
PublicationDecade | 2020 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States |
PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems |
PublicationTitleAbbrev | TNNLS |
PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
PublicationYear | 2025 |
Publisher | IEEE Institute of Electrical and Electronics Engineers (IEEE) |
Publisher_xml | – name: IEEE – name: Institute of Electrical and Electronics Engineers (IEEE) |
StartPage | 5629 |
SubjectTerms | Adversarial attack; adversarial robustness; Computer architecture; lightweight neural architecture; neural architecture search (NAS); Optimization; Perturbation methods; Robustness; Search problems; search space; Sun; Training
Title | LRNAS: Differentiable Searching for Adversarially Robust Lightweight Neural Architecture |
URI | https://ieeexplore.ieee.org/document/10490151 https://cir.nii.ac.jp/crid/1872553967968247296 https://www.ncbi.nlm.nih.gov/pubmed/38568761 https://www.proquest.com/docview/3033010139 |
Volume | 36 |