Student Network Learning via Evolutionary Knowledge Distillation
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 4, pp. 2251-2263 |
---|---|
Main Authors | Zhang, Kangkai; Zhang, Chunhui; Li, Shikun; Zeng, Dan; Ge, Shiming |
Format | Journal Article |
Language | English |
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.04.2022 |
Subjects | Data mining; Data models; deep learning; Distillation; Evolution; Knowledge distillation; Knowledge representation; Knowledge transfer; Predictive models; teacher–student learning; Training |
Abstract | Knowledge distillation provides an effective way to transfer knowledge via teacher-student learning, where most existing distillation approaches apply a fixed pre-trained model as the teacher to supervise the learning of the student network. This manner usually introduces a large capability gap between the teacher and student networks during learning. Recent research has observed that a small teacher-student capability gap can facilitate knowledge transfer. Inspired by this, we propose an evolutionary knowledge distillation approach to improve the transfer effectiveness of teacher knowledge. Instead of a fixed pre-trained teacher, an evolutionary teacher is learned online and consistently transfers intermediate knowledge to supervise student network learning on the fly. To enhance intermediate knowledge representation and mimicking, several simple guided modules are introduced between corresponding teacher-student blocks. In this way, the student can simultaneously obtain rich internal knowledge and capture its growth process, leading to effective student network learning. Extensive experiments clearly demonstrate the effectiveness of our approach as well as its good adaptability in low-resolution and few-sample scenarios. |
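The abstract describes the method only at a high level. The following is a minimal, hypothetical PyTorch sketch of that idea (an online-evolving teacher, softened-logit distillation, and guided modules that align intermediate teacher-student block features); it is not the authors' implementation. The function name `kd_step`, the `guides` modules, the weights `alpha`/`beta`, and the assumption that both networks return per-block features plus logits are all illustrative assumptions.

```python
# Minimal sketch of evolutionary-style distillation as described in the abstract.
# Assumptions: teacher(x) and student(x) each return (list_of_block_features, logits),
# guides is a list of small modules (e.g. 1x1 convs) mapping student features into the
# teacher's feature space, and opt_s also contains the guides' parameters.
import torch
import torch.nn.functional as F

def kd_step(teacher, student, guides, x, y, opt_t, opt_s, T=4.0, alpha=0.5, beta=1.0):
    """One joint step: the teacher keeps learning (evolving) while the student
    mimics its softened logits and its intermediate block features."""
    # Teacher is trained online on the hard labels, so it evolves during distillation.
    _, t_logits = teacher(x)
    loss_t = F.cross_entropy(t_logits, y)
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()

    # Re-run the updated teacher without gradients to obtain supervision signals.
    with torch.no_grad():
        t_feats, t_logits = teacher(x)

    # Student learns from labels, softened teacher logits, and intermediate features.
    s_feats, s_logits = student(x)
    loss_ce = F.cross_entropy(s_logits, y)
    loss_kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                       F.softmax(t_logits / T, dim=1),
                       reduction="batchmean") * (T * T)
    # Guided modules project student block features toward the teacher's features.
    loss_hint = sum(F.mse_loss(g(sf), tf)
                    for g, sf, tf in zip(guides, s_feats, t_feats))
    loss_s = loss_ce + alpha * loss_kd + beta * loss_hint
    opt_s.zero_grad(); loss_s.backward(); opt_s.step()
    return loss_t.item(), loss_s.item()
```

In this sketch the student sees the teacher at every stage of its training rather than only its final state, which is the "growth process" the abstract refers to; the hint and logit terms correspond to intermediate-knowledge mimicking and standard temperature-scaled distillation, respectively.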
Author | Ge, Shiming; Zeng, Dan; Zhang, Kangkai; Zhang, Chunhui; Li, Shikun |
Author_xml | – Kangkai Zhang (zhangkangkai@iie.ac.cn, ORCID 0000-0003-0818-5268), Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China – Chunhui Zhang (zhangchunhui@iie.ac.cn), Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China – Shikun Li (lishikun@iie.ac.cn, ORCID 0000-0003-4297-9571), Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China – Dan Zeng (dzeng@shu.edu.cn, ORCID 0000-0003-1300-1769), Department of Communication Engineering, Shanghai University, Shanghai, China – Shiming Ge (geshiming@iie.ac.cn, ORCID 0000-0001-5293-310X), Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China |
CODEN | ITCTEM |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCSVT.2021.3090902 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present; IEEE All-Society Periodicals Package (ASPP) 1998–Present; IEEE Electronic Library (IEL); CrossRef; Computer and Information Systems Abstracts; Electronics & Communications Abstracts; Technology Research Database; ProQuest Computer Science Collection; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts – Academic; Computer and Information Systems Abstracts Professional |
DatabaseTitle | CrossRef; Technology Research Database; Computer and Information Systems Abstracts – Academic; Electronics & Communications Abstracts; ProQuest Computer Science Collection; Computer and Information Systems Abstracts; Advanced Technologies Database with Aerospace; Computer and Information Systems Abstracts Professional |
DatabaseTitleList | Technology Research Database |
Discipline | Engineering |
EISSN | 1558-2205 |
EndPage | 2263 |
ExternalDocumentID | 10_1109_TCSVT_2021_3090902 9461003 |
Genre | orig-research |
GrantInformation_xml | – National Natural Science Foundation of China, Grant 61772513 (funder ID 10.13039/501100001809) – Youth Innovation Promotion Association, Chinese Academy of Sciences (funder ID 10.13039/501100002367) – Open Research Project of the State Key Laboratory of Media Convergence and Communication, Communication University of China, Grant SKLMCC2020KF004 (funder ID 10.13039/501100015749) – National Key Research and Development Plan, Grant 2020AAA0140001 – Project from the Beijing Municipal Science and Technology Commission, Grant Z191100007119002 (funder ID 10.13039/501100009592) – Beijing Natural Science Foundation, Grant L192040 (funder ID 10.13039/501100004826) |
ISSN | 1051-8215 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
ORCID | 0000-0001-5293-310X 0000-0003-4297-9571 0000-0003-1300-1769 0000-0003-0818-5268 |
PQID | 2647425870 |
PQPubID | 85433 |
PageCount | 13 |
PublicationCentury | 2000 |
PublicationDate | 2022-04-01 |
PublicationDecade | 2020 |
PublicationPlace | New York |
PublicationTitle | IEEE transactions on circuits and systems for video technology |
PublicationTitleAbbrev | TCSVT |
PublicationYear | 2022 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SourceID | proquest crossref ieee |
SourceType | Aggregation Database Enrichment Source Index Database Publisher |
StartPage | 2251 |
SubjectTerms | Data mining; Data models; deep learning; Distillation; Evolution; Germanium; Knowledge; Knowledge distillation; Knowledge management; Knowledge representation; Knowledge transfer; Learning; Predictive models; Teachers; teacher–student learning; Training |
Title | Student Network Learning via Evolutionary Knowledge Distillation |
URI | https://ieeexplore.ieee.org/document/9461003 https://www.proquest.com/docview/2647425870 |
Volume | 32 |