Lyolo: a lightweight object detection algorithm integrating label enhancement for high-quality prediction boxes
Published in | Pattern analysis and applications : PAA, Vol. 28, No. 3
Main Authors | Gao, Ruxin; Ling, Zhiyong; Wang, Chengyang; Li, Xiang; She, Jianmin; Liu, Qunpo
Format | Journal Article |
Language | English |
Published | London: Springer London; Springer Nature B.V., 01.09.2025
Subjects | Object detection; Label enhancement; Feature enhancement; Downsampling
ISSN | 1433-7541 (print); 1433-755X (electronic)
DOI | 10.1007/s10044-025-01528-4 |
Abstract | In the field of object detection, most researchers overlook the relationship between predicted bounding boxes and ground-truth boxes. Moreover, the downsampling performed by conventional convolution reduces image resolution, often sacrificing detail and edge information and impairing the precise localization of objects. Meanwhile, the feature extraction capability of the backbone network is crucial to the detection performance of the entire model. To address these issues, this paper proposes LYOLO, an object detection algorithm built around high-quality prediction boxes. It suppresses low-quality prediction boxes and enhances high-quality ones through a Label Enhancement (LE) strategy that adjusts the weights of positive and negative samples. In addition, a lightweight downsampling method (Down) and a lightweight Feature Enhancement (FE) mechanism are designed: the former enlarges the receptive field to improve the model's ability to localize objects, and the latter reallocates feature weights to produce stronger feature representations in the backbone network. Experimental results on the VOC and COCO datasets show that LYOLO performs well across all model sizes, achieving the highest accuracy with the fewest parameters and lowest computational complexity while maintaining low latency. For example, LYOLOn reaches an mAP of 82.0% on the VOC dataset with only 2.28M parameters; compared with the baseline YOLO11n, it uses 11.9% fewer parameters while improving mAP by 3.0%. Compared with YOLOv8n, YOLOv9t, and YOLOv10n, LYOLOn achieves mAP improvements of 3.4%, 2.3%, and 3.1%, respectively. The code and datasets used in this article are available at https://github.com/lingzhiy/LYOLO.
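The abstract describes a Label Enhancement (LE) strategy that suppresses low-quality prediction boxes and emphasizes high-quality ones by re-weighting positive and negative samples. As a rough illustration of that general idea, and not the paper's actual LE strategy, the sketch below weights each predicted box by a power of its best IoU with the ground truth; the names box_iou and quality_weights and the gamma exponent are assumptions made for this example.

```python
# Illustrative sketch only (not the paper's LE strategy): weight each predicted
# box by a power of its best IoU with the ground truth, so high-quality boxes
# dominate and low-quality ones are suppressed. Names and the gamma exponent
# are assumptions for this example.
import torch


def box_iou(pred: torch.Tensor, gt: torch.Tensor) -> torch.Tensor:
    """Pairwise IoU between N predicted and M ground-truth boxes in xyxy format."""
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_g = (gt[:, 2] - gt[:, 0]) * (gt[:, 3] - gt[:, 1])
    lt = torch.max(pred[:, None, :2], gt[None, :, :2])  # intersection top-left
    rb = torch.min(pred[:, None, 2:], gt[None, :, 2:])  # intersection bottom-right
    wh = (rb - lt).clamp(min=0)
    inter = wh[..., 0] * wh[..., 1]
    return inter / (area_p[:, None] + area_g[None, :] - inter + 1e-7)


def quality_weights(pred: torch.Tensor, gt: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Per-prediction weight in [0, 1]: near 1 for high-IoU boxes, near 0 otherwise."""
    best_iou = box_iou(pred, gt).max(dim=1).values  # best-matching GT per prediction
    return best_iou.pow(gamma)                      # sharper suppression as gamma grows


if __name__ == "__main__":
    preds = torch.tensor([[10., 10., 50., 50.], [12., 11., 49., 52.], [80., 80., 90., 90.]])
    gts = torch.tensor([[11., 10., 51., 50.]])
    print(quality_weights(preds, gts))  # two well-aligned boxes get large weights, the third ~0
```

Similarly, the lightweight downsampling module (Down) is described as halving resolution while enlarging the receptive field at low cost. The minimal sketch below shows one plausible way to do that, assuming a depthwise dilated 3x3 stride-2 convolution followed by a pointwise 1x1; the class name LightDown, the dilation choice, and the SiLU activation are assumptions, not the published module.

```python
# Illustrative sketch only (not the paper's Down module): halve spatial
# resolution cheaply while widening the receptive field, using a depthwise
# dilated 3x3 stride-2 convolution followed by a pointwise 1x1.
import torch
import torch.nn as nn


class LightDown(nn.Module):
    def __init__(self, c_in: int, c_out: int, dilation: int = 2):
        super().__init__()
        self.dw = nn.Conv2d(c_in, c_in, 3, stride=2, padding=dilation,
                            dilation=dilation, groups=c_in, bias=False)  # depthwise, stride 2
        self.pw = nn.Conv2d(c_in, c_out, 1, bias=False)                  # pointwise channel mix
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pw(self.dw(x))))


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)
    print(LightDown(64, 128)(x).shape)  # torch.Size([1, 128, 40, 40])
```

The authors' actual implementations are available from the repository linked in the abstract.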
ArticleNumber | 147 |
Author | Li, Xiang; She, Jianmin; Wang, Chengyang; Ling, Zhiyong; Gao, Ruxin; Liu, Qunpo
Author_xml | 1. Gao, Ruxin (School of Electrical Engineering and Automation, Henan Polytechnic University, Henan International Joint Laboratory of Direct Drive and Control of Intelligent Equipment); 2. Ling, Zhiyong, email: 212307020037@home.hpu.edu.cn (same organization); 3. Wang, Chengyang (same organization); 4. Li, Xiang (same organization); 5. She, Jianmin (Zhuzhou Tiancheng Automation Equipment Co., Ltd); 6. Liu, Qunpo (same organization as 1)
ContentType | Journal Article |
Copyright | The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
Discipline | Applied Sciences; Computer Science
EISSN | 1433-755X |
ISSN | 1433-7541 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 3 |
Keywords | Feature enhancement; Downsampling; Label enhancement; Object detection
Language | English |
PublicationCentury | 2000 |
PublicationDate | 2025-09-01 |
PublicationDecade | 2020 |
PublicationPlace | London |
PublicationPlace_xml | London; Heidelberg
PublicationTitle | Pattern analysis and applications : PAA |
PublicationTitleAbbrev | Pattern Anal Applic |
PublicationYear | 2025 |
Publisher | Springer London; Springer Nature B.V.
SubjectTerms | Algorithms; Boxes; Computer Science; Datasets; Feature extraction; Image resolution; Labels; Object recognition; Original Article; Parameters; Pattern Recognition
URI | https://link.springer.com/article/10.1007/s10044-025-01528-4 https://www.proquest.com/docview/3232740238 |
Volume | 28 |