MIA-Net: Multi-information aggregation network combining transformers and convolutional feature learning for polyp segmentation
Published in | Knowledge-Based Systems, Vol. 247, Article 108824 |
Main Authors | Li, Weisheng; Zhao, Yinghui; Li, Feiyan; Wang, Linhong |
Format | Journal Article |
Language | English |
Published | Amsterdam: Elsevier B.V., 08.07.2022 |
ISSN | 0950-7051 |
EISSN | 1872-7409 |
DOI | 10.1016/j.knosys.2022.108824 |
Abstract | Accurate polyp segmentation is of immense importance for the early diagnosis and treatment of colorectal cancer. However, polyp segmentation is a difficult task, and most current methods suffer from two challenges. First, individual polyps widely vary in shape, size, and location (intra-class inconsistency). Second, subject to conditions such as motion blur and light reflection, polyps and their surrounding background have a high degree of similarity (inter-class indistinction). To overcome intra-class inconsistency and inter-class indistinction, we propose a multi-information aggregation network (MIA-Net) combining transformer and convolutional features. We use the transformer encoder to extract powerful global features and better localize polyps with an advanced global contextual feature extraction module. This approach reduces the influence of intra-class inconsistency. In addition, we capture fine-grained local texture features using the convolutional encoder and aggregate them with high-level and low-level information extracted by the transformer. This rich feature information makes the model more sensitive to edge information and alleviates inter-class indistinction. We evaluated the new approach quantitatively and qualitatively on five datasets using six metrics. The experimental results revealed that MIA-Net has good fitting ability and strong generalization ability. In addition, MIA-Net significantly improved the accuracy of polyp segmentation and outperformed the current state-of-the-art algorithms. |
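To make the aggregation idea in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of a dual-encoder segmentation network: a small transformer branch supplies global context for localizing the polyp, a convolutional branch supplies fine-grained local texture, and the two feature maps are concatenated and fused before a per-pixel mask is predicted. All module names, channel sizes, and the concatenation-based fusion are illustrative assumptions; this is not the published MIA-Net implementation.

```python
# Hypothetical sketch of the dual-encoder aggregation idea (NOT the authors' MIA-Net).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyConvEncoder(nn.Module):
    """Convolutional branch: fine-grained local texture features at 1/4 scale."""

    def __init__(self, in_ch: int = 3, dim: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)  # (B, dim, H/4, W/4)


class TinyTransformerEncoder(nn.Module):
    """Transformer branch: global contextual features over a patch embedding."""

    def __init__(self, in_ch: int = 3, dim: int = 64, patch: int = 4,
                 depth: int = 2, heads: int = 4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=2 * dim,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        t = self.embed(x)                     # (B, dim, H/4, W/4)
        b, c, h, w = t.shape
        t = t.flatten(2).transpose(1, 2)      # (B, num_tokens, dim) sequence
        t = self.encoder(t)                   # global self-attention
        return t.transpose(1, 2).reshape(b, c, h, w)


class DualEncoderSegNet(nn.Module):
    """Aggregates both branches and predicts a binary segmentation mask."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.cnn = TinyConvEncoder(dim=dim)
        self.transformer = TinyTransformerEncoder(dim=dim)
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * dim, dim, 3, padding=1), nn.ReLU(inplace=True))
        self.head = nn.Conv2d(dim, 1, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        local_feat = self.cnn(x)              # edge/texture detail
        global_feat = self.transformer(x)     # polyp localization context
        fused = self.fuse(torch.cat([local_feat, global_feat], dim=1))
        logits = self.head(fused)
        # Upsample back to the input resolution for a per-pixel mask.
        return F.interpolate(logits, size=x.shape[-2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = DualEncoderSegNet()
    out = model(torch.randn(1, 3, 128, 128))
    print(out.shape)  # torch.Size([1, 1, 128, 128])
```

Both branches are kept at the same 1/4-scale feature resolution so their outputs can be concatenated directly; the paper's actual network aggregates multiple high-level and low-level transformer stages with the convolutional features rather than a single pair of maps.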
Copyright | 2022 Elsevier B.V.; Elsevier Science Ltd., Jul 8, 2022
Keywords | Multi-information aggregation; Transformer; Colonoscopy; Polyp segmentation
ORCID | 0000-0002-9033-8245 (Li, Weisheng)
Subjects | Agglomeration; Algorithms; Blurring; Coders; Colonoscopy; Feature extraction; Light reflection; Machine learning; Multi-information aggregation; Polyp segmentation; Polyps; Segmentation; Transformer; Transformers
URI | https://dx.doi.org/10.1016/j.knosys.2022.108824 https://www.proquest.com/docview/2689716019 |