LCPFormer: Towards Effective 3D Point Cloud Analysis via Local Context Propagation in Transformers
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 33, No. 9, pp. 4985-4996 |
---|---|
Main Authors | Huang, Zhuoxu; Zhao, Zhiyou; Li, Banghuai; Han, Jungong |
Format | Journal Article |
Language | English |
Published | New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.09.2023 |
Subjects | 3D vision; Context; Context propagation; Convolution; Feature extraction; Message passing; Modules; Object recognition; Point cloud compression; Point cloud learning; Semantic segmentation; Solid modeling; Task analysis; Three dimensional models; Three-dimensional displays; Transformer; Transformers |
Online Access | https://ieeexplore.ieee.org/document/10049597 ; https://www.proquest.com/docview/2861467909 |
Abstract | The transformer, with its underlying attention mechanism and its ability to capture long-range dependencies, is a natural choice for unordered point cloud data. However, the local regions separated out by the common sampling architecture corrupt the structural information of object instances, and the inherent relationships between adjacent local regions remain unexplored. In other words, the transformer focuses only on long-range dependencies, while local structural information is still crucial in a transformer-based 3D point cloud model. To enable transformers to incorporate local structural information, we propose a straightforward solution that exploits the natural structure of point clouds for message passing between neighboring local regions, making their representations more comprehensive and discriminative. Concretely, the proposed module, named Local Context Propagation (LCP), is inserted between two transformer layers. It uses the points shared by overlapping adjacent local regions (statistically shown to be prevalent) as intermediaries, re-weighting the features of these shared points from different local regions before passing them to the next layer. Finally, we design a flexible LCPFormer architecture equipped with the LCP module that is applicable to several different tasks. Experimental results demonstrate that LCPFormer outperforms various transformer-based methods on benchmarks including 3D shape classification and dense prediction tasks such as 3D object detection and semantic segmentation. Code will be released for reproduction. |
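The LCP mechanism summarized in the abstract, re-weighting the features of points shared by overlapping local regions so that information flows between neighbouring regions, can be illustrated with a short sketch. The block below is a minimal, hypothetical PyTorch implementation of that idea only; the class and argument names (`LocalContextPropagation`, `region_feats`, `point_idx`), the tensor layout, and the scoring MLP are assumptions made for illustration, not the authors' released code.

```python
# Minimal sketch of Local Context Propagation, assuming a PyTorch setting.
# All shapes and names below are illustrative assumptions.
import torch
import torch.nn as nn


class LocalContextPropagation(nn.Module):
    """Blends the features of points that appear in several overlapping local regions."""

    def __init__(self, channels: int):
        super().__init__()
        # Hypothetical scoring network: one scalar weight per (region, point) occurrence.
        self.score = nn.Sequential(
            nn.Linear(channels, channels),
            nn.ReLU(inplace=True),
            nn.Linear(channels, 1),
        )

    def forward(self, region_feats: torch.Tensor, point_idx: torch.Tensor, num_points: int) -> torch.Tensor:
        # region_feats: (R, K, C) per-point features inside each of R local regions
        # point_idx:    (R, K) global point index of every slot; regions overlap, so the
        #               same global index can appear in several regions
        # returns:      (N, C) features, one row per global point
        R, K, C = region_feats.shape
        flat_feats = region_feats.reshape(R * K, C)
        flat_idx = point_idx.reshape(R * K)

        # Unnormalised positive weights; shifting by the global max keeps exp() stable
        # and cancels out in the per-point normalisation below.
        logits = self.score(flat_feats).squeeze(-1)          # (R*K,)
        weights = torch.exp(logits - logits.max())

        # Per-point normaliser and weighted feature sum, accumulated with index_add_.
        denom = torch.zeros(num_points, device=region_feats.device)
        denom.index_add_(0, flat_idx, weights)
        merged = torch.zeros(num_points, C, device=region_feats.device)
        merged.index_add_(0, flat_idx, flat_feats * weights.unsqueeze(-1))

        # Points covered by no region stay zero (the clamp only avoids division by zero).
        return merged / denom.clamp(min=1e-6).unsqueeze(-1)


if __name__ == "__main__":
    lcp = LocalContextPropagation(channels=64)
    feats = torch.randn(32, 16, 64)            # 32 regions of 16 points, 64 channels each
    idx = torch.randint(0, 1024, (32, 16))     # global indices into a 1024-point cloud
    out = lcp(feats, idx, num_points=1024)
    print(out.shape)                           # torch.Size([1024, 64])
```

Points that occur in only one region keep a re-scaled copy of their own feature, while shared points blend contributions from every region that contains them; the blended per-point features can then be gathered back into each region before the next transformer layer, which is, in spirit, the propagation step the abstract describes.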
Author | Zhuoxu Huang (Department of Computer Science, Aberystwyth University, Aberystwyth, U.K.); Zhiyou Zhao (Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China); Banghuai Li (School of Electronics Engineering and Computer Science, Peking University, Beijing, China); Jungong Han (Department of Computer Science, University of Sheffield, Sheffield, U.K.) |
CODEN | ITCTEM |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2023 |
DOI | 10.1109/TCSVT.2023.3247506 |
EISSN | 1558-2205 |
ISSN | 1051-8215 |
ORCID | 0000-0002-4698-9200 0000-0003-4361-956X 0009-0001-5015-1929 |
PublicationTitleAbbrev | TCSVT |