Joint‐Learning: A Robust Segmentation Method for 3D Point Clouds Under Label Noise

Bibliographic Details
Published in Computer animation and virtual worlds Vol. 36; no. 3
Main Authors Zhang, Mengyao, Zhou, Jie, Miao, Tingyun, Zhao, Yong, Si, Xin, Zhang, Jingliang
Format Journal Article
Language English
Published Hoboken, USA: John Wiley & Sons, Inc, 01.05.2025
Wiley Subscription Services, Inc
Abstract Most point cloud segmentation methods rely on clean datasets and are easily affected by label noise. We present a novel method called Joint‐learning, which is the first attempt to apply a dual‐network framework to point cloud segmentation with noisy labels. Two networks are trained simultaneously, and each network selects clean samples to update its peer network. This communication lets the two networks exchange the knowledge they have learned, providing good robustness and generalization ability. In addition, adaptive sample selection is proposed to maximize learning capacity: when the accuracies of both networks stop improving, the selection rate is reduced, which yields cleaner selected samples. To further reduce the impact of noisy labels, we provide a joint label correction algorithm that rectifies the labels of unselected samples using the two networks' predictions. We conduct extensive experiments on the S3DIS and ScanNet‐v2 datasets under different types and rates of noise. Both quantitative and qualitative results verify the reasonableness and effectiveness of the proposed method. Our method substantially outperforms the state‐of‐the‐art methods and achieves the best results in all noise settings, with an average performance improvement of more than 7.43% and a maximum of 11.42%. In summary, the proposed method for point cloud segmentation with noisy labels consists of a dual‐network framework, adaptive sample selection, and joint label correction.
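The abstract describes the training procedure only at a high level. The minimal PyTorch sketch below illustrates one possible reading of it, not the authors' implementation: two networks process the same batch, each keeps its small-loss points as "clean" and hands them to its peer for the weight update, and points that neither network keeps are relabeled from the averaged predictions when those predictions are confident. The make_net backbone, the joint_step helper, the default select_rate and conf_thresh values, and the decaying selection-rate schedule are illustrative assumptions, not details taken from the paper.

# Minimal sketch (assumed code, not the authors' release) of a dual-network
# co-training step with small-loss sample selection and joint label correction.
import torch
import torch.nn as nn
import torch.nn.functional as F


def make_net(in_dim=9, num_classes=13):
    # Stand-in per-point classifier; the paper would use a point cloud backbone here.
    return nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, num_classes))


def joint_step(net_a, net_b, opt_a, opt_b, feats, noisy_labels,
               select_rate=0.7, conf_thresh=0.9):
    """One dual-network update on a batch of per-point features and noisy labels."""
    with torch.no_grad():
        logits_a, logits_b = net_a(feats), net_b(feats)
        loss_a = F.cross_entropy(logits_a, noisy_labels, reduction="none")
        loss_b = F.cross_entropy(logits_b, noisy_labels, reduction="none")
        k = max(1, int(select_rate * feats.size(0)))
        sel_by_a = torch.topk(loss_a, k, largest=False).indices  # A's clean picks train B
        sel_by_b = torch.topk(loss_b, k, largest=False).indices  # B's clean picks train A

        # Joint label correction: points neither network selected are relabeled with
        # the averaged prediction when that prediction is confident enough.
        probs = 0.5 * (F.softmax(logits_a, dim=-1) + F.softmax(logits_b, dim=-1))
        conf, pseudo = probs.max(dim=-1)
        unselected = torch.ones(feats.size(0), dtype=torch.bool)
        unselected[sel_by_a] = False
        unselected[sel_by_b] = False
        labels = noisy_labels.clone()
        relabel = unselected & (conf > conf_thresh)
        labels[relabel] = pseudo[relabel]

    # Each network is updated with the samples chosen by its peer.
    opt_a.zero_grad()
    F.cross_entropy(net_a(feats[sel_by_b]), labels[sel_by_b]).backward()
    opt_a.step()
    opt_b.zero_grad()
    F.cross_entropy(net_b(feats[sel_by_a]), labels[sel_by_a]).backward()
    opt_b.step()


if __name__ == "__main__":
    # Toy usage on random per-point features (13 classes, e.g. S3DIS-style labels).
    torch.manual_seed(0)
    net_a, net_b = make_net(), make_net()
    opt_a = torch.optim.Adam(net_a.parameters(), lr=1e-3)
    opt_b = torch.optim.Adam(net_b.parameters(), lr=1e-3)
    feats, noisy_labels = torch.randn(4096, 9), torch.randint(0, 13, (4096,))
    for epoch in range(3):
        # The paper lowers the selection rate adaptively once both accuracies plateau;
        # a simple decaying schedule stands in for that rule here.
        joint_step(net_a, net_b, opt_a, opt_b, feats, noisy_labels,
                   select_rate=max(0.5, 0.9 - 0.1 * epoch))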
Author_xml – sequence: 1
  givenname: Mengyao
  surname: Zhang
  fullname: Zhang, Mengyao
  organization: Ocean University of China
– sequence: 2
  givenname: Jie
  surname: Zhou
  fullname: Zhou, Jie
  organization: Ocean University of China
– sequence: 3
  givenname: Tingyun
  surname: Miao
  fullname: Miao, Tingyun
  organization: Ocean University of China
– sequence: 4
  givenname: Yong
  orcidid: 0009-0002-0232-2284
  surname: Zhao
  fullname: Zhao, Yong
  email: zhaoyong@ouc.edu.cn
  organization: Ocean University of China
– sequence: 5
  givenname: Xin
  surname: Si
  fullname: Si, Xin
  organization: Xiamen University of Technology
– sequence: 6
  givenname: Jingliang
  surname: Zhang
  fullname: Zhang, Jingliang
  organization: Ocean University of China
ContentType Journal Article
Copyright 2025 John Wiley & Sons Ltd.
DOI 10.1002/cav.70038
Discipline Visual Arts
EISSN 1546-427X
Genre researchArticle
GrantInformation_xml – fundername: National Natural Science Foundation of China
  funderid: 62172005
– fundername: Natural Science Foundation of Shandong Province
  funderid: ZR2018MF006
– fundername: Open Project of the State Key Lab of CAD&CG, Zhejiang University
  funderid: A2228
– fundername: Qingdao Natural Science Foundation
  funderid: 23‐2‐1‐158‐zyyd‐jch
ISSN 1546-4261
IsPeerReviewed true
IsScholarly true
Issue 3
Language English
ORCID 0009-0002-0232-2284
PageCount 12
PublicationDate May/June 2025
PublicationPlace Hoboken, USA
PublicationTitle Computer animation and virtual worlds
PublicationYear 2025
Publisher John Wiley & Sons, Inc
Wiley Subscription Services, Inc
SubjectTerms adaptive sample selection
Adaptive sampling
Datasets
dual‐network framework
Image segmentation
joint label correction
label noise
Labels
Learning
Networks
Noise
point cloud segmentation
Three dimensional models
Title Joint‐Learning: A Robust Segmentation Method for 3D Point Clouds Under Label Noise
URI https://onlinelibrary.wiley.com/doi/abs/10.1002%2Fcav.70038
https://www.proquest.com/docview/3228987674
Volume 36