Real-Time Scene-Aware LiDAR Point Cloud Compression Using Semantic Prior Representation

Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 8, pp. 5623–5637
Main Authors Zhao, Lili; Ma, Kai-Kuang; Liu, Zhili; Yin, Qian; Chen, Jianwen
Format Journal Article
Language English
Published New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2022
Subjects
Abstract Existing LiDAR point cloud compression (PCC) methods tend to treat compression purely as a fidelity problem, without sufficiently addressing its machine-perception aspect. The latter arises for decoder agents that only need to conduct scene-understanding tasks, such as computing localization information. To tackle this challenge, a novel LiDAR PCC system is proposed to compress the point cloud geometry; it contains a back channel that allows the decoder to issue such a request to the encoder. The key to our PCC method lies in the proposed semantic prior representation (SPR) and its lossy encoding algorithm with variable precision for generating the final bitstream; the entire process is fast and achieves real-time performance. Our SPR is a compact and effective representation of the three-dimensional (3D) input point clouds, and it consists of labels, predictions, and residuals. This information is generated by first applying scene-aware object segmentation individually to a set of 2D range images (frames), which are produced from the 3D point clouds via a projection process. Based on the generated labels, the pixels associated with moving objects are treated as noise and removed, not only to save transmission bit budget but also, more importantly, to improve the accuracy of the localization computed at the decoder. Experimental results on a commonly-used test dataset show that the proposed system outperforms MPEG's G-PCC (TMC13-v14.0) over a large bitrate range. The performance gap becomes even larger when more and/or larger moving objects are present in the input point clouds.
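The abstract describes projecting the 3D LiDAR point clouds onto 2D range images and then discarding pixels labelled as moving objects before encoding. The following Python sketch illustrates that pre-processing idea only; it is not the authors' implementation, and the sensor parameters (64 beams, a roughly 3° to -25° vertical field of view), image size, and function names are illustrative assumptions.

import numpy as np

# Illustrative sketch (assumed parameters, not the paper's code): spherical
# projection of one LiDAR sweep into a 2D range image, then masking of pixels
# whose semantic label marks a moving object.
H, W = 64, 1024                        # range-image rows (beams) and columns
FOV_UP_DEG, FOV_DOWN_DEG = 3.0, -25.0  # assumed vertical field of view

def project_to_range_image(points):
    """points: (N, 3) array of x, y, z coordinates in the sensor frame."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)                     # range of each point
    yaw = np.arctan2(y, x)                                 # azimuth angle
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    # Map angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W                      # column index
    fov = np.radians(FOV_UP_DEG - FOV_DOWN_DEG)
    v = (1.0 - (pitch - np.radians(FOV_DOWN_DEG)) / fov) * H  # row index

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    range_image = np.zeros((H, W), dtype=np.float32)
    range_image[v, u] = r          # later points simply overwrite earlier ones
    return range_image, (v, u)

def drop_moving_objects(range_image, pixel_labels, moving_class_ids):
    """Zero out pixels whose per-pixel semantic label is a moving-object class."""
    mask = np.isin(pixel_labels, list(moving_class_ids))
    cleaned = range_image.copy()
    cleaned[mask] = 0.0            # removed pixels then cost no bits downstream
    return cleaned

A typical call would build the range image with project_to_range_image(points) and then apply drop_moving_objects(range_image, labels, {CAR_ID, PEDESTRIAN_ID}), where the class identifiers are placeholders for whatever label set the segmentation stage produces; the labels, predictions, and residuals that make up the SPR would then be formed from such masked images.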
Author Zhao, Lili
Yin, Qian
Ma, Kai-Kuang
Chen, Jianwen
Liu, Zhili
Author_xml – sequence: 1
  givenname: Lili
  orcidid: 0000-0002-5182-7230
  surname: Zhao
  fullname: Zhao, Lili
  email: zllmail@foxmail.com
  organization: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
– sequence: 2
  givenname: Kai-Kuang
  orcidid: 0000-0003-2932-5709
  surname: Ma
  fullname: Ma, Kai-Kuang
  email: ekkma@ntu.edu.sg
  organization: School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
– sequence: 3
  givenname: Zhili
  surname: Liu
  fullname: Liu, Zhili
  email: liuzhili@yihang.ai
  organization: Yihang Intellitech Company Ltd., Beijing, China
– sequence: 4
  givenname: Qian
  surname: Yin
  fullname: Yin, Qian
  email: yinqian_xixi@163.com
  organization: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
– sequence: 5
  givenname: Jianwen
  orcidid: 0000-0002-5987-148X
  surname: Chen
  fullname: Chen, Jianwen
  email: jianwen.chen@ieee.org
  organization: School of Information and Communication Engineering, University of Electronic Science and Technology of China, Chengdu, China
CODEN ITCTEM
CitedBy_id crossref_primary_10_1109_TITS_2024_3409907
crossref_primary_10_1145_3715916
crossref_primary_10_3390_s24103185
crossref_primary_10_3390_s23052398
crossref_primary_10_1109_TCSVT_2022_3196550
crossref_primary_10_3390_infrastructures7040049
crossref_primary_10_1109_TCSVT_2023_3276788
crossref_primary_10_1109_TBC_2022_3162406
crossref_primary_10_3390_s25061660
crossref_primary_10_1109_TCSVT_2023_3309902
crossref_primary_10_1109_TCSVT_2022_3211084
crossref_primary_10_1109_TCSVT_2024_3496489
Cites_doi 10.1109/TCSVT.2021.3069838
10.1109/TCSVT.2021.3098832
10.1109/TIP.2019.2936738
10.1109/TCSVT.2021.3100279
10.1109/TIP.2019.2957853
10.1109/TRO.2017.2705103
10.1109/ICRA.2019.8793585
10.1109/ICCV.2019.00939
10.1109/LSP.2020.2965322
10.1109/TCSVT.2021.3101807
10.1109/IROS40897.2019.8967704
10.1109/ICRA.2019.8794264
10.1109/TBC.2019.2957652
10.1109/VCIP47243.2019.8965783
10.1109/TCSVT.2020.3026046
10.1109/LRA.2019.2900747
10.1109/IROS40897.2019.8967762
10.1109/TMM.2018.2859591
10.1109/DCC.2018.00067
10.1109/34.88573
10.1109/CVPR.2012.6248074
10.1109/DCC47342.2020.00015
10.1109/IROS.2006.282246
10.1109/ICME.2018.8486481
10.1109/LRA.2020.3010207
10.1109/CVPR.2018.00938
10.1109/CVPR.2017.691
10.1109/TIP.2017.2707807
10.1109/3DV.2018.00017
10.1109/TPAMI.2019.2926463
10.1109/ICIP.2019.8803690
10.1109/ACCESS.2019.2935253
10.1109/DCC47342.2020.00082
10.1109/TCSVT.2020.3015901
10.1109/IROS.2016.7759050
10.1109/ICRA.2011.5980567
10.1109/TCSVT.2021.3051377
10.1109/ICRA.2012.6224647
10.1109/TITS.2019.2956066
10.1109/MRA.2006.1638022
10.1109/TPAMI.2014.2316828
10.1145/3177853
10.1109/VCIP.2018.8698661
10.1109/TCSVT.2016.2543039
10.1177/0278364913491297
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022
DOI 10.1109/TCSVT.2022.3145513
DatabaseName IEEE All-Society Periodicals Package (ASPP) 2005–Present
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
Discipline Engineering
EISSN 1558-2205
EndPage 5637
ExternalDocumentID 10_1109_TCSVT_2022_3145513
9690112
Genre orig-research
GrantInformation_xml – fundername: Nanyang Technological University & Wallenberg AI, Autonomous Systems and Software Program (NTU-WASP) Joint Project
  grantid: M4082184
  funderid: 10.13039/501100001475
– fundername: Sichuan Science and Technology Program
  grantid: 2019YJ0190; 2020YFG0149
  funderid: 10.13039/100012542
ISSN 1051-8215
IsPeerReviewed true
IsScholarly true
Issue 8
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
ORCID 0000-0002-5182-7230
0000-0002-5987-148X
0000-0003-2932-5709
PQID 2697569553
PQPubID 85433
PageCount 15
PublicationCentury 2000
PublicationDate 2022-08-01
PublicationDateYYYYMMDD 2022-08-01
PublicationDate_xml – month: 08
  year: 2022
  text: 2022-08-01
  day: 01
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2022
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref57
ref12
ref56
ref15
ref14
ref53
(ref5) 2018
ref11
ref10
ref17
ref16
ref19
ref18
Grupp (ref58) 2017
(ref2) 2017
(ref37) 2021
ref51
ref50
ref46
ref48
(ref39) 2020
ref47
ref42
ref41
ref44
(ref45) 2019
ref43
ref49
ref8
ref7
ref9
ref6
ref40
Zhou (ref55) 2018
ref35
ref34
(ref38) 2020
ref36
ref31
ref30
ref33
ref32
ref1
(ref25) 2020
(ref54) 2021
(ref4) 2017
ref24
ref23
ref26
(ref52) 2021
ref20
ref22
ref21
ref28
ref27
ref29
(ref3) 2017
References_xml – year: 2018
  ident: ref55
  article-title: Open3D: A modern library for 3D data processing
  publication-title: arXiv:1801.09847
– ident: ref14
  doi: 10.1109/TCSVT.2021.3069838
– ident: ref10
  doi: 10.1109/TCSVT.2021.3098832
– ident: ref12
  doi: 10.1109/TIP.2019.2936738
– ident: ref8
  doi: 10.1109/TCSVT.2021.3100279
– ident: ref7
  doi: 10.1109/TIP.2019.2957853
– ident: ref56
  doi: 10.1109/TRO.2017.2705103
– volume-title: Point Cloud Compression Test Model for Category 1 V0
  year: 2017
  ident: ref2
– ident: ref47
  doi: 10.1109/ICRA.2019.8793585
– ident: ref44
  doi: 10.1109/ICCV.2019.00939
– ident: ref16
  doi: 10.1109/LSP.2020.2965322
– ident: ref6
  doi: 10.1109/TCSVT.2021.3101807
– ident: ref24
  doi: 10.1109/IROS40897.2019.8967704
– ident: ref21
  doi: 10.1109/ICRA.2019.8794264
– volume-title: evo: Python Package for the Evaluation of Odometry and SLAM
  year: 2017
  ident: ref58
– ident: ref15
  doi: 10.1109/TBC.2019.2957652
– ident: ref35
  doi: 10.1109/VCIP47243.2019.8965783
– ident: ref11
  doi: 10.1109/TCSVT.2020.3026046
– volume-title: Point Cloud Compression Category 13 Reference Software, TMC 13 Vesion 14.0
  year: 2021
  ident: ref52
– ident: ref18
  doi: 10.1109/LRA.2019.2900747
– volume-title: FPZIP Version 1.3.0
  year: 2019
  ident: ref45
– ident: ref43
  doi: 10.1109/IROS40897.2019.8967762
– ident: ref17
  doi: 10.1109/TMM.2018.2859591
– ident: ref32
  doi: 10.1109/DCC.2018.00067
– ident: ref57
  doi: 10.1109/34.88573
– volume-title: An Extremely Fast Lossless Compression Algorithm
  year: 2020
  ident: ref38
– ident: ref51
  doi: 10.1109/CVPR.2012.6248074
– ident: ref29
  doi: 10.1109/DCC47342.2020.00015
– ident: ref40
  doi: 10.1109/IROS.2006.282246
– ident: ref28
  doi: 10.1109/ICME.2018.8486481
– ident: ref36
  doi: 10.1109/LRA.2020.3010207
– ident: ref41
  doi: 10.1109/CVPR.2018.00938
– volume-title: PCC Test Model Category 2 V0
  year: 2017
  ident: ref3
– volume-title: The bzip2 Compression Program
  year: 2020
  ident: ref39
– ident: ref46
  doi: 10.1109/CVPR.2017.691
– ident: ref31
  doi: 10.1109/TIP.2017.2707807
– ident: ref50
  doi: 10.1109/3DV.2018.00017
– volume-title: G-PCC TMC13v14 Performance Evaluation and Anchor Results
  year: 2021
  ident: ref54
– ident: ref48
  doi: 10.1109/TPAMI.2019.2926463
– ident: ref34
  doi: 10.1109/ICIP.2019.8803690
– ident: ref20
  doi: 10.1109/ACCESS.2019.2935253
– ident: ref30
  doi: 10.1109/DCC47342.2020.00082
– volume-title: PCC Test Model Category 3 V0
  year: 2017
  ident: ref4
– ident: ref13
  doi: 10.1109/TCSVT.2020.3015901
– ident: ref42
  doi: 10.1109/IROS.2016.7759050
– ident: ref53
  doi: 10.1109/ICRA.2011.5980567
– ident: ref9
  doi: 10.1109/TCSVT.2021.3051377
– ident: ref27
  doi: 10.1109/ICRA.2012.6224647
– volume-title: G-PCC Codec Description
  year: 2021
  ident: ref37
– ident: ref19
  doi: 10.1109/TITS.2019.2956066
– ident: ref22
  doi: 10.1109/MRA.2006.1638022
– ident: ref49
  doi: 10.1109/TPAMI.2014.2316828
– ident: ref23
  doi: 10.1145/3177853
– volume-title: PCC Test Model Category 13 V2
  year: 2018
  ident: ref5
– ident: ref33
  doi: 10.1109/VCIP.2018.8698661
– ident: ref1
  doi: 10.1109/TCSVT.2016.2543039
– volume-title: Common Test Conditions for PCC
  year: 2020
  ident: ref25
– ident: ref26
  doi: 10.1177/0278364913491297
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 5623
SubjectTerms Algorithms
Cloud computing
Coders
Encoding
Geometry
Image coding
Image segmentation
Labels
Laser radar
LiDAR
Localization
Object motion
Point cloud compression
Real time
Real-time systems
Representations
Semantics
Three dimensional models
Three-dimensional displays
Title Real-Time Scene-Aware LiDAR Point Cloud Compression Using Semantic Prior Representation
URI https://ieeexplore.ieee.org/document/9690112
https://www.proquest.com/docview/2697569553
Volume 32