PQA-Net: Deep No Reference Point Cloud Quality Assessment via Multi-View Projection

Bibliographic Details
Published in IEEE Transactions on Circuits and Systems for Video Technology, Vol. 31; no. 12; pp. 4645-4660
Main Authors Liu, Qi, Yuan, Hui, Su, Honglei, Liu, Hao, Wang, Yu, Yang, Huan, Hou, Junhui
Format Journal Article
Language English
Published New York IEEE 01.12.2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Abstract Recently, 3D point clouds have become popular because of their capability to represent the real world as an advanced content modality in modern communication systems. In view of their wide applications, especially for immersive communication oriented to human perception, quality metrics for point clouds are essential. Existing point cloud quality evaluations rely on the full original point cloud or a certain portion of it, which severely limits their applications. To overcome this problem, we propose a novel deep learning-based no-reference point cloud quality assessment method, namely PQA-Net. Specifically, PQA-Net consists of a multi-view-based joint feature extraction and fusion (MVFEF) module, a distortion type identification (DTI) module, and a quality vector prediction (QVP) module. The DTI and QVP modules share the features generated by the MVFEF module. Using the distortion type labels, the DTI and MVFEF modules are first pre-trained to initialize the network parameters, based on which the whole network is then jointly trained to finally evaluate the point cloud quality. Experimental results on the Waterloo Point Cloud dataset show that PQA-Net achieves better or equivalent performance compared with state-of-the-art quality assessment methods. The code of the proposed model will be made publicly available to facilitate reproducible research: https://github.com/qdushl/PQA-Net .
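
The abstract describes a shared-feature, multi-task layout: a multi-view feature extraction and fusion (MVFEF) backbone feeds both a distortion type identification (DTI) head and a quality vector prediction (QVP) head, with the MVFEF and DTI parts pre-trained on distortion labels before the whole network is trained jointly. The PyTorch sketch below only illustrates that layout and is not the authors' released implementation (see https://github.com/qdushl/PQA-Net); the toy CNN backbone, the six rendered views, the four distortion classes, and the four-dimensional quality vector are assumptions chosen for the example.

```python
# Minimal sketch of a PQA-Net-style shared-feature multi-task model (illustrative only).
import torch
import torch.nn as nn

class MVFEF(nn.Module):
    """Encode each projected 2D view with a small CNN, then fuse by averaging over views."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(                      # placeholder per-view backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )

    def forward(self, views):                              # views: (B, V, 3, H, W)
        b, v, c, h, w = views.shape
        f = self.encoder(views.reshape(b * v, c, h, w))    # encode every view independently
        return f.reshape(b, v, -1).mean(dim=1)             # simple average-pool fusion

class PQANetSketch(nn.Module):
    def __init__(self, feat_dim=128, n_distortions=4, quality_dim=4):
        super().__init__()
        self.mvfef = MVFEF(feat_dim)
        self.dti = nn.Linear(feat_dim, n_distortions)      # distortion type identification head
        self.qvp = nn.Linear(feat_dim, quality_dim)        # quality vector prediction head

    def forward(self, views):
        feat = self.mvfef(views)                           # feature shared by both heads
        return self.dti(feat), self.qvp(feat)

# Stage 1 would pre-train MVFEF + DTI with the cross-entropy term alone;
# stage 2 trains the whole network jointly with the regression term added.
model = PQANetSketch()
views = torch.randn(2, 6, 3, 224, 224)                     # batch of 2 point clouds, 6 views each
logits, quality = model(views)
loss = nn.CrossEntropyLoss()(logits, torch.tensor([1, 3])) \
     + nn.MSELoss()(quality, torch.rand(2, 4))
loss.backward()
```

Because both heads read the same shared feature, the stage-1 distortion-classification loss initializes the backbone, and the stage-2 joint training then adds the quality objective on top of it, which matches the training strategy outlined in the abstract.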
Author Liu, Hao
Su, Honglei
Wang, Yu
Yang, Huan
Hou, Junhui
Yuan, Hui
Liu, Qi
Author_xml – sequence: 1
  givenname: Qi
  orcidid: 0000-0002-3958-9962
  surname: Liu
  fullname: Liu, Qi
  email: sdqi.liu@gmail.com
  organization: School of Control Science and Engineering, Shandong University, Jinan, China
– sequence: 2
  givenname: Hui
  orcidid: 0000-0001-5212-3393
  surname: Yuan
  fullname: Yuan, Hui
  email: huiyuan@sdu.edu.cn
  organization: School of Control Science and Engineering, Shandong University, Jinan, China
– sequence: 3
  givenname: Honglei
  orcidid: 0000-0001-6144-4930
  surname: Su
  fullname: Su, Honglei
  email: suhonglei@qdu.edu.cn
  organization: School of Electronic Information, Qingdao University, Qingdao, China
– sequence: 4
  givenname: Hao
  orcidid: 0000-0003-0246-2527
  surname: Liu
  fullname: Liu, Hao
  email: liuhaoxb@gmail.com
  organization: School of Information Science and Engineering, Shandong University, Qingdao, China
– sequence: 5
  givenname: Yu
  surname: Wang
  fullname: Wang, Yu
  email: armstrong_wangyu@tju.edu.cn
  organization: School of Electrical and Information Engineering, Tianjin University, Tianjin, China
– sequence: 6
  givenname: Huan
  orcidid: 0000-0001-5810-0248
  surname: Yang
  fullname: Yang, Huan
  email: cathy_huanyang@hotmail.com
  organization: College of Computer Science and Technology, Qingdao University, Qingdao, China
– sequence: 7
  givenname: Junhui
  orcidid: 0000-0003-3431-2021
  surname: Hou
  fullname: Hou, Junhui
  email: jh.hou@cityu.edu.hk
  organization: Department of Computer Science, City University of Hong Kong, Hong Kong
CODEN ITCTEM
CitedBy_id crossref_primary_10_1016_j_eswa_2024_125039
crossref_primary_10_1016_j_displa_2023_102450
crossref_primary_10_1109_TIP_2023_3253252
crossref_primary_10_1007_s11760_024_03444_2
crossref_primary_10_1109_ACCESS_2024_3521093
crossref_primary_10_1109_TBC_2022_3192997
crossref_primary_10_1109_TMM_2022_3158809
crossref_primary_10_1109_TVCG_2023_3282802
crossref_primary_10_3390_s24227383
crossref_primary_10_1109_ACCESS_2024_3383536
crossref_primary_10_1109_TCSVT_2024_3362369
crossref_primary_10_1109_LSP_2024_3452556
crossref_primary_10_32604_cmc_2024_056141
crossref_primary_10_1109_LSP_2023_3264105
crossref_primary_10_1109_TCE_2024_3367539
crossref_primary_10_1145_3550274
crossref_primary_10_1109_TCSVT_2023_3247506
crossref_primary_10_1109_TIM_2025_3541782
crossref_primary_10_1016_j_ins_2022_07_053
crossref_primary_10_1016_j_displa_2024_102882
crossref_primary_10_1016_j_displa_2023_102540
crossref_primary_10_1109_ACCESS_2022_3198995
crossref_primary_10_3390_electronics13010220
crossref_primary_10_1109_TIP_2024_3468893
crossref_primary_10_1016_j_image_2024_117239
crossref_primary_10_1007_s41233_023_00057_4
crossref_primary_10_1109_TCSVT_2024_3350180
crossref_primary_10_1186_s13640_024_00626_3
crossref_primary_10_3390_jimaging10060129
crossref_primary_10_1109_TIM_2025_3529543
crossref_primary_10_1016_j_image_2025_117262
crossref_primary_10_1109_TVCG_2023_3338359
crossref_primary_10_1109_TBC_2023_3311339
crossref_primary_10_1145_3715134
crossref_primary_10_1109_TCSVT_2022_3170588
crossref_primary_10_1109_TCSVT_2024_3410052
crossref_primary_10_1109_TIP_2025_3539465
crossref_primary_10_1109_TETCI_2022_3201619
crossref_primary_10_1109_TIP_2023_3330086
crossref_primary_10_3389_frsip_2024_1420060
crossref_primary_10_1016_j_measurement_2024_114400
crossref_primary_10_1145_3643817
crossref_primary_10_1109_TCSVT_2022_3186894
crossref_primary_10_1007_s00371_024_03352_z
crossref_primary_10_1109_TCE_2024_3423830
crossref_primary_10_1109_TCSVT_2024_3420150
crossref_primary_10_1016_j_cag_2025_104176
crossref_primary_10_1109_TMM_2024_3443634
crossref_primary_10_1016_j_displa_2025_103007
crossref_primary_10_1109_TVCG_2022_3167151
crossref_primary_10_1145_3592786
crossref_primary_10_1016_j_patcog_2025_111361
crossref_primary_10_1016_j_rcim_2024_102863
crossref_primary_10_1109_TMM_2023_3340894
crossref_primary_10_1016_j_aej_2024_02_007
crossref_primary_10_1145_3664199
crossref_primary_10_1109_TBC_2021_3114510
crossref_primary_10_1109_TCSVT_2023_3341622
crossref_primary_10_1016_j_jag_2024_103951
crossref_primary_10_1016_j_measurement_2023_112592
crossref_primary_10_1109_TMM_2024_3407698
crossref_primary_10_1109_TCSVT_2022_3179575
crossref_primary_10_1109_TIM_2023_3290291
crossref_primary_10_1109_TBC_2024_3482173
crossref_primary_10_1109_TIM_2023_3322475
crossref_primary_10_1109_LSP_2022_3198601
crossref_primary_10_37188_lam_2022_035
crossref_primary_10_1109_TPAMI_2024_3422490
crossref_primary_10_1016_j_eswa_2023_122438
crossref_primary_10_1016_j_eswa_2023_122953
crossref_primary_10_1109_TIP_2023_3327003
crossref_primary_10_1109_TMM_2023_3347638
Cites_doi 10.1109/QoMEX.2019.8743313
10.1109/TIP.2012.2221725
10.1017/ATSIP.2019.20
10.1109/ICIP.2019.8803298
10.1109/ICIP.2017.8296925
10.1109/VCIP.2017.8305132
10.1109/ICME.2018.8486512
10.1136/bmj.e4483
10.1016/j.neucom.2020.07.014
10.1109/TBC.2019.2957652
10.1109/QoMEX.2019.8743277
10.1109/LSP.2020.3024065
10.1016/j.neucom.2020.05.086
10.1007/s11831-019-09320-4
10.1109/TIP.2005.859378
10.1109/ICMEW46912.2020.9106005
10.1109/JETCAS.2018.2885981
10.23919/APSIPA.2018.8659653
10.1109/LSP.2019.2963793
10.1109/TIP.2017.2707807
10.1109/JSTSP.2009.2014497
10.1109/LSP.2019.2951533
10.1109/TCSVT.2016.2543039
10.1109/QoMEX48832.2020.9123087
10.1109/TIP.2018.2799331
10.1109/MMSP.2017.8122239
10.1109/QoMEX48832.2020.9123147
10.1109/QoMEX48832.2020.9123076
10.1109/TIP.2017.2774045
10.1109/ACSSC.2003.1292216
10.1111/j.1467-8659.2012.03188.x
10.1109/ICCV.2015.169
10.1109/TIP.2003.819861
10.1016/j.neucom.2020.03.086
10.1145/3240508.3240643
10.1109/QoMEX48832.2020.9123089
10.1109/MMSP48831.2020.9287154
10.1109/ICMEW46912.2020.9106052
10.1109/TCSVT.2020.2966118
10.1109/ICIP40778.2020.9190956
10.1109/LSP.2020.3010128
10.1109/TIP.2019.2936738
10.1007/s11036-020-01570-y
10.1109/VCIP47243.2019.8965861
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
Copyright_xml – notice: Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DBID 97E
RIA
RIE
AAYXX
CITATION
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
DOI 10.1109/TCSVT.2021.3100282
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Electronics & Communications Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Technology Research Database
Computer and Information Systems Abstracts – Academic
Electronics & Communications Abstracts
ProQuest Computer Science Collection
Computer and Information Systems Abstracts
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts Professional
DatabaseTitleList
Technology Research Database
Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
  url: https://proxy.k.utb.cz/login?url=https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Engineering
EISSN 1558-2205
EndPage 4660
ExternalDocumentID 10_1109_TCSVT_2021_3100282
9496633
Genre orig-research
GrantInformation_xml – fundername: Shandong Provincial Natural Science Foundation, China
  grantid: ZR2018PF002
  funderid: 10.13039/501100007129
– fundername: Hong Kong Research Grant Council (RGC)
  grantid: 9042955 (CityU 11202320)
  funderid: 10.13039/501100002920
– fundername: National Natural Science Foundation of China
  grantid: 61871342
  funderid: 10.13039/501100001809
– fundername: Open Project Program of the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University
  grantid: VRLAB2021A01
  funderid: 10.13039/501100011160
– fundername: OPPO Research Fund
GroupedDBID -~X
0R~
29I
4.4
5GY
5VS
6IK
97E
AAJGR
AARMG
AASAJ
AAWTH
ABAZT
ABQJQ
ABVLG
ACGFO
ACGFS
ACIWK
AENEX
AETIX
AGQYO
AGSQL
AHBIQ
AI.
AIBXA
AKJIK
AKQYR
ALLEH
ALMA_UNASSIGNED_HOLDINGS
ASUFR
ATWAV
BEFXN
BFFAM
BGNUA
BKEBE
BPEOZ
CS3
DU5
EBS
EJD
HZ~
H~9
ICLAB
IFIPE
IFJZH
IPLJI
JAVBF
LAI
M43
O9-
OCL
P2P
RIA
RIE
RNS
RXW
TAE
TN5
VH1
AAYXX
CITATION
RIG
7SC
7SP
8FD
JQ2
L7M
L~C
L~D
ID FETCH-LOGICAL-c295t-869dafe949f6f9442de478be34396f8fe4c9d071a6e678a9fed3bbf11bfda3a93
IEDL.DBID RIE
ISSN 1051-8215
IngestDate Sun Jun 29 15:53:30 EDT 2025
Tue Jul 01 00:41:16 EDT 2025
Thu Apr 24 23:04:16 EDT 2025
Wed Aug 27 05:01:32 EDT 2025
IsPeerReviewed true
IsScholarly true
Issue 12
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
LinkModel DirectLink
MergedId FETCHMERGED-LOGICAL-c295t-869dafe949f6f9442de478be34396f8fe4c9d071a6e678a9fed3bbf11bfda3a93
Notes ObjectType-Article-1
SourceType-Scholarly Journals-1
ObjectType-Feature-2
content type line 14
ORCID 0000-0001-6144-4930
0000-0001-5212-3393
0000-0003-0246-2527
0000-0003-3431-2021
0000-0001-5810-0248
0000-0002-3958-9962
PQID 2605705376
PQPubID 85433
PageCount 16
ParticipantIDs crossref_citationtrail_10_1109_TCSVT_2021_3100282
proquest_journals_2605705376
crossref_primary_10_1109_TCSVT_2021_3100282
ieee_primary_9496633
ProviderPackageCode CITATION
AAYXX
PublicationCentury 2000
PublicationDate 2021-12-01
PublicationDateYYYYMMDD 2021-12-01
PublicationDate_xml – month: 12
  year: 2021
  text: 2021-12-01
  day: 01
PublicationDecade 2020
PublicationPlace New York
PublicationPlace_xml – name: New York
PublicationTitle IEEE transactions on circuits and systems for video technology
PublicationTitleAbbrev TCSVT
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
References ref13
ref12
ref15
ref14
ref53
ref55
ref11
ref54
ref10
sheskin (ref49) 2007
liu (ref38) 2020
ref17
yang (ref27) 2021
ref19
ref18
yang (ref56) 2020
ballé (ref42) 2019
ref51
meynet (ref57) 2021
ref50
liu (ref35) 2020
ref46
javaheri (ref65) 2020
ref48
ting (ref61) 2010
ref44
ref43
yang (ref36) 2020
(ref60) 2000
(ref45) 2010
ref8
(ref16) 2020
ref7
ref9
ref4
ref3
ref6
ref5
ref40
ballé (ref41) 2016
wolf (ref30) 2009
ref34
ref37
mekuria (ref52) 2016
ref31
ref33
ref32
ref2
alexiou (ref58) 2020
ref1
kingma (ref47) 2014
ref24
ref23
ref25
ref64
ref20
ref63
ref66
ref22
yang (ref26) 2020
ref21
(ref39) 2006
ref28
torlig (ref29) 2018; 10752
ref62
viola (ref59) 2020
References_xml – ident: ref20
  doi: 10.1109/QoMEX.2019.8743313
– ident: ref50
  doi: 10.1109/TIP.2012.2221725
– ident: ref66
  doi: 10.1017/ATSIP.2019.20
– ident: ref12
  doi: 10.1109/ICIP.2019.8803298
– ident: ref17
  doi: 10.1109/ICIP.2017.8296925
– year: 2020
  ident: ref38
  article-title: Reduced reference perceptual quality model and application to rate control for 3D point cloud compression
  publication-title: arXiv:2011.12688
– year: 2021
  ident: ref27
  article-title: Point cloud distortion quantification based on potential energy for human and machine perception
  publication-title: arXiv:2103.02850
– ident: ref53
  doi: 10.1109/VCIP.2017.8305132
– ident: ref18
  doi: 10.1109/ICME.2018.8486512
– ident: ref48
  doi: 10.1136/bmj.e4483
– ident: ref62
  doi: 10.1016/j.neucom.2020.07.014
– ident: ref11
  doi: 10.1109/TBC.2019.2957652
– year: 2014
  ident: ref47
  article-title: Adam: A method for stochastic optimization
  publication-title: arXiv:1412.6980
– year: 2020
  ident: ref26
  article-title: Inferring point cloud quality via graph similarity
  publication-title: IEEE Transactions on Pattern Analysis and Machine Intelligence
– ident: ref46
  doi: 10.1109/QoMEX.2019.8743277
– year: 2010
  ident: ref45
  publication-title: PhotoScan Agisoft
– ident: ref54
  doi: 10.1109/LSP.2020.3024065
– year: 2020
  ident: ref35
  article-title: Model-based joint bit allocation between geometry and color for video-based 3D point cloud compression
  publication-title: IEEE Trans Multimedia
– ident: ref1
  doi: 10.1016/j.neucom.2020.05.086
– ident: ref9
  doi: 10.1007/s11831-019-09320-4
– ident: ref32
  doi: 10.1109/TIP.2005.859378
– ident: ref37
  doi: 10.1109/ICMEW46912.2020.9106005
– ident: ref10
  doi: 10.1109/JETCAS.2018.2885981
– start-page: 1
  year: 2019
  ident: ref42
  article-title: End-to-end optimized image compression
  publication-title: Proc 5th Int Conf Learn Represent (ICLR)
– ident: ref6
  doi: 10.23919/APSIPA.2018.8659653
– ident: ref8
  doi: 10.1109/LSP.2019.2963793
– year: 2020
  ident: ref65
  article-title: Point cloud rendering after coding: Impacts on subjective and objective quality
  publication-title: IEEE Trans Multimedia
– volume: 10752
  year: 2018
  ident: ref29
  article-title: A novel methodology for quality assessment of voxelized point clouds
  publication-title: Proc SPIE
– ident: ref34
  doi: 10.1109/TIP.2017.2707807
– ident: ref40
  doi: 10.1109/JSTSP.2009.2014497
– year: 2021
  ident: ref57
  publication-title: PCQM
– ident: ref63
  doi: 10.1109/LSP.2019.2951533
– ident: ref55
  doi: 10.1109/TCSVT.2016.2543039
– ident: ref19
  doi: 10.1109/QoMEX48832.2020.9123087
– ident: ref51
  doi: 10.1109/TIP.2018.2799331
– ident: ref13
  doi: 10.1109/MMSP.2017.8122239
– year: 2020
  ident: ref36
  article-title: Predicting the perceptual quality of point cloud: A 3D-to-2D projection-based exploration
  publication-title: IEEE Trans Multimedia
– ident: ref22
  doi: 10.1109/QoMEX48832.2020.9123147
– ident: ref24
  doi: 10.1109/QoMEX48832.2020.9123076
– year: 2020
  ident: ref56
  publication-title: Inferring Point Cloud Quality Via Graph Similarity
– year: 2020
  ident: ref16
  publication-title: Common Test Conditions for Point Cloud Compression
– ident: ref14
  doi: 10.1109/TIP.2017.2774045
– ident: ref33
  doi: 10.1109/ACSSC.2003.1292216
– year: 2020
  ident: ref58
  publication-title: PointSSIM Point Cloud Structural Similarity Metric
– ident: ref44
  doi: 10.1111/j.1467-8659.2012.03188.x
– ident: ref64
  doi: 10.1109/ICCV.2015.169
– ident: ref31
  doi: 10.1109/TIP.2003.819861
– start-page: 1353
  year: 2007
  ident: ref49
  article-title: Spearman's rank-order correlation coefficient
  publication-title: Handbook of Parametric and Nonparametric Statistical Procedures
– start-page: 1
  year: 2016
  ident: ref41
  article-title: Density modeling of images using a generalized normalization transformation
  publication-title: Proc 4th Int Conf Learn Represent (ICLR)
– year: 2009
  ident: ref30
  publication-title: Reference algorithm for computing Peak Signal to Noise Ratio (PSNR) of a video sequence with a constant delay
– year: 2020
  ident: ref59
  publication-title: PCM_RR
– ident: ref2
  doi: 10.1016/j.neucom.2020.03.086
– ident: ref15
  doi: 10.1145/3240508.3240643
– ident: ref21
  doi: 10.1109/QoMEX48832.2020.9123089
– year: 2000
  ident: ref60
  publication-title: Final report from the video quality experts group on the validation of objective models of video quality assessment
– ident: ref23
  doi: 10.1109/MMSP48831.2020.9287154
– year: 2006
  ident: ref39
  publication-title: Vocabulary for Performance and Quality of Service
– ident: ref3
  doi: 10.1109/ICMEW46912.2020.9106052
– ident: ref7
  doi: 10.1109/TCSVT.2020.2966118
– ident: ref25
  doi: 10.1109/ICIP40778.2020.9190956
– ident: ref28
  doi: 10.1109/LSP.2020.3010128
– ident: ref4
  doi: 10.1109/TIP.2019.2936738
– year: 2016
  ident: ref52
  publication-title: Evaluation Criteria for PCC (Point Cloud Compression)
– ident: ref5
  doi: 10.1007/s11036-020-01570-y
– start-page: 209
  year: 2010
  ident: ref61
  publication-title: Confusion Matrix
– ident: ref43
  doi: 10.1109/VCIP47243.2019.8965861
SSID ssj0014847
Score 2.6718812
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Enrichment Source
Index Database
Publisher
StartPage 4645
SubjectTerms Communications systems
deep neural network
Distortion
Feature extraction
Forecasting
Geometry
Image color analysis
Learning systems
Machine learning
Measurement
Modules
multi-task learning
multi-view
Multitasking
No-reference point cloud quality assessment
Point cloud compression
Quality assessment
Three dimensional models
Three-dimensional displays
Title PQA-Net: Deep No Reference Point Cloud Quality Assessment via Multi-View Projection
URI https://ieeexplore.ieee.org/document/9496633
https://www.proquest.com/docview/2605705376
Volume 31
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider IEEE