Deep Virtual Reality Image Quality Assessment With Human Perception Guider for Omnidirectional Image
Published in: IEEE Transactions on Circuits and Systems for Video Technology (TCSVT), Vol. 30, No. 4, pp. 917–928
Main Authors: Kim, Hak Gu; Lim, Heoun-Taek; Ro, Yong Man
Format: Journal Article
Language: English
Published: New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.04.2020
Subjects: Adversarial learning; Deep learning; Distortion; Image coding; Image quality; Machine learning; Measurement; Omnidirectional image; Perception; Performance evaluation; Quality; Quality assessment; Virtual networks; Virtual reality; Visualization
Abstract: In this paper, we propose a novel deep learning-based virtual reality image quality assessment method that automatically predicts the visual quality of an omnidirectional image. To assess the visual quality of viewing an omnidirectional image, we propose deep networks consisting of a virtual reality (VR) quality score predictor and a human perception guider. The proposed VR quality score predictor learns the positional and visual characteristics of the omnidirectional image by encoding the positional feature and visual feature of each patch on the omnidirectional image. With the encoded positional and visual features, a patch weight and a patch quality score are estimated. Then, by aggregating the weights and scores of all patches, the image quality score is predicted. The proposed human perception guider evaluates the predicted quality score by referring to the human subjective score (i.e., the ground truth obtained from subjects) using adversarial learning. With adversarial learning, the VR quality score predictor is trained to predict the quality score accurately in order to deceive the guider, while the human perception guider is trained to precisely distinguish between the predictor's score and the ground-truth subjective score. To verify the performance of the proposed method, we conducted comprehensive subjective experiments and evaluated its performance. The experimental results show that the proposed method outperforms existing two-dimensional image quality models and the state-of-the-art image quality models for omnidirectional images.
Authors:
– Kim, Hak Gu (hgkim0331@kaist.ac.kr, ORCID 0000-0003-2137-934X), Image and Video Systems Lab, School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
– Lim, Heoun-Taek (ingheoun@kaist.ac.kr, ORCID 0000-0002-0267-395X), Image and Video Systems Lab, School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
– Ro, Yong Man (ymro@ee.kaist.ac.kr, ORCID 0000-0001-5306-6853), Image and Video Systems Lab, School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, South Korea
CODEN: ITCTEM
Copyright: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2020
DOI: 10.1109/TCSVT.2019.2898732
Discipline: Engineering
EISSN: 1558-2205
Genre: Original research
Funding:
– Korean Government (MSIT), grant 2017-0-00780 (Development of VR sickness reduction technique for enhanced sensitivity broadcasting)
– Institute of Information and Communications Technology Planning and Evaluation (IITP)
ISSN: 1051-8215
Peer Reviewed: Yes
Scholarly: Yes
License: https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html; https://doi.org/10.15223/policy-029; https://doi.org/10.15223/policy-037
Online Access: https://ieeexplore.ieee.org/document/8638985; https://www.proquest.com/docview/2387070071