Learning Generalized Spatial-Temporal Deep Feature Representation for No-Reference Video Quality Assessment
Published in | IEEE Transactions on Circuits and Systems for Video Technology, Vol. 32, No. 4, pp. 1903–1916
Main Authors | Chen, Baoliang; Zhu, Lingyu; Li, Guo; Lu, Fangbo; Fan, Hongfei; Wang, Shiqi
Format | Journal Article
Language | English
Published | New York: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.04.2022
Subjects | Configuration management; Datasets; deep neural networks; Domains; Feature extraction; generalization capability; Image quality; Learning; Nonlinear distortion; Normal distribution; Quality assessment; Representations; Streaming media; temporal aggregation; Training; Video quality assessment; Visual perception
Abstract | In this work, we propose a no-reference video quality assessment method that aims to achieve high generalization capability in cross-content, cross-resolution, and cross-frame-rate quality prediction. In particular, we evaluate the quality of a video by learning effective feature representations in the spatial-temporal domain. In the spatial domain, to tackle resolution and content variations, we impose Gaussian distribution constraints on the quality features. The unified distribution can significantly reduce the domain gap between different video samples, resulting in a more generalized quality feature representation. Along the temporal dimension, inspired by the mechanism of visual perception, we propose a pyramid temporal aggregation module that involves short-term and long-term memory to aggregate frame-level quality. Experiments show that our method outperforms state-of-the-art methods in cross-dataset settings and achieves comparable performance in intra-dataset configurations, demonstrating the high generalization capability of the proposed method. The code is released at https://github.com/Baoliang93/GSTVQA
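The two components named in the abstract can be illustrated with a compact sketch. Below is a minimal, hypothetical PyTorch rendering of (a) a Gaussian distribution constraint on quality features, expressed as the closed-form KL divergence between the empirical per-dimension feature statistics and a standard normal, and (b) a pyramid temporal aggregation module combining multi-scale average pooling (short-term memory) with a GRU summary (long-term memory). It is assembled only from the abstract's description; the function and class names, the pooling scales, and all hyperparameters are assumptions, not the authors' released implementation (see the GSTVQA repository for that).

```python
# Hypothetical sketch based on the abstract's description; names and
# hyperparameters are assumptions, not the authors' released code.
import torch
import torch.nn as nn


def gaussian_constraint_loss(features: torch.Tensor) -> torch.Tensor:
    """Encourage per-dimension feature statistics to match N(0, 1).

    features: (batch, dim) quality features.
    Returns the closed-form KL divergence KL(N(mu, var) || N(0, 1))
    summed over feature dimensions.
    """
    mu = features.mean(dim=0)
    var = features.var(dim=0, unbiased=False) + 1e-6  # avoid log(0)
    kl = 0.5 * (var + mu.pow(2) - 1.0 - var.log())
    return kl.sum()


class PyramidTemporalAggregation(nn.Module):
    """Aggregate frame-level quality features over time.

    Short-term memory: average pooling over temporal windows at several
    scales (a temporal pyramid). Long-term memory: a GRU whose final
    hidden state summarizes the whole frame sequence.
    """

    def __init__(self, feat_dim: int, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.gru = nn.GRU(feat_dim, feat_dim, batch_first=True)
        fused_dim = feat_dim * (len(scales) + 1)
        self.regressor = nn.Linear(fused_dim, 1)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (batch, time, dim)
        pooled = []
        for s in self.scales:
            # Split the sequence into s temporal chunks, mean-pool each,
            # then average the chunk summaries (short-term, scale s).
            chunks = torch.chunk(frame_feats, s, dim=1)
            level = torch.stack([c.mean(dim=1) for c in chunks], dim=1)
            pooled.append(level.mean(dim=1))
        _, h_n = self.gru(frame_feats)             # long-term memory
        pooled.append(h_n.squeeze(0))              # (batch, dim)
        fused = torch.cat(pooled, dim=-1)          # (batch, fused_dim)
        return self.regressor(fused).squeeze(-1)   # quality score per video
```

In training, `gaussian_constraint_loss` would plausibly be added to the quality-regression loss with a small weight, so the distribution constraint shapes the features without dominating the quality fit; the weighting is likewise an assumption here.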
Author details |
– Baoliang Chen (ORCID 0000-0003-4884-6956), Department of Computer Science, City University of Hong Kong, Hong Kong; blchen6-c@my.cityu.edu.hk
– Lingyu Zhu, Department of Computer Science, City University of Hong Kong, Hong Kong; lingyzhu@cityu.edu.hk
– Guo Li, Kingsoft Cloud, Beijing, China; liguo136009@foxmail.com
– Fangbo Lu, Kingsoft Cloud, Beijing, China
– Hongfei Fan, Kingsoft Cloud, Beijing, China; fanhongfei@kingsoft.com
– Shiqi Wang (ORCID 0000-0002-3583-959X), Department of Computer Science, City University of Hong Kong, Hong Kong; shiqwang@cityu.edu.hk
CODEN | ITCTEM |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2022 |
DOI | 10.1109/TCSVT.2021.3088505 |
Discipline | Engineering |
EISSN | 1558-2205 |
Genre | orig-research |
Funding |
– Applied Research Grant (ARG), grant 9667192
– Hong Kong Research Grants Council, Early Career Scheme (RGC ECS), grant 21211018 (funder ID 10.13039/501100002920)
– City University of Hong Kong, Teaching Development Grants (CITYU TDG), grant 6000713 (funder ID 10.13039/100007567)
– National Natural Science Foundation of China, grant 62022002 (funder ID 10.13039/501100001809)
– General Research Fund (GRF), grant 11203220 (funder ID 10.13039/501100002920)
ISSN | 1051-8215 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
URI | https://ieeexplore.ieee.org/document/9452150 https://www.proquest.com/docview/2647425505 |