Learning to Rank for Blind Image Quality Assessment
Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 26, No. 10, pp. 2275-2290
Main Authors | Fei Gao, Dacheng Tao, Xinbo Gao, Xuelong Li
Format | Journal Article
Language | English
Published | United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.10.2015
ISSN | 2162-237X (print); 2162-2388 (electronic)
DOI | 10.1109/TNNLS.2014.2377181
Abstract | Blind image quality assessment (BIQA) aims to predict perceptual image quality scores without access to reference images. State-of-the-art BIQA methods typically require subjects to score a large number of images to train a robust model. However, subjective quality scores are imprecise, biased, and inconsistent, and it is challenging to obtain a large-scale database, or to extend existing databases, because of the inconvenience of collecting images, training the subjects, conducting subjective experiments, and realigning human quality evaluations. To combat these limitations, this paper explores and exploits preference image pairs (PIPs), such as "the quality of image $I_a$ is better than that of image $I_b$", for training a robust BIQA model. The preference label, representing the relative quality of two images, is generally precise and consistent, and is not sensitive to image content, distortion type, or subject identity; such PIPs can be generated at very low cost. The proposed BIQA method is one of learning to rank. We first formulate the problem of learning the mapping from image features to the preference label as one of classification. In particular, we investigate the use of a multiple kernel learning algorithm based on group lasso to provide a solution. A simple but effective strategy for estimating perceptual image quality scores is then presented. Experiments show that the proposed BIQA method is highly effective and achieves performance comparable with that of state-of-the-art BIQA algorithms. Moreover, the proposed method can easily be extended to new distortion categories.
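The abstract casts BIQA as pairwise learning to rank: each preference image pair (PIP) becomes a labeled training example, so ranking reduces to binary classification on feature differences. The sketch below illustrates that reduction under stated assumptions: the random 10-D vectors stand in for real image-quality features, and a plain linear SVM replaces the paper's group-lasso multiple kernel learning. It is an illustration of the idea, not the authors' implementation.

```python
# Minimal sketch: pairwise learning to rank from preference image pairs (PIPs).
# The random feature vectors below are a hypothetical stand-in for NSS-style
# BIQA features; LinearSVC stands in for the paper's group-lasso MKL.
import numpy as np
from sklearn.svm import LinearSVC

def make_pip_dataset(features, pairs):
    """Turn preference pairs into a binary classification problem.

    features: (n_images, d) array of per-image quality features.
    pairs: list of (a, b) index tuples meaning "image a looks better than b".
    Each pair yields two symmetric examples, one for each ordering of the
    feature difference, so the decision boundary passes through the origin.
    """
    X, y = [], []
    for a, b in pairs:
        diff = features[a] - features[b]
        X.append(diff)
        y.append(+1)    # a preferred over b
        X.append(-diff)
        y.append(-1)    # b not preferred over a
    return np.asarray(X), np.asarray(y)

# toy example: 5 images with random 10-D features, 4 labeled preferences
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 10))
pips = [(0, 1), (0, 2), (3, 2), (3, 4)]

X, y = make_pip_dataset(feats, pips)
clf = LinearSVC(C=1.0).fit(X, y)

# at test time, the signed decision value orders any two images:
# a positive value means the first image of the pair is predicted better
print(clf.decision_function([feats[0] - feats[4]]))
```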
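The abstract only promises a "simple but effective strategy" for converting the pairwise model into perceptual quality scores, without spelling it out. One plausible instantiation (an assumption for illustration, not necessarily the authors' method) is to compare the test image against anchor images with known subjective scores and map the fraction of predicted "wins" onto the anchors' score range:

```python
# Hypothetical score-estimation sketch: rank a test image against scored
# anchors using a pairwise preference classifier (see the previous sketch).
import numpy as np
from sklearn.svm import LinearSVC

def estimate_score(test_feat, anchor_feats, anchor_scores, clf):
    """Score a test image by the fraction of anchors it is predicted to beat."""
    diffs = test_feat - anchor_feats                  # (m, d) feature differences
    frac = (clf.decision_function(diffs) > 0).mean()  # share of predicted "wins"
    lo, hi = anchor_scores.min(), anchor_scores.max()
    return lo + frac * (hi - lo)                      # map rank onto score scale

# synthetic demo: quality is a linear function of the (hypothetical) features
rng = np.random.default_rng(1)
w_true = rng.normal(size=10)
anchors = rng.normal(size=(20, 10))
scores = anchors @ w_true                             # ground-truth anchor scores

# train the pairwise classifier on feature differences, labeled +1 when the
# first image of the pair has the higher ground-truth score
pairs = [(i, j) for i in range(20) for j in range(20) if i != j]
X = np.array([anchors[i] - anchors[j] for i, j in pairs])
y = np.sign([scores[i] - scores[j] for i, j in pairs])
clf = LinearSVC(C=1.0).fit(X, y)

test = rng.normal(size=10)
print(estimate_score(test, anchors, scores, clf), test @ w_true)
```

With enough anchors the win fraction behaves like a rank statistic, so it preserves the ordering the pairwise model learned, although the absolute scale is only as reliable as the anchors' subjective scores.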
Author | Gao, Fei; Tao, Dacheng; Gao, Xinbo; Li, Xuelong
Author_xml |
– Fei Gao (gaofeihifly@gmail.com): Video and Image Processing System Laboratory, School of Electronic Engineering, Xidian University, Xi'an, P. R. China
– Dacheng Tao (dacheng.tao@uts.edu.au): Centre for Quantum Computation and Intelligent Systems, Faculty of Engineering and Information Technology, University of Technology, Sydney, 235 Jones Street, Ultimo, Australia
– Xinbo Gao (xbgao@ieee.org): State Key Laboratory of Integrated Services Networks, School of Electronic Engineering, Xidian University, Xi'an, China
– Xuelong Li (xuelong_li@opt.ac.cn): Center for OPTical IMagery Analysis and Learning (OPTIMAL), State Key Laboratory of Transient Optics and Photonics, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, P. R. China
CODEN | ITNNAL |
CitedBy_id | crossref_primary_10_1109_JSTSP_2016_2632422 crossref_primary_10_1049_iet_ipr_2018_6485 crossref_primary_10_1109_TIP_2018_2869688 crossref_primary_10_1146_annurev_vision_100419_120301 crossref_primary_10_1109_ACCESS_2020_3033122 crossref_primary_10_1016_j_neucom_2021_08_048 crossref_primary_10_1016_j_sigpro_2016_01_019 crossref_primary_10_1016_j_neucom_2023_126437 crossref_primary_10_1109_TIP_2019_2942514 crossref_primary_10_1016_j_patrec_2018_10_012 crossref_primary_10_1109_TNNLS_2015_2511069 crossref_primary_10_1016_j_optcom_2017_06_102 crossref_primary_10_1109_TNNLS_2021_3127720 crossref_primary_10_1016_j_jvcir_2015_09_009 crossref_primary_10_1109_TIP_2017_2708503 crossref_primary_10_1109_TMM_2019_2894958 crossref_primary_10_1007_s11432_019_2757_1 crossref_primary_10_1109_TIP_2017_2718185 crossref_primary_10_1109_TMM_2022_3190700 crossref_primary_10_1016_j_sigpro_2018_09_002 crossref_primary_10_1109_TIP_2021_3061932 crossref_primary_10_1109_TVCG_2018_2805355 crossref_primary_10_1109_ACCESS_2017_2656878 crossref_primary_10_1109_TIM_2023_3307754 crossref_primary_10_1109_TNNLS_2018_2890017 crossref_primary_10_1007_s11751_018_0318_x crossref_primary_10_1109_LSP_2022_3232289 crossref_primary_10_1109_TNNLS_2019_2933590 crossref_primary_10_1007_s11263_024_02001_1 crossref_primary_10_1016_j_image_2021_116444 crossref_primary_10_1109_TBC_2016_2638620 crossref_primary_10_1109_TCYB_2017_2664499 crossref_primary_10_1007_s11042_018_6186_z crossref_primary_10_1109_ACCESS_2018_2832722 crossref_primary_10_1007_s11042_022_14225_9 crossref_primary_10_1016_j_future_2020_12_023 crossref_primary_10_1109_TSMC_2015_2455020 crossref_primary_10_1016_j_patcog_2018_04_016 crossref_primary_10_1109_ACCESS_2018_2815608 crossref_primary_10_1016_j_displa_2016_06_002 crossref_primary_10_1109_ACCESS_2025_3531416 crossref_primary_10_1109_TBC_2024_3464418 crossref_primary_10_1109_TGRS_2019_2891679 crossref_primary_10_1109_TNNLS_2017_2649101 crossref_primary_10_1049_iet_ipr_2019_0809 crossref_primary_10_1109_TCYB_2019_2924589 crossref_primary_10_1109_TIP_2021_3092822 crossref_primary_10_1109_TMM_2021_3114551 crossref_primary_10_1007_s10994_021_06122_3 crossref_primary_10_1109_TIP_2016_2538462 crossref_primary_10_1007_s10489_020_01787_0 crossref_primary_10_1016_j_cag_2025_104176 crossref_primary_10_1016_j_neucom_2017_01_054 crossref_primary_10_1016_j_patrec_2017_07_015 crossref_primary_10_1016_j_neucom_2020_04_011 crossref_primary_10_1109_TMM_2023_3284988 crossref_primary_10_1007_s10489_021_02904_3 crossref_primary_10_1109_TIP_2019_2910666 crossref_primary_10_1109_TCSVT_2021_3093483 crossref_primary_10_1109_TIP_2020_2988437 crossref_primary_10_1109_TPAMI_2019_2899857 crossref_primary_10_1109_TCSVT_2016_2543099 crossref_primary_10_1109_ACCESS_2018_2889992 crossref_primary_10_1109_TIP_2020_3016502 crossref_primary_10_1109_LGRS_2020_3047789 crossref_primary_10_1109_TMM_2018_2794262 crossref_primary_10_1007_s00371_024_03441_z crossref_primary_10_1016_j_ins_2017_10_053 crossref_primary_10_1016_j_neucom_2016_06_070 crossref_primary_10_1109_TCSVT_2022_3231041 crossref_primary_10_1016_j_jvcir_2018_12_005 crossref_primary_10_1007_s11042_016_3519_7 crossref_primary_10_1109_TIP_2016_2631888 crossref_primary_10_1109_TBDATA_2019_2895605 crossref_primary_10_1016_j_dsp_2020_102834 crossref_primary_10_1109_ACCESS_2019_2931012 crossref_primary_10_1016_j_knosys_2025_113027 crossref_primary_10_1371_journal_pone_0176632 crossref_primary_10_1117_1_OE_58_7_073101 crossref_primary_10_5005_jp_journals_10080_1446 
crossref_primary_10_1007_s11042_018_6524_1 crossref_primary_10_1109_ACCESS_2019_2901063 crossref_primary_10_1016_j_sigpro_2017_07_020 crossref_primary_10_1109_TCSVT_2020_3030895 crossref_primary_10_1109_TIP_2019_2922072 crossref_primary_10_1145_3129505 |
ContentType | Journal Article |
Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) Oct 2015 |
DOI | 10.1109/TNNLS.2014.2377181 |
DatabaseName | IEEE All-Society Periodicals Package (ASPP) 2005–Present IEEE All-Society Periodicals Package (ASPP) 1998–Present IEEE Electronic Library (IEL) CrossRef PubMed Aluminium Industry Abstracts Biotechnology Research Abstracts Calcium & Calcified Tissue Abstracts Ceramic Abstracts Chemoreception Abstracts Computer and Information Systems Abstracts Corrosion Abstracts Electronics & Communications Abstracts Engineered Materials Abstracts Materials Business File Mechanical & Transportation Engineering Abstracts Neurosciences Abstracts Solid State and Superconductivity Abstracts METADEX Technology Research Database ANTE: Abstracts in New Technology & Engineering Engineering Research Database Aerospace Database Materials Research Database ProQuest Computer Science Collection Civil Engineering Abstracts Advanced Technologies Database with Aerospace Computer and Information Systems Abstracts Academic Computer and Information Systems Abstracts Professional Biotechnology and BioEngineering Abstracts MEDLINE - Academic |
DatabaseTitle | CrossRef PubMed Materials Research Database Technology Research Database Computer and Information Systems Abstracts – Academic Mechanical & Transportation Engineering Abstracts ProQuest Computer Science Collection Computer and Information Systems Abstracts Materials Business File Aerospace Database Engineered Materials Abstracts Biotechnology Research Abstracts Chemoreception Abstracts Advanced Technologies Database with Aerospace ANTE: Abstracts in New Technology & Engineering Civil Engineering Abstracts Aluminium Industry Abstracts Electronics & Communications Abstracts Ceramic Abstracts Neurosciences Abstracts METADEX Biotechnology and BioEngineering Abstracts Computer and Information Systems Abstracts Professional Solid State and Superconductivity Abstracts Engineering Research Database Calcium & Calcified Tissue Abstracts Corrosion Abstracts MEDLINE - Academic |
DatabaseTitleList | PubMed Technology Research Database Materials Research Database MEDLINE - Academic |
Discipline | Computer Science |
EISSN | 2162-2388 |
EndPage | 2290 |
ExternalDocumentID | 3855538131 25616080 10_1109_TNNLS_2014_2377181 7014257 |
Genre | orig-research Research Support, Non-U.S. Gov't Journal Article |
GrantInformation_xml |
– National Natural Science Foundation of China (61125106; 61125204; 61432014; 61172146)
– Shaanxi Innovative Research Team for Key Science and Technology (2012KCT-02)
– Key Research Program of the Chinese Academy of Sciences (KGZD-EW-T03)
– Fundamental Research Funds for the Central Universities (K5051202048; BDZ021403; JB149901)
– Microsoft Research Asia Project-Based Funding (FY13-RES-OPP-034)
– Australian Research Council Projects (DP-140102164; FT-130101457; LP-140100569)
– Program for Changjiang Scholars and Innovative Research Team in University of China (IRT13088)
ISSN | 2162-237X 2162-2388 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | false |
IsScholarly | true |
Issue | 10 |
Keywords | Image quality assessment (IQA); learning preferences; universal blind IQA (BIQA); learning to rank; multiple kernel learning (MKL)
Language | English |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
LinkModel | DirectLink |
OpenAccessLink | http://ir.opt.ac.cn/handle/181661/25482 |
PMID | 25616080 |
PQID | 1729214629 |
PQPubID | 85436 |
PageCount | 16 |
PublicationCentury | 2000 |
PublicationDate | 2015-10-01 |
PublicationDateYYYYMMDD | 2015-10-01 |
PublicationDate_xml | – month: 10 year: 2015 text: 2015-10-01 day: 01 |
PublicationDecade | 2010 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States – name: Piscataway |
PublicationTitle | IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev | TNNLS |
PublicationTitleAlternate | IEEE Trans Neural Netw Learn Syst |
PublicationYear | 2015 |
Publisher | IEEE The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
Publisher_xml | – name: IEEE – name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE) |
SourceID | proquest pubmed crossref ieee |
SourceType | Aggregation Database Index Database Enrichment Source Publisher |
StartPage | 2275 |
SubjectTerms | Algorithms Assessments Blinds Customer satisfaction Feature extraction Image coding Image quality Image quality assessment (IQA) Learning learning preferences learning to rank Mathematical models multiple kernel learning (MKL) Noise Observers Quality of service Training Transform coding universal blind IQA (BIQA) |
Title | Learning to Rank for Blind Image Quality Assessment |
URI | https://ieeexplore.ieee.org/document/7014257 https://www.ncbi.nlm.nih.gov/pubmed/25616080 https://www.proquest.com/docview/1729214629 https://www.proquest.com/docview/1720447783 https://www.proquest.com/docview/1778044036 |
Volume | 26 |