Deep Label Distribution Learning With Label Ambiguity
Published in | IEEE Transactions on Image Processing, Vol. 26, No. 6, pp. 2825-2838 |
---|---|
Main Authors | Bin-Bin Gao, Chao Xing, Chen-Wei Xie, Jianxin Wu, Xin Geng |
Format | Journal Article |
Language | English |
Published | United States: IEEE, 01.06.2017 |
Subjects | age estimation; Correlation; deep learning; Head; head pose estimation; Image segmentation; Label distribution; Pose estimation; semantic segmentation; Semantics; Training |
Abstract | Convolutional neural networks (ConvNets) have achieved excellent recognition performance in various visual recognition tasks. A large labeled training set is one of the most important factors for their success. However, it is difficult to collect sufficient training images with precise labels in some domains, such as apparent age estimation, head pose estimation, multi-label classification, and semantic segmentation. Fortunately, there is ambiguous information among labels, which makes these tasks different from traditional classification. Based on this observation, we convert the label of each image into a discrete label distribution, and learn the label distribution by minimizing a Kullback-Leibler divergence between the predicted and ground-truth label distributions using deep ConvNets. The proposed deep label distribution learning (DLDL) method effectively utilizes the label ambiguity in both feature learning and classifier learning, which helps prevent the network from overfitting even when the training set is small. Experimental results show that the proposed approach produces significantly better results than the state-of-the-art methods for age estimation and head pose estimation. At the same time, it also improves recognition performance for multi-label classification and semantic segmentation tasks. |
---|---|
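The method summarized in the abstract is compact enough to sketch. Below is a minimal NumPy illustration of the two steps it describes: turning a scalar label (say, an apparent age) into a discrete Gaussian label distribution, and scoring a prediction with the Kullback-Leibler divergence that training would minimize. This is not the authors' implementation; the age range, the sigma value, the function names, and the stand-in random "network output" are all assumptions made for illustration.

```python
# Minimal sketch of the DLDL loss setup, in NumPy.
# The paper learns these distributions with deep ConvNets; here a random
# softmax vector stands in for the network's predicted distribution.
import numpy as np

def label_to_distribution(y, bins, sigma=2.0):
    """Convert a scalar label y (e.g. apparent age 25) into a discrete
    Gaussian label distribution over the label bins (assumed form)."""
    d = np.exp(-((bins - y) ** 2) / (2.0 * sigma ** 2))
    return d / d.sum()  # normalize so the distribution sums to 1

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between ground-truth distribution p and prediction q."""
    p, q = np.clip(p, eps, 1.0), np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

bins = np.arange(0, 101)                  # assumed label space: ages 0..100
target = label_to_distribution(25, bins)  # soft target around the true age

logits = np.random.randn(bins.size)       # stand-in for ConvNet outputs
pred = np.exp(logits - logits.max())
pred /= pred.sum()                        # softmax -> predicted distribution

print(kl_divergence(target, pred))        # the loss training would minimize
```

Because the ground-truth distribution is fixed for each image, minimizing this KL divergence differs from a soft-target cross-entropy only by a constant, which is why the loss plugs directly into standard softmax-based ConvNet training.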
CODEN | IIPRE4 |
DOI | 10.1109/TIP.2017.2689998 |
Discipline | Applied Sciences; Engineering |
EISSN | 1941-0042 |
EndPage | 2838 |
Genre | Original research; Journal Article |
GrantInformation | National Natural Science Foundation of China (61422203, 61622203, 61232007); Jiangsu Natural Science Funds for Distinguished Young Scholar (BK20140022); Collaborative Innovation Center of Novel Software Technology and Industrialization; Collaborative Innovation Center of Wireless Communications Technology |
ISSN | 1057-7149 |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 6 |
License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html |
ORCID | 0000-0002-2085-7568 (Jianxin Wu) |
PMID | 28371776 |
PageCount | 14 |
PublicationDate | 2017-06-01 |
PublicationPlace | United States |
PublicationTitle | IEEE transactions on image processing |
PublicationTitleAbbrev | TIP |
PublicationTitleAlternate | IEEE Trans Image Process |
PublicationYear | 2017 |
Publisher | IEEE |
StartPage | 2825 |
URI | https://ieeexplore.ieee.org/document/7890384 https://www.ncbi.nlm.nih.gov/pubmed/28371776 https://www.proquest.com/docview/1884169392 |
Volume | 26 |