Ranking and combining multiple predictors without labeled data
Published in | Proceedings of the National Academy of Sciences - PNAS, Vol. 111, No. 4, pp. 1253-1258 |
Main Authors | Parisi, Fabio; Strino, Francesco; Nadler, Boaz; Kluger, Yuval |
Format | Journal Article |
Language | English |
Published | United States: National Academy of Sciences, 28.01.2014 |
Subjects | |
Abstract | In a broad range of classification and decision-making problems, one is given the advice or predictions of several classifiers, of unknown reliability, over multiple questions or queries. This scenario is different from the standard supervised setting, where each classifier’s accuracy can be assessed using available labeled data, and raises two questions: Given only the predictions of several classifiers over a large set of unlabeled test data, is it possible to (i) reliably rank them and (ii) construct a metaclassifier more accurate than most classifiers in the ensemble? Here we present a spectral approach to address these questions. First, assuming conditional independence between classifiers, we show that the off-diagonal entries of their covariance matrix correspond to a rank-one matrix. Moreover, the classifiers can be ranked using the leading eigenvector of this covariance matrix, because its entries are proportional to their balanced accuracies. Second, via a linear approximation to the maximum likelihood estimator, we derive the Spectral Meta-Learner (SML), an unsupervised ensemble classifier whose weights are equal to these eigenvector entries. On both simulated and real data, SML typically achieves a higher accuracy than most classifiers in the ensemble and can provide a better starting point than majority voting for estimating the maximum likelihood solution. Furthermore, SML is robust to the presence of small malicious groups of classifiers designed to veer the ensemble prediction away from the (unknown) ground truth. |
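To make the abstract's rank-one claim concrete, here is a brief sketch in notation introduced only for this note (none of these symbols come from the record itself): code the true label y and each classifier's prediction f_i as ±1, write ψ_i and η_i for classifier i's sensitivity and specificity, π_i = (ψ_i + η_i)/2 for its balanced accuracy, and b = Pr(y = 1) - Pr(y = -1) for the class imbalance. Under the conditional-independence assumption stated above, the population covariance between two distinct classifiers factorizes as

$$
q_{ij} \;=\; \operatorname{Cov}(f_i, f_j) \;=\; (1 - b^{2})\,(2\pi_i - 1)\,(2\pi_j - 1), \qquad i \neq j,
$$

so the off-diagonal part of the covariance matrix equals $v v^{\top}$ with $v_i \propto 2\pi_i - 1$, and the leading eigenvector of that rank-one part orders the classifiers by balanced accuracy, which is the ranking result the abstract describes.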
Significance | A key challenge in a broad range of decision-making and classification problems is how to rank and combine the possibly conflicting suggestions of several advisers of unknown reliability. We provide mathematical insights of striking conceptual simplicity that explain mutual relationships between independent advisers. These insights enable the design of efficient, robust, and reliable methods to rank the advisers’ performances and construct improved predictions in the absence of ground truth. Furthermore, these methods are robust to the presence of small subgroups of malicious advisers (cartels) attempting to veer the combined decisions to their interest. |
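A minimal code sketch of the procedure summarized above, under stated assumptions: predictions are coded as ±1, the function name spectral_rank_and_sml is hypothetical, and the diagonal-imputation loop is one simple heuristic for isolating the rank-one off-diagonal structure rather than the authors' exact estimator. It ranks classifiers by the leading eigenvector of the (diagonal-imputed) covariance and forms an SML-style weighted vote.

```python
import numpy as np

def spectral_rank_and_sml(predictions, iters=20):
    """Hypothetical sketch of spectral ranking plus an SML-style vote.

    predictions : (m, n) array of +/-1 votes from m classifiers on n items.
    Returns (weights, ranking, labels): per-classifier weights, the induced
    ranking (best first), and the weighted-vote labels in {-1, +1}.
    """
    F = np.asarray(predictions, dtype=float)

    # Sample covariance of the classifiers over the unlabeled items.
    Q = np.cov(F)                                    # shape (m, m)

    # Only the off-diagonal of Q is (approximately) rank one, so impute the
    # diagonal from the current rank-one fit before each eigen-step.  This
    # loop is one simple heuristic, not necessarily the paper's procedure.
    R = Q.copy()
    v = np.zeros(F.shape[0])
    for _ in range(iters):
        eigvals, eigvecs = np.linalg.eigh(R)
        v = eigvecs[:, -1] * np.sqrt(max(eigvals[-1], 0.0))
        np.fill_diagonal(R, v ** 2)

    # Resolve the global sign: assume most classifiers beat random guessing,
    # so most weights should be positive.
    if v.sum() < 0:
        v = -v

    ranking = np.argsort(-v)              # entries ~ (2 * balanced accuracy - 1)
    labels = np.where(v @ F >= 0, 1, -1)  # eigenvector-weighted (SML-style) vote
    return v, ranking, labels
```

As the abstract notes, such a weighted vote can then serve as a starting point for an EM-style maximum-likelihood refinement instead of majority voting.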
Author | Parisi, Fabio; Strino, Francesco; Nadler, Boaz; Kluger, Yuval |
Author_xml | – sequence: 1 givenname: Fabio surname: Parisi fullname: Parisi, Fabio – sequence: 2 givenname: Francesco surname: Strino fullname: Strino, Francesco – sequence: 3 givenname: Boaz surname: Nadler fullname: Nadler, Boaz – sequence: 4 givenname: Yuval surname: Kluger fullname: Kluger, Yuval |
BackLink | https://www.ncbi.nlm.nih.gov/pubmed/24474744 (view this record in MEDLINE/PubMed) |
ContentType | Journal Article |
Copyright | © 1993-2008 National Academy of Sciences of the United States of America; Copyright National Academy of Sciences, Jan 28, 2014 |
DOI | 10.1073/pnas.1219097111 |
Discipline | Sciences (General) |
DocumentTitleAlternate | Ranking and combining predictors without labels |
EISSN | 1091-6490 |
EndPage | 1258 |
ExternalDocumentID | PMC3910607 3207865271 24474744 10_1073_pnas_1219097111 111_4_1253 23769031 US201600138821 |
Genre | Research Support, Non-U.S. Gov't; Journal Article; Research Support, N.I.H., Extramural; Feature |
GrantInformation_xml | – fundername: NCI NIH HHS grantid: R01 CA158167 – fundername: NCI NIH HHS grantid: R0-1 CA158167 |
ISSN | 0027-8424 1091-6490 |
IsDoiOpenAccess | false |
IsOpenAccess | true |
IsPeerReviewed | true |
IsScholarly | true |
Issue | 4 |
Keywords | spectral analysis; classifier; balanced accuracy; crowdsourcing; cartels; unsupervised learning |
Language | English |
License | Freely available online through the PNAS open access option. |
Notes | Author contributions: F.P., F.S., B.N., and Y.K. designed research, performed research, analyzed data, and wrote the paper. Edited by Peter J. Bickel, University of California, Berkeley, CA, and approved December 17, 2013 (received for review November 1, 2012). F.P. and F.S. contributed equally to this work. |
OpenAccessLink | http://europepmc.org/articles/PMC3910607 |
PMID | 24474744 |
PageCount | 6 |
PublicationCentury | 2000 |
PublicationDate | 2014-01-28 |
PublicationDecade | 2010 |
PublicationPlace | United States |
PublicationPlace_xml | – name: United States – name: Washington |
PublicationTitle | Proceedings of the National Academy of Sciences - PNAS |
PublicationTitleAlternate | Proc Natl Acad Sci U S A |
PublicationYear | 2014 |
Publisher | National Academy of Sciences |
StartPage | 1253 |
SubjectTerms | Approximation; Cartels; covariance; Covariance matrices; Datasets; Decision making; Eigenvectors; Estimate reliability; Initial guess; Likelihood Functions; Machine learning; Majority voting; Matrix; Maximum likelihood estimation; Maximum likelihood method; Models, Theoretical; Physical Sciences; prediction; Simulation; Test data |
Title | Ranking and combining multiple predictors without labeled data |
URI | https://www.jstor.org/stable/23769031 http://www.pnas.org/content/111/4/1253.abstract https://www.ncbi.nlm.nih.gov/pubmed/24474744 https://www.proquest.com/docview/1494990755 https://www.proquest.com/docview/1492716580 https://www.proquest.com/docview/1803078717 https://pubmed.ncbi.nlm.nih.gov/PMC3910607 |
Volume | 111 |