Adaptive parameter selection for kernel ridge regression
Published in: Applied and Computational Harmonic Analysis, Vol. 73, Article 101671
Format: Journal Article
Language: English
Publisher: Elsevier Inc.
Published: 01.11.2024
Abstract: This paper focuses on parameter selection issues of kernel ridge regression (KRR). Due to the special spectral properties of KRR, we find that a delicate subdivision of the parameter interval shrinks the difference between two successive KRR estimates. Based on this observation, we develop an early-stopping-type parameter selection strategy for KRR according to the so-called Lepskii-type principle. Theoretical verifications are presented in the framework of learning theory to show that KRR equipped with the proposed parameter selection strategy achieves optimal learning rates and adapts to different norms, providing a new record of parameter selection for kernel methods.
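The abstract's early-stopping idea can be illustrated with a minimal sketch: fit KRR along a geometrically subdivided grid of regularization parameters and stop as soon as two successive estimates are close. This is only an illustration of the flavor of the strategy, not the paper's algorithm; the Gaussian kernel, the geometric ratio, and the stopping tolerance below are all assumptions not specified in this record.

```python
import numpy as np

def gaussian_kernel(X, Y, width=1.0):
    # Gram matrix of a Gaussian kernel between sample sets X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def krr_fit(K, y, lam):
    # Solve (K + n*lam*I) alpha = y for the KRR coefficient vector.
    n = K.shape[0]
    return np.linalg.solve(K + n * lam * np.eye(n), y)

def select_lambda(X, y, lam_max=1.0, ratio=0.5, tol=1e-3, max_steps=30):
    # Walk down the geometric grid lam_max * ratio^k and stop when two
    # successive KRR estimates (evaluated on the training inputs) differ
    # by less than tol in RMS -- the early-stopping flavor of the
    # abstract, with grid and tolerance chosen for illustration only.
    K = gaussian_kernel(X, X)
    lam = lam_max
    prev = K @ krr_fit(K, y, lam)
    for _ in range(max_steps):
        lam *= ratio
        cur = K @ krr_fit(K, y, lam)
        if np.linalg.norm(cur - prev) / np.sqrt(len(y)) < tol:
            return lam / ratio  # previous parameter was already stable
        prev = cur
    return lam

# Toy regression problem: noisy sine samples on [-1, 1].
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(80, 1))
y = np.sin(np.pi * X[:, 0]) + 0.1 * rng.standard_normal(80)
lam = select_lambda(X, y)
```

The returned parameter always lies in (0, lam_max]; how tightly the stopping rule must be tuned to attain the optimal rates is exactly what the paper's Lepskii-type analysis addresses.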
Article Number: 101671
Author: Lin, Shao-Bo (sblin1983@gmail.com), Center for Intelligent Decision-Making and Machine Learning, School of Management, Xi'an Jiaotong University, Xi'an 710049, China
Copyright: 2024 Elsevier Inc.
DOI: 10.1016/j.acha.2024.101671
Discipline: Engineering; Mathematics
Funding: Natural Science Foundation of China (grant 62276209); National Key R&D Program of China (grant 2020YFA0713900)
ISSN: 1063-5203
Peer Reviewed: Yes
Scholarly: Yes
Keywords: Kernel ridge regression; Parameter selection; Learning theory; Lepskii principle
Full text: https://dx.doi.org/10.1016/j.acha.2024.101671