Transfer Learning Under High-Dimensional Generalized Linear Models
Published in | Journal of the American Statistical Association, Vol. 118, No. 544, pp. 2684-2697 |
Main Authors | Tian, Ye; Feng, Yang |
Format | Journal Article |
Language | English |
Published | Taylor & Francis (Taylor & Francis Ltd), United States, 02.10.2023 |
Abstract | In this work, we study the transfer learning problem under high-dimensional generalized linear models (GLMs), which aim to improve the fit on target data by borrowing information from useful source data. Given which sources to transfer, we propose a transfer learning algorithm on GLM, and derive its ℓ1/ℓ2-estimation error bounds as well as a bound for a prediction error measure. The theoretical analysis shows that when the target and sources are sufficiently close to each other, these bounds could be improved over those of the classical penalized estimator using only target data under mild conditions. When we don't know which sources to transfer, an algorithm-free transferable source detection approach is introduced to detect informative sources. The detection consistency is proved under the high-dimensional GLM transfer learning setting. We also propose an algorithm to construct confidence intervals of each coefficient component, and the corresponding theories are provided. Extensive simulations and a real-data experiment verify the effectiveness of our algorithms. We implement the proposed GLM transfer learning algorithms in a new R package glmtrans, which is available on CRAN. Supplementary materials for this article are available online. |
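A two-step "pool, then correct" structure of the kind the abstract suggests (fit a penalized GLM on the target pooled with its transferable sources, then correct that pooled fit on the target data alone) can be sketched in a few lines of R. This is not the glmtrans implementation, and the pooling-plus-correction layout is an assumption about the algorithm's shape rather than a statement of the paper's exact procedure; the sketch uses the CRAN package glmnet, and the objects x0, y0 (target design matrix and response) and x_src, y_src (stacked source data) are hypothetical inputs.

```r
library(glmnet)

## Minimal sketch, assuming a pool-then-correct transfer scheme; not the
## authors' exact algorithm.
transfer_glm_sketch <- function(x0, y0, x_src, y_src, family = "binomial") {
  ## Step 1: penalized GLM fit on the pooled target + source sample.
  x_pool <- rbind(x0, x_src)
  y_pool <- c(y0, y_src)
  fit_pool <- cv.glmnet(x_pool, y_pool, family = family)
  w_hat <- as.numeric(coef(fit_pool, s = "lambda.min"))  # intercept + p slopes

  ## Step 2: bias correction on the target alone. The pooled linear
  ## predictor enters as an offset, so the second lasso only needs to
  ## estimate a (presumably sparse) target-specific contrast.
  off0 <- w_hat[1] + as.numeric(x0 %*% w_hat[-1])
  fit_corr <- cv.glmnet(x0, y0, family = family, offset = off0)
  delta_hat <- as.numeric(coef(fit_corr, s = "lambda.min"))

  ## Final coefficients: pooled estimate plus the target-specific correction.
  list(beta = w_hat + delta_hat, pooled = w_hat, correction = delta_hat)
}
```

For the authors' actual method, including the source detection step and the coefficient-wise confidence intervals, the glmtrans package on CRAN is the reference implementation.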
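The transferable source detection step can likewise be illustrated as a loss-comparison heuristic: refit with each candidate source added and keep the sources that do not worsen held-out performance on the target. Everything specific below (the single hold-out split, the binomial log-loss, the slack parameter epsilon, and the data layout sources = list(list(x, y), ...)) is an illustrative assumption, not the paper's detection rule, whose consistency theory is developed in the article itself.

```r
library(glmnet)

## x0, y0: target design matrix and binary (0/1) response; 'sources' is a
## list of list(x = ..., y = ...) candidate source datasets (hypothetical).
detect_sources_sketch <- function(x0, y0, sources, epsilon = 0.05,
                                  train_frac = 0.7) {
  set.seed(1)
  idx  <- sample(nrow(x0), floor(train_frac * nrow(x0)))
  x_tr <- x0[idx, , drop = FALSE];  y_tr <- y0[idx]
  x_te <- x0[-idx, , drop = FALSE]; y_te <- y0[-idx]

  ## Held-out binomial log-loss on the target test split for a lasso-GLM
  ## fit to an arbitrary training sample.
  heldout_loss <- function(x, y) {
    fit <- cv.glmnet(x, y, family = "binomial")
    p   <- predict(fit, newx = x_te, s = "lambda.min", type = "response")
    p   <- pmin(pmax(as.numeric(p), 1e-6), 1 - 1e-6)   # guard log(0)
    -mean(y_te * log(p) + (1 - y_te) * log(1 - p))
  }

  loss0 <- heldout_loss(x_tr, y_tr)   # target-only baseline
  keep  <- vapply(sources, function(s) {
    heldout_loss(rbind(x_tr, s$x), c(y_tr, s$y)) <= loss0 + epsilon
  }, logical(1))
  which(keep)                         # indices judged transferable
}
```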
Author | Tian, Ye; Feng, Yang |
Author affiliations | Tian, Ye (Department of Statistics, Columbia University); Feng, Yang (Department of Biostatistics, School of Global Public Health, New York University; ORCID 0000-0001-7746-7598) |
ContentType | Journal Article |
Copyright | 2022 American Statistical Association |
DOI | 10.1080/01621459.2022.2071278 |
Discipline | Statistics |
EISSN | 1537-274X |
EndPage | 2697 |
ExternalDocumentID | PMC10982637; PMID 38562655; DOI 10.1080/01621459.2022.2071278 |
Genre | Research Article; Journal Article |
GrantInformation | National Institutes of Health (NIH), grant 1R21AG074205-01; NIA NIH HHS, grant R21 AG074205 |
ISSN | 0162-1459 (print); 1537-274X (electronic) |
Issue | 544 |
Keywords | transfer learning; high-dimensional inference; sparsity; Lasso; generalized linear models; negative transfer |
Language | English |
ORCID | 0000-0001-7746-7598 |
OpenAccessLink | https://figshare.com/articles/journal_contribution/Transfer_Learning_under_High-dimensional_Generalized_Linear_Models/19678366 |
PMID | 38562655 |
PageCount | 14 |
PublicationDate | 2023-10-02 |
PublicationPlace | United States |
PublicationTitle | Journal of the American Statistical Association |
PublicationTitleAlternate | J Am Stat Assoc |
PublicationYear | 2023 |
Publisher | Taylor & Francis; Taylor & Francis Ltd |
StartPage | 2684 |
SubjectTerms | Algorithms; Americans; artificial intelligence; confidence interval; Error analysis; Generalized linear models; High-dimensional inference; Lasso; Learning; Linear analysis; linear models; Machine learning; Negative transfer; prediction; Sparsity; Statistical models; Statistics; Transfer learning |
Title | Transfer Learning Under High-Dimensional Generalized Linear Models |
URI | https://www.tandfonline.com/doi/abs/10.1080/01621459.2022.2071278
https://www.ncbi.nlm.nih.gov/pubmed/38562655
https://www.proquest.com/docview/2907817571
https://www.proquest.com/docview/3031131229
https://www.proquest.com/docview/3040442652
https://pubmed.ncbi.nlm.nih.gov/PMC10982637
Volume | 118 |