On the approximation of functions by tanh neural networks
We derive bounds on the error, in high-order Sobolev norms, incurred in the approximation of Sobolev-regular as well as analytic functions by neural networks with the hyperbolic tangent activation function. These bounds provide explicit estimates on the approximation error with respect to the size of the neural networks. We show that tanh neural networks with only two hidden layers suffice to approximate functions at comparable or better rates than much deeper ReLU neural networks.
Published in | Neural Networks, Vol. 143, pp. 732–750
Format | Journal Article
Language | English
Published | Elsevier Ltd, 01.11.2021
ISSN | 0893-6080
EISSN | 1879-2782
DOI | 10.1016/j.neunet.2021.08.015
Abstract | We derive bounds on the error, in high-order Sobolev norms, incurred in the approximation of Sobolev-regular as well as analytic functions by neural networks with the hyperbolic tangent activation function. These bounds provide explicit estimates on the approximation error with respect to the size of the neural networks. We show that tanh neural networks with only two hidden layers suffice to approximate functions at comparable or better rates than much deeper ReLU neural networks.
• Explicit bounds for function approximation in Sobolev norms by tanh neural networks.
• Tanh networks with 2 hidden layers are at least as expressive as deeper ReLU networks.
• Improved convergence rate for neural network approximation of analytic functions.
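The abstract's central claim is that shallow tanh networks approximate smooth and analytic functions at fast rates. As a rough empirical illustration only (not the paper's constructive proof, and using a single hidden layer with fixed random inner weights rather than the two trained hidden layers the paper analyzes), the sketch below fits the output weights of a tanh network to an analytic target by least squares and measures the sup-norm error on a grid; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)[:, None]   # sample grid on [-1, 1]
f = np.sin(np.pi * x)                      # analytic target function

# Hidden layer: 50 tanh units with fixed random inner weights and biases.
W = rng.normal(scale=2.0, size=(1, 50))
b = rng.normal(scale=2.0, size=(1, 50))
H = np.tanh(x @ W + b)                     # (200, 50) tanh feature matrix

# Output weights fitted by linear least squares.
c, *_ = np.linalg.lstsq(H, f, rcond=None)

err = np.max(np.abs(H @ c - f))            # sup-norm error on the grid
print(err)
```

For an analytic target such as this one, increasing the hidden width shrinks the measured error rapidly, which is consistent in spirit with the exponential approximation rates the paper proves for two-hidden-layer tanh networks.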
Author | De Ryck, Tim; Lanthaler, Samuel; Mishra, Siddhartha
Copyright | © 2021 The Authors. Published by Elsevier Ltd. All rights reserved.
Discipline | Computer Science |
EISSN | 1879-2782 |
Open Access | Yes
Peer Reviewed | Yes
Keywords | Tanh; Deep learning; Function approximation; Neural networks
License | This is an open access article under the CC BY-NC-ND license. |
ORCID | 0000-0001-6860-1345 |
OpenAccessLink | https://www.sciencedirect.com/science/article/pii/S0893608021003208 |
PageCount | 19 |
PublicationDate | November 2021
PublicationTitle | Neural networks |
PublicationYear | 2021 |
Publisher | Elsevier Ltd |
10.1090/S0002-9947-96-01501-2 – volume: 128 start-page: 313 year: 2020 ident: 10.1016/j.neunet.2021.08.015_b71 article-title: Approximation rates for neural networks with general activation functions publication-title: Neural Networks doi: 10.1016/j.neunet.2020.05.019 – volume: 316 start-page: 262 year: 2018 ident: 10.1016/j.neunet.2021.08.015_b24 article-title: Approximation capability of two hidden layer feedforward neural networks with fixed weights publication-title: Neurocomputing doi: 10.1016/j.neucom.2018.07.075 – volume: 21 start-page: 627 issue: 7 year: 2019 ident: 10.1016/j.neunet.2021.08.015_b61 article-title: Smooth function approximation by deep neural networks with general activation functions publication-title: Entropy doi: 10.3390/e21070627 – volume: 25 start-page: 1553 issue: 8 year: 2014 ident: 10.1016/j.neunet.2021.08.015_b5 article-title: On the complexity of neural network classifiers: A comparison between shallow and deep architectures publication-title: IEEE Transactions on Neural Networks and Learning Systems doi: 10.1109/TNNLS.2013.2293637 – volume: 20 start-page: 985 issue: 5 year: 1983 ident: 10.1016/j.neunet.2021.08.015_b18 article-title: On polynomial approximation in Sobolev spaces publication-title: SIAM Journal on Numerical Analysis doi: 10.1137/0720068 – volume: 29 start-page: 2464 issue: 6 year: 2007 ident: 10.1016/j.neunet.2021.08.015_b8 article-title: Fast computation of Fourier integral operators publication-title: SIAM Journal on Scientific Computing doi: 10.1137/060671139 – volume: 44 start-page: 101 year: 2013 ident: 10.1016/j.neunet.2021.08.015_b12 article-title: Approximation results for neural network operators activated by sigmoidal functions publication-title: Neural Networks doi: 10.1016/j.neunet.2013.03.015 – year: 2020 ident: 10.1016/j.neunet.2021.08.015_b43 – volume: 168 start-page: 1223 issue: 6 year: 2017 ident: 10.1016/j.neunet.2021.08.015_b45 article-title: Why does deep and cheap learning work so well? 
publication-title: Journal of Statistical Physics doi: 10.1007/s10955-017-1836-5 – year: 2009 ident: 10.1016/j.neunet.2021.08.015_b16 – volume: 39 start-page: 1 issue: 1 year: 2002 ident: 10.1016/j.neunet.2021.08.015_b14 article-title: On the mathematical foundations of learning publication-title: American Mathematical Society. Bulletin doi: 10.1090/S0273-0979-01-00923-5 – volume: 61 start-page: 1733 issue: 10 year: 2018 ident: 10.1016/j.neunet.2021.08.015_b73 article-title: Exponential convergence of the deep neural network approximation for analytic functions publication-title: Science China Mathematics doi: 10.1007/s11425-018-9387-x – volume: 14 start-page: 115 issue: 1 year: 1994 ident: 10.1016/j.neunet.2021.08.015_b2 article-title: Approximation and estimation bounds for artificial neural networks publication-title: Machine Learning doi: 10.1007/BF00993164 – year: 2018 ident: 10.1016/j.neunet.2021.08.015_b48 – year: 2014 ident: 10.1016/j.neunet.2021.08.015_b70 – volume: 48 start-page: 72 year: 2013 ident: 10.1016/j.neunet.2021.08.015_b13 article-title: Multivariate neural network operators with sigmoidal activation functions publication-title: Neural Networks doi: 10.1016/j.neunet.2013.07.009 – volume: 378 start-page: 686 year: 2019 ident: 10.1016/j.neunet.2021.08.015_b67 article-title: Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations publication-title: Journal of Computational Physics doi: 10.1016/j.jcp.2018.10.045 – volume: 98 start-page: 296 year: 2018 ident: 10.1016/j.neunet.2021.08.015_b23 article-title: On the approximation by single hidden layer feedforward neural networks with fixed weights publication-title: Neural Networks doi: 10.1016/j.neunet.2017.12.007 – volume: 58 start-page: 1203 issue: 2 year: 2012 ident: 10.1016/j.neunet.2021.08.015_b32 article-title: Dependence of computational models on input dimension: Tractability of approximation and 
optimization tasks publication-title: IEEE Transactions on Information Theory doi: 10.1109/TIT.2011.2169531 – volume: 9(5) start-page: 987 year: 2000 ident: 10.1016/j.neunet.2021.08.015_b39 article-title: Artificial neural networks for solving ordinary and partial differential equations publication-title: IEEE Transactions on Neural Networks – volume: 374 year: 2021 ident: 10.1016/j.neunet.2021.08.015_b50 article-title: Iterative surrogate model optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks publication-title: Computer Methods in Applied Mechanics and Engineering doi: 10.1016/j.cma.2020.113575 – volume: 34 start-page: 1 issue: 5 year: 2007 ident: 10.1016/j.neunet.2021.08.015_b4 article-title: Scaling learning algorithms towards AI publication-title: Large-Scale Kernel Machines – volume: 8 start-page: 164 issue: 1 year: 1996 ident: 10.1016/j.neunet.2021.08.015_b52 article-title: Neural networks for optimal approximation of smooth and analytic functions publication-title: Neural Computation doi: 10.1162/neco.1996.8.1.164 – volume: 5 start-page: 349 issue: 4 year: 2017 ident: 10.1016/j.neunet.2021.08.015_b19 article-title: Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations publication-title: Communications in Mathematics and Statistics doi: 10.1007/s40304-017-0117-6 – volume: 54 start-page: 5681 issue: 12 year: 2008 ident: 10.1016/j.neunet.2021.08.015_b36 article-title: Geometric upper bounds on rates of variable-basis approximation publication-title: IEEE Transactions on Information Theory doi: 10.1109/TIT.2008.2006383 – year: 2020 ident: 10.1016/j.neunet.2021.08.015_b53 – year: 2020 ident: 10.1016/j.neunet.2021.08.015_b55 – volume: 7 start-page: 1727 issue: 4 year: 2009 ident: 10.1016/j.neunet.2021.08.015_b9 article-title: A fast butterfly algorithm for the computation of Fourier integral operators 
publication-title: Multiscale Modeling and Simulation doi: 10.1137/080734339 – volume: 14 start-page: 503 issue: 5 year: 2017 ident: 10.1016/j.neunet.2021.08.015_b65 article-title: Why and when can deep-but not shallow-networks avoid the curse of dimensionality: a review publication-title: International Journal of Automation and Computing doi: 10.1007/s11633-017-1054-2 – volume: 357 start-page: 125 year: 2018 ident: 10.1016/j.neunet.2021.08.015_b66 article-title: Hidden physics models: Machine learning of nonlinear partial differential equations publication-title: Journal of Computational Physics doi: 10.1016/j.jcp.2017.11.039 – ident: 10.1016/j.neunet.2021.08.015_b10 doi: 10.3115/v1/D14-1179 – year: 2021 ident: 10.1016/j.neunet.2021.08.015_b56 – volume: 134 start-page: 107 year: 2021 ident: 10.1016/j.neunet.2021.08.015_b22 article-title: Approximation rates for neural networks with encodable weights in smoothness spaces publication-title: Neural Networks doi: 10.1016/j.neunet.2020.11.010 – volume: 129 start-page: 1 year: 2020 ident: 10.1016/j.neunet.2021.08.015_b59 article-title: Error bounds for deep ReLU networks using the Kolmogorov-Arnold superposition theorem publication-title: Neural Networks doi: 10.1016/j.neunet.2019.12.013 – volume: 18 start-page: 715 issue: 05 year: 2020 ident: 10.1016/j.neunet.2021.08.015_b62 article-title: Deep ReLU networks and high-order finite element methods publication-title: Analysis and Applications doi: 10.1142/S0219530519410136 – volume: 36 issue: 12 year: 2020 ident: 10.1016/j.neunet.2021.08.015_b27 article-title: Deep neural network expression of posterior expectations in Bayesian PDE inversion publication-title: Inverse Problems doi: 10.1088/1361-6420/abaf64 – volume: 2 start-page: 303 issn: 1435-568X issue: 4 year: 1989 ident: 10.1016/j.neunet.2021.08.015_b15 article-title: Approximation by superpositions of a sigmoidal function publication-title: Mathematics of Control, Signals, and Systems doi: 10.1007/BF02551274 – 
ident: 10.1016/j.neunet.2021.08.015_b60 – start-page: 1 year: 1990 ident: 10.1016/j.neunet.2021.08.015_b57 article-title: Combinatorial multinomial matrices and multinomial Stirling numbers publication-title: Proceedings of the Americal Mathematical Society – volume: 17 start-page: 19 issue: 01 year: 2019 ident: 10.1016/j.neunet.2021.08.015_b69 article-title: Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in UQ publication-title: Analysis and Applications doi: 10.1142/S0219530518500203 – volume: 18 start-page: 803 issue: 05 year: 2020 ident: 10.1016/j.neunet.2021.08.015_b21 article-title: Error bounds for approximations with deep ReLU neural networks in Ws,p norms publication-title: Analysis and Applications doi: 10.1142/S0219530519410021 |
StartPage | 732 |
SubjectTerms | Deep learning Function approximation Neural networks Tanh |
Title | On the approximation of functions by tanh neural networks |
URI | https://dx.doi.org/10.1016/j.neunet.2021.08.015 https://www.proquest.com/docview/2569615161 |
Volume | 143 |