Optimal approximation of piecewise smooth functions using deep ReLU neural networks

Bibliographic Details
Published in Neural networks, Vol. 108, pp. 296–330
Main Authors Petersen, Philipp; Voigtlaender, Felix
Format Journal Article
Language English
Published Elsevier Ltd, United States, 01.12.2018
Subjects Curse of dimension; Deep neural networks; Function approximation; Metric entropy; Neural Networks (Computer); Piecewise smooth functions; Sparse connectivity
ISSN 0893-6080
EISSN 1879-2782
DOI 10.1016/j.neunet.2018.08.019

Abstract We study the necessary and sufficient complexity of ReLU neural networks – in terms of depth and number of weights – which is required for approximating classifier functions in an L^p-sense. As a model class, we consider the set E^β(R^d) of possibly discontinuous piecewise C^β functions f: [−1/2, 1/2]^d → R, where the different “smooth regions” of f are separated by C^β hypersurfaces. For given dimension d ≥ 2, regularity β > 0, and accuracy ε > 0, we construct artificial neural networks with ReLU activation function that approximate functions from E^β(R^d) up to an L^2 error of ε. The constructed networks have a fixed number of layers, depending only on d and β, and they have O(ε^(−2(d−1)/β)) many nonzero weights, which we prove to be optimal. For the proof of optimality, we establish a lower bound on the description complexity of the class E^β(R^d). By showing that a family of approximating neural networks gives rise to an encoder for E^β(R^d), we then prove that one cannot approximate a general function f ∈ E^β(R^d) using neural networks that are less complex than those produced by our construction. In addition to the optimality in terms of the number of weights, we show that in order to achieve this optimal approximation rate, one needs ReLU networks of a certain minimal depth. Precisely, for piecewise C^β(R^d) functions, this minimal depth is given – up to a multiplicative constant – by β/d. Up to a log factor, our constructed networks match this bound. This partly explains the benefits of depth for ReLU networks by showing that deep networks are necessary to achieve efficient approximation of (piecewise) smooth functions. Finally, we analyze approximation in high-dimensional spaces where the function f to be approximated can be factorized into a smooth dimension-reducing feature map τ and a classifier function g – defined on a low-dimensional feature space – as f = g∘τ. We show that in this case the approximation rate depends only on the dimension of the feature space and not on the input dimension.
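To make the stated rates concrete, the following short Python sketch (our illustration, not code from the paper) evaluates the asymptotic bounds quoted in the abstract for sample parameters. Constants and log factors are omitted, all function names are ours, and the factorized variant assumes the natural reading of the final result, namely the same rate with the input dimension d replaced by the feature-space dimension k.

```python
# Illustration of the rates stated in the abstract (asymptotic orders only;
# constants and log factors are omitted, so these are not exact weight counts).

def weight_bound(eps: float, d: int, beta: float) -> float:
    """Order of the number of nonzero weights needed for an L^2 error of eps
    when approximating piecewise C^beta functions on [-1/2, 1/2]^d."""
    assert d >= 2 and beta > 0 and 0 < eps < 1
    return eps ** (-2 * (d - 1) / beta)

def depth_scale(d: int, beta: float) -> float:
    """Order of the minimal depth needed to achieve the optimal rate."""
    return beta / d

def factorized_weight_bound(eps: float, k: int, beta: float) -> float:
    """Assumed analogue for f = g o tau: the rate depends on the dimension k
    of the feature space instead of the input dimension (our assumption)."""
    return weight_bound(eps, k, beta)

if __name__ == "__main__":
    for d, beta, eps in [(2, 1.0, 1e-2), (2, 2.0, 1e-2), (10, 2.0, 1e-2)]:
        print(f"d={d}, beta={beta}: weights ~ {weight_bound(eps, d, beta):.3g}, "
              f"depth ~ {depth_scale(d, beta):.2f}")
    # A 10-dimensional input with a 2-dimensional feature space behaves like d = 2:
    print(f"factorized, k=2: weights ~ {factorized_weight_bound(1e-2, 2, 2.0):.3g}")
```

For d = 2 and β = 2, for instance, the weight count grows like ε^(−1), so halving the target error roughly doubles the network size; for d = 10 the same smoothness already gives ε^(−9), which is the curse of dimension that the factorization f = g∘τ avoids.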
Authors Petersen, Philipp (ORCID 0000-0003-3566-1020; pc.petersen.pp@gmail.com)
Voigtlaender, Felix (felix@voigtlaender.xyz)
Copyright © 2018 Elsevier Ltd. All rights reserved.
Discipline Computer Science
Keywords Deep neural networks
Function approximation
Curse of dimension
Metric entropy
Piecewise smooth functions
Sparse connectivity
PMID 30245431
PageCount 35