Strong mixed-integer programming formulations for trained neural networks

Bibliographic Details
Published in: Mathematical Programming, Vol. 183, no. 1–2, pp. 3–39
Main Authors: Anderson, Ross; Huchette, Joey; Ma, Will; Tjandraatmadja, Christian; Vielma, Juan Pablo
Format: Journal Article
Language: English
Published: Berlin/Heidelberg: Springer Berlin Heidelberg; Springer Nature B.V., 01.09.2020

Abstract: We present strong mixed-integer programming (MIP) formulations for high-dimensional piecewise linear functions that correspond to trained neural networks. These formulations can be used for a number of important tasks, such as verifying that an image classification network is robust to adversarial inputs, or solving decision problems where the objective function is a machine learning model. We present a generic framework, which may be of independent interest, that provides a way to construct sharp or ideal formulations for the maximum of d affine functions over arbitrary polyhedral input domains. We apply this result to derive MIP formulations for a number of the most popular nonlinear operations (e.g. ReLU and max pooling) that are strictly stronger than other approaches from the literature. We corroborate this computationally, showing that our formulations are able to offer substantial improvements in solve time on verification tasks for image classification networks.
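For background on the formulations the abstract describes, the following is a minimal sketch of the standard "big-M" MIP encoding of a single ReLU neuron y = max(0, w·x + b), the textbook baseline that the paper's sharp/ideal formulations strengthen. The bounds L, U and the brute-force feasibility check are illustrative assumptions, not the paper's own construction:

```python
# Sketch: textbook big-M MIP encoding of one ReLU neuron y = max(0, pre),
# where pre = w.x + b and known pre-activation bounds satisfy L < 0 < U.
# This is the baseline formulation strengthened by the paper, not its
# ideal formulation.

def relu_bigM_feasible(pre, y, z, L, U, eps=1e-9):
    """True iff (pre, y, z) satisfies the big-M ReLU constraint set."""
    return (y >= pre - eps                    # y >= w.x + b
            and y >= -eps                     # y >= 0
            and y <= pre - L * (1 - z) + eps  # tight when z = 1 (active)
            and y <= U * z + eps              # forces y = 0 when z = 0
            and z in (0, 1))

# For every pre-activation in [L, U], the true ReLU output together with
# the indicator z = 1{pre > 0} is feasible for the encoding.
L_bound, U_bound = -4.0, 5.0
for pre in [-4.0, -1.5, 0.0, 2.0, 5.0]:
    y = max(0.0, pre)
    z = 1 if pre > 0 else 0
    assert relu_bigM_feasible(pre, y, z, L_bound, U_bound)
```

The encoding is exact at integer z but its linear-programming relaxation can be loose; tightening that relaxation (for ReLU, max pooling, and general maxima of d affine functions) is precisely the gap the paper's formulations address.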
Authors and Affiliations:
– Anderson, Ross (Google Inc)
– Huchette, Joey (Rice University; ORCID: 0000-0003-3552-0316; email: joehuchette@rice.edu)
– Ma, Will (Columbia University; ORCID: 0000-0002-2420-4468)
– Tjandraatmadja, Christian (Google Inc)
– Vielma, Juan Pablo (Google Inc; Massachusetts Institute of Technology)
Copyright: Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2020.
DOI: 10.1007/s10107-020-01474-5
Discipline: Engineering; Mathematics
EISSN: 1436-4646
GroupedDBID --K
--Z
-52
-5D
-5G
-BR
-EM
-Y2
-~C
-~X
.4S
.86
.DC
.VR
06D
0R~
0VY
199
1B1
1N0
1OL
1SB
203
28-
29M
2J2
2JN
2JY
2KG
2KM
2LR
2P1
2VQ
2~H
30V
3V.
4.4
406
408
409
40D
40E
5GY
5QI
5VS
67Z
6NX
6TJ
78A
7WY
88I
8AO
8FE
8FG
8FL
8TC
8UJ
8VB
95-
95.
95~
96X
AAAVM
AABHQ
AACDK
AAHNG
AAIAL
AAJBT
AAJKR
AANZL
AARHV
AARTL
AASML
AATNV
AATVU
AAUYE
AAWCG
AAYIU
AAYQN
AAYTO
AAYZH
ABAKF
ABBBX
ABBXA
ABDBF
ABDZT
ABECU
ABFTV
ABHLI
ABHQN
ABJCF
ABJNI
ABJOX
ABKCH
ABKTR
ABMNI
ABMQK
ABNWP
ABQBU
ABQSL
ABSXP
ABTEG
ABTHY
ABTKH
ABTMW
ABULA
ABUWG
ABWNU
ABXPI
ACAOD
ACBXY
ACDTI
ACGFS
ACGOD
ACHSB
ACHXU
ACIWK
ACKNC
ACMDZ
ACMLO
ACNCT
ACOKC
ACOMO
ACPIV
ACUHS
ACZOJ
ADHHG
ADHIR
ADIMF
ADINQ
ADKNI
ADKPE
ADRFC
ADTPH
ADURQ
ADYFF
ADZKW
AEBTG
AEFIE
AEFQL
AEGAL
AEGNC
AEJHL
AEJRE
AEKMD
AEMOZ
AEMSY
AENEX
AEOHA
AEPYU
AESKC
AETLH
AEVLU
AEXYK
AFBBN
AFEXP
AFFNX
AFGCZ
AFKRA
AFLOW
AFQWF
AFWTZ
AFZKB
AGAYW
AGDGC
AGGDS
AGJBK
AGMZJ
AGQEE
AGQMX
AGRTI
AGWIL
AGWZB
AGYKE
AHAVH
AHBYD
AHKAY
AHQJS
AHSBF
AHYZX
AIAKS
AIGIU
AIIXL
AILAN
AITGF
AJBLW
AJRNO
AJZVZ
AKVCP
ALMA_UNASSIGNED_HOLDINGS
ALWAN
AMKLP
AMXSW
AMYLF
AMYQR
AOCGG
ARAPS
ARCSS
ARMRJ
ASPBG
AVWKF
AXYYD
AYJHY
AZFZN
AZQEC
B-.
B0M
BA0
BAPOH
BBWZM
BDATZ
BENPR
BEZIV
BGLVJ
BGNMA
BPHCQ
BSONS
CAG
CCPQU
COF
CS3
CSCUP
DDRTE
DL5
DNIVK
DPUIP
DU5
DWQXO
EAD
EAP
EBA
EBLON
EBR
EBS
EBU
ECS
EDO
EIOEI
EJD
EMI
EMK
EPL
ESBYG
EST
ESX
FEDTE
FERAY
FFXSO
FIGPU
FINBP
FNLPD
FRNLG
FRRFC
FSGXE
FWDCC
GGCAI
GGRSB
GJIRD
GNUQQ
GNWQR
GQ6
GQ7
GQ8
GROUPED_ABI_INFORM_COMPLETE
GXS
H13
HCIFZ
HF~
HG5
HG6
HMJXF
HQYDN
HRMNR
HVGLF
HZ~
H~9
I-F
I09
IAO
IHE
IJ-
IKXTQ
ITM
IWAJR
IXC
IZIGR
IZQ
I~X
I~Z
J-C
J0Z
JBSCW
JCJTX
JZLTJ
K1G
K60
K6V
K6~
K7-
KDC
KOV
KOW
L6V
LAS
LLZTM
M0C
M0N
M2P
M4Y
M7S
MA-
N2Q
N9A
NB0
NDZJH
NPVJJ
NQ-
NQJWS
NU0
O9-
O93
O9G
O9I
O9J
OAM
P19
P2P
P62
P9R
PF0
PQBIZ
PQBZA
PQQKQ
PROAC
PT4
PT5
PTHSS
Q2X
QOK
QOS
QWB
R4E
R89
R9I
RHV
RIG
RNI
RNS
ROL
RPX
RPZ
RSV
RZK
S16
S1Z
S26
S27
S28
S3B
SAP
SCLPG
SDD
SDH
SDM
SHX
SISQX
SJYHP
SMT
SNE
SNPRN
SNX
SOHCF
SOJ
SPISZ
SRMVM
SSLCW
STPWE
SZN
T13
T16
TH9
TN5
TSG
TSK
TSV
TUC
TUS
U2A
UG4
UOJIU
UTJUX
UZXMN
VC2
VFIZW
W23
W48
WH7
WK8
XPP
YLTOR
Z45
Z5O
Z7R
Z7S
Z7X
Z7Y
Z7Z
Z81
Z83
Z86
Z88
Z8M
Z8N
Z8R
Z8T
Z8W
Z92
ZL0
ZMTXR
ZWQNP
~02
~8M
~EX
AAPKM
AAYXX
ABBRH
ABDBE
ABFSG
ACSTC
ADHKG
ADXHL
AEZWR
AFDZB
AFHIU
AFOHR
AGQPQ
AHPBZ
AHWEU
AIXLP
AMVHM
ATHPR
AYFIA
CITATION
PHGZM
PHGZT
7SC
8FD
ABRTQ
JQ2
L7M
L~C
L~D
ID FETCH-LOGICAL-c319t-a5a3773760ffd5c1b51125287462e76b9fa95803b56de04ae0b4247bd21bfc0c3
IEDL.DBID U2A
ISSN: 0025-5610
Peer Reviewed: true
Scholarly: true
Keywords: Formulations; Deep learning; Mixed-integer programming; 90C11
Page Count: 37
Publication Subtitle: A Publication of the Mathematical Optimization Society
References Bartolini, A., Lombardi, M., Milano, M., Benini, L.: Optimization and controlled systems: a case study on thermal aware workload dispatching. In: Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, pp. 427–433 (2012)
LombardiMMilanoMBartoliniAEmpirical decision model learningArtif. Intell.201724434336736060041404.68113
BertsimasDTsitsiklisJIntroduction to Linear Optimization1997Belmont, MAAthena Scientific
VielmaJPNemhauserGModeling disjunctive constraints with a logarithmic number of binary variables and constraintsMath. Program.20111281–2497228109521218.90137
Haneveld, W.K.K.: Robustness against dependence in pert: an application of duality and distributions with known marginals. In: Stochastic Programming 84 Part I, pp. 153–182. Springer (1986)
Hanin, B.: Universal function approximation by deep neural nets with bounded width and ReLU activations (2017). arXiv preprint arXiv:1708.02691
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization (2014). arxiv:1412.6980
ZengHEdwardsMDLiuGGiffordDKConvolutional neural network architectures for predicting DNA-protein bindingBioinformatics20163212121127
GrimstadBAnderssonHReLU networks as surrogate models in mixed-integer linear programsComput. Chem. Eng.2019131106580
Ryu, M., Chow, Y., Anderson, R., Tjandraatmadja, C., Boutilier, C.: CAQL: Continuous action Q-learning (2019). arxiv:1909.12397
VielmaJPMixed integer linear programming formulation techniquesSIAM Rev.201557135733065821338.90277
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57 (2017)
BalasEDisjunctive programming: properties of the convex hull of feasible pointsDiscrete Appl. Math.19988934416630990921.90118
Wu, G., Say, B., Sanner, S.: Scalable planning with Tensorflow for hybrid nonlinear domains. In: Advances in Neural Information Processing Systems, pp. 6276–6286 (2017)
Anderson, R., Huchette, J., Tjandraatmadja, C., Vielma, J.P.: Strong mixed-integer programming formulations for trained neural networks. In: A. Lodi, V. Nagarajan (eds.) Proceedings of the 20th Conference on Integer Programming and Combinatorial Optimization, pp. 27–42. Springer International Publishing, Cham (2019). arxiv:1811.08359
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. In: International Conference on Learning Representations (2014)
Tjeng, V., Xiao, K., Tedrake, R.: Verifying neural networks with mixed integer programming. In: International Conference on Learning Representations (2019)
Wong, E., Schmidt, F., Metzen, J.H., Kolter, J.Z.: Scaling provable adversarial defenses. In: 32nd Conference on Neural Information Processing Systems (2018)
HuberBRambauJSantosFThe Cayley Trick, lifting subdivisions and the Bohne-Dress theorem of zonotopal tiltingsJ. Eur. Math. Soc.20002217919817633040988.52017
Olah, C., Mordvintsev, A., Schubert, L.: Feature visualization. Distill (2017). https://distill.pub/2017/feature-visualization. Accessed 6 Feb 2020
Huchette, J.: Advanced mixed-integer programming formulations: methodology, computation, and application. Ph.D. thesis, Massachusetts Institute of Technology (2018)
JeroslowRGAlternative formulations of mixed integer programsAnn. Oper. Res.198812241276948042
Cheng, C.H., Nührenberg, G., Ruess, N.: Maximum resilience of artifical neural networks. In: International Symposium on Automated Technology for Verification and Analysis. Springer, Cham (2017)
WeissGStochastic bounds on distributions of optimal value functions with applications to pert, network flows and reliabilityOper. Res.19863445956058742980609.90093
SchweidtmannAMMitsosAGlobal deterministic optimization with artificial neural networks embeddedJ. Optim. Theory Appl.201918092594839139161407.90263
VielmaJPEmbedding formulations and complexity for unions of polyhedraManage. Sci.2018641044714965
Dvijotham, K., Gowal, S., Stanforth, R., Arandjelovic, R., O’Donoghue, B., Uesato, J., Kohli, P.: Training verified learners with learned verifiers (2018). arxiv:1805.10265
VielmaJPSmall and strong formulations for unions of convex sets from the Cayley embeddingMath. Program.2018177215339871931418.90180
Liu, C., Arnon, T., Lazarus, C., Barrett, C., Kochenderfer, M.J.: Algorithms for verifying deep neural networks (2019). arxiv:1903.06758
Lombardi, M., Milano, M.: Boosting combinatorial problem modeling with machine learning. In: Proceedings IJCAI, pp. 5472–5478 (2018)
Bastani, O., Ioannou, Y., Lampropoulos, L., Vytiniotis, D., Nori, A.V., Criminisi, A.: Measuring neural net robustness with constraints. In: Advances in Neural Information Processing Systems, pp. 2613–2621 (2016)
Dutta, S., Jha, S., Sanakaranarayanan, S., Tiwari, A.: Output range analysis for deep feedforward neural networks. In: NASA Formal Methods Symposium (2018)
Raghunathan, A., Steinhardt, J., Liang, P.: Semidefinite relaxations for certifying robustness to adversarial examples. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS’18, pp. 10,900–10,910. Curran Associates Inc. (2018)
Bartolini, A., Lombardi, M., Milano, M., Benini, L.: Neuron constraints to model complex real-world problems. In: International Conference on the Principles and Practice of Constraint Programming, pp. 115–129. Springer, Berlin (2011)
Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: ICML Workshop on Deep Learning for Audio, Speech and Language (2013)
TrespalaciosFGrossmannIEImproved big-M reformulation for generalized disjunctive programsComput. Chem. Eng.20157698103
Boureau, Y.L., Bach, F., LeCun, Y., Ponce, J.: Learning mid-level features for recognition. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2559–2566 (2010)
LombardiMGualandiSA lagrangian propagator for artificial neural networks in constraint programmingConstraints201621443546235513211368.90148
Lomuscio, A., Maganti, L.: An approach to reachability analysis for feed-forward ReLU neural networks (2017). arxiv:1706.07351
Gatys, L.A., Ecker, A.S., Bethge, M.: A neural algorithm of artistic style (2015). arxiv:1508.06576
LeCunYBengioYHintonGDeep learningNature20155217553436444
Xiao, K.Y., Tjeng, V., Shafiullah, N.M., Madry, A.: Training for faster adversarial robustness verification via inducing ReLU stability. In: International Conference on Learning Representations (2019)
Bienstock, D., Muñoz, G., Pokutta, S.: Principled deep neural network training through linear programming (2018). arXiv preprint arXiv:1810.03218
Serra, T., Ramalingam, S.: Empirical bounds on linear regions of deep rectifier networks (2018). arxiv:1810.03370
TawarmalaniMSahinidisNConvexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming: Theory, Algorithms, Software and Applications2002BerlinSpringer1031.90022
AtamtürkAGómezAStrong formulations for quadratic optimization with M-matrices and indicator variablesMath. Program.201817014117638165611391.90423
GoodfellowIBengioYCourvilleADeep Learning2016CambridgeMIT Press1373.68009
Serra, T., Tjandraatmadja, C., Ramalingam, S.: Bounding and counting linear regions of deep neural networks. In: Thirty-Fifth International Conference on Machine Learning (2018)
Mordvintsev, A., Olah, C., Tyka, M.: Inceptionism: Going deeper into neural networks (2015). https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html. Accessed 6 Feb 2020
Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: Exploring the landscape of spatial robustness. In: Chaudhuri, K., Salakhutdinov, R. (eds.) Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 97, pp. 1802–1811. PMLR, Long Beach, CA (2019). http://proceedings.mlr.press/v97/engstrom19a.html. Accessed 6 Feb 2020
Dulac-Arnold, G., Evans, R., van Hasselt, H., Sunehag, P., Lillicrap, T., Hunt, J., Mann, T., Weber, T., Degris, T., Coppin, B.: Deep reinforcement learning in large discrete action spaces (2015). arxiv:1512.07679
Arora, R., Basu, A., Mianjy, P., Mukherjee, A.: Understanding deep neural networks with rectified linear units (2016). arXiv preprint arXiv:1611.01491
HijaziHBonamiPCornuéjolsGOuorouAMixed-integer nonlinear programs featuring “on/off” constraintsComput. Optim. Appl.201252253755829257851250.90058
Khalil, E.B., Gupta, A., Dilkina, B.: Combinatorial attacks on binarized neural networks. In: International Conference on Learning Representations (2019)
Wong, E., Kolter, J.Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. In: International Conference on Machine Learning (2018)
Salman, H., Yang, G., Zhang, H., Hsieh, C.J., Zhang, P.: A convex relaxation barrier to tight robustness verification of neural networks (2019). arxiv:1902.08722
ArulkumaranKDeisenrothMPBrundageMBharathAADeep reinforcement learning: a brief surveyIEEE Signal Process. Mag.20173462638
Goodfellow, I.J., Warde-Farley, D., Mirza, M., Courville, A., Bengio, Y.: Maxout networks. In: Proceedings of the 30th International Conference on Machine Learning, vol. 28, pp. 1319–1327 (2013)
Amos, B., Xu, L., Kolter, J.Z.: Input convex neural networks. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 146–155. PMLR, International Convention Centre, Sydney (2017)
BonamiPLodiATramontaniAWieseSOn mathematical programming with indicator constraintsMath. Program.2015151119122333475541328.90086
Dvijotham, K., Stanforth, R., Gowal, S., Mann, T., Kohli, P.: A dual approach to scalable verification of deep networks. In: Thirty-Fourth Conference Annual Conference on Uncertainty in Artificial Intelligence (2018)
BelottiPBonamiPFischettiMLodiAMonaciMNogales-GomezASalvagninDOn handling indicator constraints in mixed integer programmingComput. Optim. Appl.201665354556635696101357.90094
BalasEDisjunctive programming and a hierarchy
E Balas (1474_CR9) 1998; 89
1474_CR52
1474_CR50
1474_CR12
1474_CR56
1474_CR11
1474_CR55
1474_CR10
B Grimstad (1474_CR34) 2019; 131
1474_CR54
1474_CR15
M Lombardi (1474_CR51) 2016; 21
1474_CR59
JP Vielma (1474_CR72) 2015; 57
1474_CR57
1474_CR19
1474_CR18
K Arulkumaran (1474_CR6) 2017; 34
G Weiss (1474_CR77) 1986; 34
B Alipanahi (1474_CR2) 2015; 33
F Trespalacios (1474_CR71) 2015; 76
CM Bishop (1474_CR16) 2006
R Jeroslow (1474_CR41) 1984; 22
1474_CR81
1474_CR80
M Tawarmalani (1474_CR69) 2002
1474_CR40
JP Vielma (1474_CR74) 2018; 177
1474_CR82
1474_CR45
RG Jeroslow (1474_CR42) 1988; 12
1474_CR44
1474_CR43
1474_CR47
K Natarajan (1474_CR58) 2009; 55
P Belotti (1474_CR13) 2016; 65
H Zeng (1474_CR83) 2016; 32
E Balas (1474_CR8) 1985; 6
1474_CR70
1474_CR30
H Hijazi (1474_CR37) 2012; 52
M Lombardi (1474_CR53) 2017; 244
1474_CR78
1474_CR33
1474_CR76
1474_CR31
AM Schweidtmann (1474_CR65) 2019; 180
1474_CR38
1474_CR36
1474_CR35
1474_CR79
Y LeCun (1474_CR49) 1998; 86
B Huber (1474_CR39) 2000; 2
B Korte (1474_CR46) 2000
1474_CR1
I Goodfellow (1474_CR32) 2016
1474_CR3
1474_CR5
1474_CR4
1474_CR63
D Bertsimas (1474_CR14) 1997
P Bonami (1474_CR17) 2015; 151
1474_CR62
1474_CR61
1474_CR60
1474_CR23
1474_CR67
1474_CR22
1474_CR66
1474_CR21
JP Vielma (1474_CR73) 2018; 64
1474_CR20
1474_CR64
1474_CR27
1474_CR26
1474_CR25
1474_CR24
1474_CR68
A Atamtürk (1474_CR7) 2018; 170
1474_CR28
M Fischetti (1474_CR29) 2018; 23
Y LeCun (1474_CR48) 2015; 521
JP Vielma (1474_CR75) 2011; 128
References_xml – reference: BalasEDisjunctive programming: properties of the convex hull of feasible pointsDiscrete Appl. Math.19988934416630990921.90118
– reference: Ryu, M., Chow, Y., Anderson, R., Tjandraatmadja, C., Boutilier, C.: CAQL: Continuous action Q-learning (2019). arxiv:1909.12397
– reference: Mladenov, M., Boutilier, C., Schuurmans, D., Elidan, G., Meshi, O., Lu, T.: Approximate linear programming for logistic Markov decision processes. In: Proceedings of the Twenty-sixth International Joint Conference on Artificial Intelligence (IJCAI-17), pp. 2486–2493. Melbourne, Australia (2017)
– reference: Bartolini, A., Lombardi, M., Milano, M., Benini, L.: Neuron constraints to model complex real-world problems. In: International Conference on the Principles and Practice of Constraint Programming, pp. 115–129. Springer, Berlin (2011)
– reference: Bunel, R., Turkaslan, I., Torr, P.H., Kohli, P., Kumar, M.P.: A unified view of piecewise linear neural network verification. In: Advances in Neural Information Processing Systems (2018)
– reference: SchweidtmannAMMitsosAGlobal deterministic optimization with artificial neural networks embeddedJ. Optim. Theory Appl.201918092594839139161407.90263
– reference: Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: International Conference on Computer Aided Verification, pp. 97–117 (2017)
– reference: Cheng, C.H., Nührenberg, G., Ruess, N.: Maximum resilience of artifical neural networks. In: International Symposium on Automated Technology for Verification and Analysis. Springer, Cham (2017)
– reference: Amos, B., Xu, L., Kolter, J.Z.: Input convex neural networks. In: Precup, D., Teh, Y.W. (eds.) Proceedings of the 34th International Conference on Machine Learning, vol. 70, pp. 146–155. PMLR, International Convention Centre, Sydney (2017)
– reference: Tjeng, V., Xiao, K., Tedrake, R.: Verifying neural networks with mixed integer programming. In: International Conference on Learning Representations (2019)
– reference: Olah, C., Mordvintsev, A., Schubert, L.: Feature visualization. Distill (2017). https://distill.pub/2017/feature-visualization. Accessed 6 Feb 2020
– reference: Hijazi, H., Bonami, P., Ouorou, A.: A note on linear on/off constraints (2014). http://www.optimization-online.org/DB_FILE/2014/04/4309.pdf. Accessed 6 Feb 2020
– reference: Chen, L., Ma, W., Natarajan, K., Simchi-Levi, D., Yan, Z.: Distributionally robust linear and discrete optimization with marginals. Available at SSRN 3159473 (2018)
– reference: Dutta, S., Jha, S., Sanakaranarayanan, S., Tiwari, A.: Output range analysis for deep feedforward neural networks. In: NASA Formal Methods Symposium (2018)
– reference: Say, B., Wu, G., Zhou, Y.Q., Sanner, S.: Nonlinear hybrid planning with deep net learned transition models and mixed-integer linear programming. In: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI-17, pp. 750–756 (2017)
– reference: Raghunathan, A., Steinhardt, J., Liang, P.: Semidefinite relaxations for certifying robustness to adversarial examples. In: Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS’18, pp. 10,900–10,910. Curran Associates Inc. (2018)
– reference: Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization (2014). arxiv:1412.6980
– reference: BonamiPLodiATramontaniAWieseSOn mathematical programming with indicator constraintsMath. Program.2015151119122333475541328.90086
– reference: HijaziHBonamiPCornuéjolsGOuorouAMixed-integer nonlinear programs featuring “on/off” constraintsComput. Optim. Appl.201252253755829257851250.90058
– reference: Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57 (2017)
– reference: VielmaJPSmall and strong formulations for unions of convex sets from the Cayley embeddingMath. Program.2018177215339871931418.90180
– reference: Wu, G., Say, B., Sanner, S.: Scalable planning with Tensorflow for hybrid nonlinear domains. In: Advances in Neural Information Processing Systems, pp. 6276–6286 (2017)
– reference: Wong, E., Schmidt, F., Metzen, J.H., Kolter, J.Z.: Scaling provable adversarial defenses. In: 32nd Conference on Neural Information Processing Systems (2018)
– reference: Xiao, K.Y., Tjeng, V., Shafiullah, N.M., Madry, A.: Training for faster adversarial robustness verification via inducing ReLU stability. In: International Conference on Learning Representations (2019)
– reference: Dulac-Arnold, G., Evans, R., van Hasselt, H., Sunehag, P., Lillicrap, T., Hunt, J., Mann, T., Weber, T., Degris, T., Coppin, B.: Deep reinforcement learning in large discrete action spaces (2015). arxiv:1512.07679
– reference: LombardiMGualandiSA lagrangian propagator for artificial neural networks in constraint programmingConstraints201621443546235513211368.90148
– reference: Dvijotham, K., Gowal, S., Stanforth, R., Arandjelovic, R., O’Donoghue, B., Uesato, J., Kohli, P.: Training verified learners with learned verifiers (2018). arxiv:1805.10265
– reference: Serra, T., Tjandraatmadja, C., Ramalingam, S.: Bounding and counting linear regions of deep neural networks. In: Thirty-Fifth International Conference on Machine Learning (2018)
– reference: BishopCMPattern Recognition and Machine Learning2006BerlinSpringer1107.68072
– reference: Huchette, J.: Advanced mixed-integer programming formulations: methodology, computation, and application. Ph.D. thesis, Massachusetts Institute of Technology (2018)
– reference: Boureau, Y.L., Bach, F., LeCun, Y., Ponce, J.: Learning mid-level features for recognition. In: IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2559–2566 (2010)
– reference: Kumar, A., Serra, T., Ramalingam, S.: Equivalent and approximate transformations of deep neural networks (2019). arxiv:1905.11428
– reference: Hanin, B.: Universal function approximation by deep neural nets with bounded width and ReLU activations (2017). arXiv preprint arXiv:1708.02691
– reference: Zeng, H., Edwards, M.D., Liu, G., Gifford, D.K.: Convolutional neural network architectures for predicting DNA–protein binding. Bioinformatics 32(12), 121–127 (2016)
– reference: Balas, E.: Disjunctive programming and a hierarchy of relaxations for discrete optimization problems. SIAM J. Algorithmic Discrete Methods 6(3), 466–486 (1985)
– reference: Lombardi, M., Milano, M., Bartolini, A.: Empirical decision model learning. Artif. Intell. 244, 343–367 (2017)
– reference: Arora, R., Basu, A., Mianjy, P., Mukherjee, A.: Understanding deep neural networks with rectified linear units (2016). arXiv preprint arXiv:1611.01491
– reference: Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z.B., Swami, A.: The limitations of deep learning in adversarial settings. In: IEEE European Symposium on Security and Privacy, pp. 372–387 (2016)
– reference: Grimstad, B., Andersson, H.: ReLU networks as surrogate models in mixed-integer linear programs. Comput. Chem. Eng. 131, 106580 (2019)
– reference: Trespalacios, F., Grossmann, I.E.: Improved big-M reformulation for generalized disjunctive programs. Comput. Chem. Eng. 76, 98–103 (2015)
– reference: Bastani, O., Ioannou, Y., Lampropoulos, L., Vytiniotis, D., Nori, A.V., Criminisi, A.: Measuring neural net robustness with constraints. In: Advances in Neural Information Processing Systems, pp. 2613–2621 (2016)
– reference: Glorot, X., Bordes, A., Bengio, Y.: Deep sparse rectifier neural networks. In: Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, pp. 315–323 (2011)
– reference: LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
– reference: Gatys, L.A., Ecker, A.S., Bethge, M.: A neural algorithm of artistic style (2015). arxiv:1508.06576
– reference: Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., Fergus, R.: Intriguing properties of neural networks. In: International Conference on Learning Representations (2014)
– reference: Bartolini, A., Lombardi, M., Milano, M., Benini, L.: Optimization and controlled systems: a case study on thermal aware workload dispatching. In: Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence, pp. 427–433 (2012)
– reference: Belotti, P., Bonami, P., Fischetti, M., Lodi, A., Monaci, M., Nogales-Gomez, A., Salvagnin, D.: On handling indicator constraints in mixed integer programming. Comput. Optim. Appl. 65(3), 545–566 (2016)
– reference: Atamtürk, A., Gómez, A.: Strong formulations for quadratic optimization with M-matrices and indicator variables. Math. Program. 170, 141–176 (2018)
– reference: Tawarmalani, M., Sahinidis, N.: Convexification and Global Optimization in Continuous and Mixed-Integer Nonlinear Programming: Theory, Algorithms, Software and Applications. Springer, Berlin (2002)
– reference: Goodfellow, I.J., Warde-Farley, D., Mirza, M., Courville, A., Bengio, Y.: Maxout networks. In: Proceedings of the 30th International Conference on Machine Learning, vol. 28, pp. 1319–1327 (2013)
– reference: Jeroslow, R., Lowe, J.: Modelling with integer variables. Math. Program. Study 22, 167–184 (1984)
– reference: Mordvintsev, A., Olah, C., Tyka, M.: Inceptionism: Going deeper into neural networks (2015). https://ai.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html. Accessed 6 Feb 2020
– reference: Wong, E., Kolter, J.Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. In: International Conference on Machine Learning (2018)
– reference: Bienstock, D., Muñoz, G., Pokutta, S.: Principled deep neural network training through linear programming (2018). arXiv preprint arXiv:1810.03218
– reference: Engstrom, L., Tran, B., Tsipras, D., Schmidt, L., Madry, A.: Exploring the landscape of spatial robustness. In: Chaudhuri, K., Salakhutdinov, R. (eds.) Proceedings of the 36th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 97, pp. 1802–1811. PMLR, Long Beach, CA (2019). http://proceedings.mlr.press/v97/engstrom19a.html. Accessed 6 Feb 2020
– reference: https://developers.google.com/machine-learning/glossary/#logits. Accessed 6 Feb 2020
– reference: Bertsimas, D., Tsitsiklis, J.: Introduction to Linear Optimization. Athena Scientific, Belmont, MA (1997)
– reference: Xu, B., Wang, N., Chen, T., Li, M.: Empirical evaluation of rectified activations in convolution network (2015). arxiv:1505.00853
– reference: Lombardi, M., Milano, M.: Boosting combinatorial problem modeling with machine learning. In: Proceedings IJCAI, pp. 5472–5478 (2018)
– reference: Lomuscio, A., Maganti, L.: An approach to reachability analysis for feed-forward ReLU neural networks (2017). arxiv:1706.07351
– reference: Liu, C., Arnon, T., Lazarus, C., Barrett, C., Kochenderfer, M.J.: Algorithms for verifying deep neural networks (2019). arxiv:1903.06758
– reference: Arulkumaran, K., Deisenroth, M.P., Brundage, M., Bharath, A.A.: Deep reinforcement learning: a brief survey. IEEE Signal Process. Mag. 34(6), 26–38 (2017)
– reference: Fischetti, M., Jo, J.: Deep neural networks and mixed integer linear optimization. Constraints 23, 296–309 (2018)
– reference: Huber, B., Rambau, J., Santos, F.: The Cayley trick, lifting subdivisions and the Bohne–Dress theorem on zonotopal tilings. J. Eur. Math. Soc. 2(2), 179–198 (2000)
– reference: Weibel, C.: Minkowski sums of polytopes: combinatorics and computation. Ph.D. thesis, École Polytechnique Fédérale de Lausanne (2007)
– reference: Vielma, J.P., Nemhauser, G.: Modeling disjunctive constraints with a logarithmic number of binary variables and constraints. Math. Program. 128(1–2), 49–72 (2011)
– reference: Khalil, E.B., Gupta, A., Dilkina, B.: Combinatorial attacks on binarized neural networks. In: International Conference on Learning Representations (2019)
– reference: Dvijotham, K., Stanforth, R., Gowal, S., Mann, T., Kohli, P.: A dual approach to scalable verification of deep networks. In: Thirty-Fourth Conference Annual Conference on Uncertainty in Artificial Intelligence (2018)
– reference: Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016)
– reference: Haneveld, W.K.K.: Robustness against dependence in pert: an application of duality and distributions with known marginals. In: Stochastic Programming 84 Part I, pp. 153–182. Springer (1986)
– reference: Salman, H., Yang, G., Zhang, H., Hsieh, C.J., Zhang, P.: A convex relaxation barrier to tight robustness verification of neural networks (2019). arxiv:1902.08722
– reference: Alipanahi, B., Delong, A., Weirauch, M.T., Frey, B.J.: Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning. Nat. Biotechnol. 33, 831–838 (2015)
– reference: Vielma, J.P.: Embedding formulations and complexity for unions of polyhedra. Manage. Sci. 64(10), 4471–4965 (2018)
– reference: Vielma, J.P.: Mixed integer linear programming formulation techniques. SIAM Rev. 57(1), 3–57 (2015)
– reference: Natarajan, K., Song, M., Teo, C.P.: Persistency model and its applications in choice modeling. Manage. Sci. 55(3), 453–469 (2009)
– reference: Anderson, R., Huchette, J., Tjandraatmadja, C., Vielma, J.P.: Strong mixed-integer programming formulations for trained neural networks. In: A. Lodi, V. Nagarajan (eds.) Proceedings of the 20th Conference on Integer Programming and Combinatorial Optimization, pp. 27–42. Springer International Publishing, Cham (2019). arxiv:1811.08359
– reference: LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324 (1998)
– reference: Maas, A.L., Hannun, A.Y., Ng, A.Y.: Rectifier nonlinearities improve neural network acoustic models. In: ICML Workshop on Deep Learning for Audio, Speech and Language (2013)
– reference: Serra, T., Ramalingam, S.: Empirical bounds on linear regions of deep rectifier networks (2018). arxiv:1810.03370
– reference: Weiss, G.: Stochastic bounds on distributions of optimal value functions with applications to PERT, network flows and reliability. Oper. Res. 34(4), 595–605 (1986)
– reference: Jeroslow, R.G.: Alternative formulations of mixed integer programs. Ann. Oper. Res. 12, 241–276 (1988)
– reference: Korte, B., Vygen, J.: Combinatorial Optimization: Theory and Algorithms. Springer, Berlin (2000)
– reference: Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: International Symposium on Automated Technology for Verification and Analysis. Springer, Cham (2017)
SubjectTerms Calculus of Variations and Optimal Control; Optimization
Combinatorics
Full Length Paper
Image classification
Integer programming
Linear functions
Linear programming
Machine learning
Mathematical and Computational Physics
Mathematical Methods in Physics
Mathematics
Mathematics and Statistics
Mathematics of Computing
Mixed integer
Neural networks
Numerical Analysis
Theoretical
Title Strong mixed-integer programming formulations for trained neural networks
URI https://link.springer.com/article/10.1007/s10107-020-01474-5
https://www.proquest.com/docview/2435848940
Volume 183