Scalable Interpretability via Polynomials

Bibliographic Details
Published in: arXiv.org
Main Authors: Dubey, Abhimanyu; Radenovic, Filip; Mahajan, Dhruv
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 18.10.2022
Subjects: Combinatorial analysis; Machine learning; Polynomials; Tensors
Online Access: https://www.proquest.com/docview/2671444709
ISSN: 2331-8422

Abstract: Generalized Additive Models (GAMs) have quickly become the leading choice for inherently interpretable machine learning. However, unlike uninterpretable methods such as DNNs, they lack expressive power and easy scalability, and are hence not a feasible alternative for real-world tasks. We present a new class of GAMs that use tensor rank decompositions of polynomials to learn powerful, inherently interpretable models. Our approach, titled Scalable Polynomial Additive Models (SPAM), is effortlessly scalable and models all higher-order feature interactions without a combinatorial parameter explosion. SPAM outperforms all current interpretable approaches and matches DNN/XGBoost performance on a series of real-world benchmarks with up to hundreds of thousands of features. Human subject evaluations show that SPAMs are demonstrably more interpretable in practice and are hence an effortless replacement for DNNs when building interpretable, high-performance systems for large-scale machine learning. Source code is available at https://github.com/facebookresearch/nbm-spam.
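The abstract's central idea, replacing the full tensor of higher-order feature-interaction coefficients with a low-rank decomposition so that parameters grow linearly rather than combinatorially in the number of features, can be illustrated with a short sketch. The PyTorch snippet below is a hedged illustration of the degree-2 case only, not the authors' implementation (their code lives in the linked nbm-spam repository); the class name, rank, and initialization are assumptions made for this example.

import torch
import torch.nn as nn

class LowRankPolynomialSketch(nn.Module):
    # Degree-2 polynomial f(x) = b + w^T x + sum_r s_r * (u_r^T x)^2.
    # The R rank-1 factors u_r stand in for the full d x d interaction
    # matrix, so the parameter count is O(R * d) instead of O(d^2).
    def __init__(self, num_features: int, rank: int = 16):
        super().__init__()
        self.bias = nn.Parameter(torch.zeros(1))
        self.linear = nn.Linear(num_features, 1, bias=False)                 # degree-1 weights w
        self.factors = nn.Parameter(0.01 * torch.randn(rank, num_features))  # rank-1 factors u_r
        self.scales = nn.Parameter(torch.ones(rank))                         # per-factor coefficients s_r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.bias + self.linear(x).squeeze(-1)        # b + w^T x
        proj = x @ self.factors.t()                         # (batch, rank): each column is u_r^T x
        return out + (proj.pow(2) * self.scales).sum(-1)    # + sum_r s_r (u_r^T x)^2

# Usage: even 100,000-dimensional inputs need only ~(2 * rank + 1) * d parameters.
model = LowRankPolynomialSketch(num_features=100_000, rank=16)
scores = model(torch.randn(8, 100_000))                     # shape: (8,)

Each rank-1 term is a function of weighted sums of the raw features, so per-feature and pairwise contributions remain recoverable for inspection; higher-degree interactions would be handled analogously with additional factor sets.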
Copyright 2022. This work is published under http://arxiv.org/licenses/nonexclusive-distrib/1.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
DatabaseName ProQuest SciTech Collection
ProQuest Technology Collection
Materials Science & Engineering Collection (subscription)
ProQuest Central (Alumni)
ProQuest Central UK/Ireland
ProQuest Central Essentials
ProQuest Central
ProQuest Technology Collection
ProQuest One Community College
ProQuest Central Korea
SciTech Premium Collection
ProQuest Engineering Collection
Engineering Database (subscription)
ProQuest Central Premium
ProQuest One Academic (New)
Publicly Available Content Database
ProQuest One Academic Middle East (New)
ProQuest One Academic Eastern Edition (DO NOT USE)
ProQuest One Applied & Life Sciences
ProQuest One Academic
ProQuest One Academic UKI Edition
ProQuest Central China
Engineering Collection
Discipline Physics
EISSN 2331-8422
Genre Working Paper/Pre-Print
IsOpenAccess true
IsPeerReviewed false
IsScholarly false
Language English
OpenAccessLink https://www.proquest.com/docview/2671444709
PQID 2671444709
PQPubID 2050157
PublicationCentury 2000
PublicationDate 20221018
PublicationDateYYYYMMDD 2022-10-18
PublicationDecade 2020
PublicationPlace Ithaca
PublicationTitle arXiv.org
PublicationYear 2022
Publisher Cornell University Library, arXiv.org
SecondaryResourceType preprint
SourceID proquest
SourceType Aggregation Database
SubjectTerms Combinatorial analysis
Machine learning
Polynomials
Tensors
Title Scalable Interpretability via Polynomials
URI https://www.proquest.com/docview/2671444709
hasFullText 1
inHoldings 1
isFullTextHit
isPrint
linkProvider ProQuest