SYSTEM AND METHOD FOR ADVERSARIAL VULNERABILITY TESTING OF MACHINE LEARNING MODELS

Bibliographic Details
Main Authors: WU, Ga; DING, Weiguang; HASHEMI AMROABADI, Sayedmasoud; CASTIGLIONE, Giuseppe Marcello Antonio; SRINIVASA, Christopher Côté
Format: Patent
Language: English
Published: 01.12.2022

Summary: A system and method for adversarial vulnerability testing of machine learning models is proposed that receives, as input, a representation of a non-differentiable machine learning model, transforms the input model into a smoothed model, and conducts an adversarial search against the smoothed model to generate an output data value representative of a potential vulnerability to adversarial examples. Variant embodiments are also proposed, directed to noise injection, hyperparameter control, and exhaustive/sampling-based searches, in an effort to balance computational efficiency and accuracy in practical implementation. Flagged vulnerabilities can be used to trigger re-validation or re-training of models, or their removal from use, due to an increased cybersecurity risk profile.
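The pipeline described in the summary (smooth a non-differentiable model via noise injection, then run a gradient-guided adversarial search against the smoothed surrogate) can be sketched roughly as follows. This is a minimal illustrative sketch, not the patented method: the toy `hard_model`, the Gaussian smoothing parameters, and the finite-difference sign-step search are all assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_model(x):
    # Toy non-differentiable model: a piecewise-constant decision rule
    # (gradients are zero almost everywhere, so direct gradient attacks fail).
    return float(x[0] > 0.5 and x[1] > 0.5)

def smoothed_model(x, sigma=0.2, n_samples=1000):
    # Noise-injection smoothing: Monte Carlo average of the hard model
    # under Gaussian input perturbations yields a differentiable-in-expectation score.
    noise = rng.normal(0.0, sigma, size=(n_samples, x.shape[0]))
    return float(np.mean([hard_model(x + z) for z in noise]))

def estimate_gradient(x, eps=0.05):
    # Central finite differences on the smoothed surrogate.
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (smoothed_model(x + e) - smoothed_model(x - e)) / (2 * eps)
    return grad

def adversarial_search(x0, steps=30, lr=0.1):
    # Sign-gradient ascent on the smoothed score, checking the original
    # hard model at each step for a flipped decision (a candidate vulnerability).
    x = x0.copy()
    for _ in range(steps):
        x = x + lr * np.sign(estimate_gradient(x))
        if hard_model(x) != hard_model(x0):
            return x  # candidate adversarial example found
    return None

x0 = np.array([0.3, 0.3])   # originally classified as 0
adv = adversarial_search(x0)
```

A returned `adv` would be flagged as evidence of vulnerability; in a deployment pipeline this flag could feed the re-validation/re-training decision the summary describes. Sampling-based variants would trade `n_samples` (accuracy of the smoothed estimate) against compute cost.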
Bibliography: Application Number: US202217750205