SYSTEM AND METHOD FOR ADVERSARIAL VULNERABILITY TESTING OF MACHINE LEARNING MODELS
Main Authors | , , , , |
---|---|
Format | Patent |
Language | English |
Published | 01.12.2022 |
Summary: | A system and method for adversarial vulnerability testing of machine learning models is proposed. The system receives as input a representation of a non-differentiable machine learning model, transforms it into a smoothed model, and conducts an adversarial search against the smoothed model to generate an output data value representative of a potential vulnerability to adversarial examples. Variant embodiments are also proposed, directed to noise injection, hyperparameter control, and exhaustive or sampling-based searches, in an effort to balance computational efficiency and accuracy in practical implementations. Flagged vulnerabilities can be used to have models re-validated, re-trained, or removed from use due to an increased cybersecurity risk profile. |
---|---|
Bibliography: | Application Number: US202217750205 |
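The abstract's pipeline (smooth a non-differentiable model via noise injection, then run a gradient-based adversarial search against the smoothed surrogate) can be illustrated with a minimal sketch. This is not the patented implementation: `hard_model`, the Gaussian smoothing, the NES-style zeroth-order gradient estimate, and all hyperparameter values (`sigma`, `n`, `steps`, `lr`) are illustrative assumptions standing in for the claimed smoothing transform and hyperparameter control.

```python
import numpy as np

rng = np.random.default_rng(0)

def hard_model(x):
    # Stand-in for a non-differentiable model: a hard threshold classifier.
    # (Hypothetical; the patent targets arbitrary non-differentiable models.)
    return 1.0 if np.sum(x) > 0 else 0.0

def smoothed_score(f, x, sigma=1.0, n=200):
    # Monte Carlo smoothing: average the hard model over Gaussian noise
    # injections, yielding a differentiable-in-expectation surrogate.
    noise = rng.normal(0.0, sigma, size=(n, x.size))
    return float(np.mean([f(x + z) for z in noise]))

def nes_gradient(f, x, sigma=1.0, n=200):
    # Zeroth-order (NES-style) estimate of the smoothed score's gradient,
    # requiring only black-box queries to f -- no autodiff.
    noise = rng.normal(0.0, sigma, size=(n, x.size))
    vals = np.array([f(x + z) for z in noise])
    return (noise * vals[:, None]).mean(axis=0) / sigma**2

def adversarial_search(f, x, steps=20, lr=0.5):
    # Sampling-based descent on the smoothed surrogate, looking for a
    # perturbation that drives the score toward the opposite class.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv - lr * nes_gradient(f, x_adv)
    return x_adv

x = 0.3 * np.ones(8)                      # input the hard model classifies as 1
x_adv = adversarial_search(hard_model, x)
print(smoothed_score(hard_model, x), smoothed_score(hard_model, x_adv))
```

A large drop in the smoothed score between `x` and `x_adv` is the kind of signal that could be emitted as "an output data value representative of a potential vulnerability"; the trade-off between `n` (query cost) and gradient-estimate accuracy mirrors the efficiency/accuracy balance the variant embodiments describe.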