VERIFYING REMOTE EXECUTION OF MACHINE LEARNING INFERENCE UNDER HOMOMORPHIC ENCRYPTION USING PERMUTATIONS
Main Authors | |
---|---|
Format | Patent |
Language | English |
Published | 18.07.2024 |
Subjects | |
Online Access | Get full text |
Summary: | A technique to remotely identify potential compromise of a service provider that performs homomorphic inferencing on a model. For a set of real data samples on which the inferencing is to take place, at least first and second permutations of a set of trigger samples are generated. Each set of samples (both trigger and real samples) is then sent for homomorphic inferencing on the model at least twice, in a secretly permuted way. To improve performance, a permutation is packaged with the real data samples prior to encryption using a general-purpose data structure, a tile tensor, that allows users to store multi-dimensional arrays (tensors) of arbitrary shapes and sizes. In response to receiving one or more results from the HE-based model inferencing, a determination is made as to whether the service provider is compromised. Upon a determination that the service provider is compromised, a given mitigation action is taken. |
---|---|
Bibliography: | Application Number: US202318097995 |
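The abstract describes a consistency check: the same trigger and real samples are submitted at least twice under different secret permutations, and the returned results are compared against each other and against known trigger outputs. The sketch below illustrates that idea in plain Python only; it omits the encryption, decryption, and tile-tensor packing steps from the abstract, and the names `he_infer`, `trigger_samples`, `expected_trigger_outputs`, and `verify_he_inference` are illustrative placeholders, not APIs or terms from the patent.

```python
import numpy as np

def verify_he_inference(he_infer, real_samples, trigger_samples,
                        expected_trigger_outputs, atol=1e-3):
    """Sketch of a permutation-based check on remotely executed inference.

    `he_infer` stands in for the untrusted provider's inference endpoint
    (here a plain callable; in the patented setting the batch would be
    encrypted and packed into tile tensors before being sent).
    """
    rng = np.random.default_rng()
    batch = np.concatenate([real_samples, trigger_samples], axis=0)
    n = len(batch)

    perms, results = [], []
    # Send the same samples at least twice, each time under a fresh secret permutation.
    for _ in range(2):
        perm = rng.permutation(n)
        perms.append(perm)
        results.append(np.asarray(he_infer(batch[perm])))

    # Undo each secret permutation so results are realigned with the original order.
    aligned = []
    for perm, res in zip(perms, results):
        inv = np.empty(n, dtype=int)
        inv[perm] = np.arange(n)
        aligned.append(res[inv])

    n_real = len(real_samples)
    # Check 1: the two independently permuted runs must agree on every sample.
    runs_agree = np.allclose(aligned[0], aligned[1], atol=atol)
    # Check 2: trigger samples must reproduce the outputs the client already knows.
    triggers_ok = all(
        np.allclose(a[n_real:], expected_trigger_outputs, atol=atol)
        for a in aligned
    )

    compromised = not (runs_agree and triggers_ok)
    return compromised, aligned[0][:n_real]
```

Because each submission is permuted under a secret known only to the client, a provider that substitutes or tampers with the model cannot tell which entries are triggers or produce mutually consistent, trigger-correct results without actually running the agreed-upon inference, which is what this kind of check relies on.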