Splintering with distributions: A stochastic decoy scheme for private computation
Main Authors:
Format: Journal Article
Language: English
Published: 06.07.2020
DOI: 10.48550/arxiv.2007.02719
Summary: Performing computations while maintaining privacy is an important problem in today's distributed machine learning solutions. Consider the following two setups between a client and a server. In setup (i), the client has a public data vector $\mathbf{x}$, the server has a large private database of data vectors $\mathcal{B}$, and the client wants to find the inner products $\langle \mathbf{x}, \mathbf{y}_k \rangle, \forall \mathbf{y}_k \in \mathcal{B}$. The client does not want the server to learn $\mathbf{x}$, while the server does not want the client to learn the records in its database. In contrast, in setup (ii) the client would like to perform an operation solely on its own data, such as computing the inverse of its data matrix $\mathbf{M}$, but would like to use the superior computing ability of the server to do so without leaking $\mathbf{M}$ to the server.

We present a stochastic scheme for splitting the client data into privatized shares that are transmitted to the server in such settings. The server performs the requested operations on these shares instead of on the raw client data, and the intermediate results are sent back to the client, which assembles them to obtain the final result.
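The abstract's split, compute, and assemble pattern can be made concrete for the inner-product setup (i). Below is a minimal Python sketch under our own simplifying assumptions: $\mathbf{x}$ is split into additive shares $\mathbf{s}_1 + \mathbf{s}_2 = \mathbf{x}$ that are shuffled among random decoy vectors, the server computes inner products for every submitted vector, and the client sums the rows belonging to its real shares. The names (`splinter`, `assemble`, the decoy count) are illustrative; this is not the paper's actual splintering distributions or its privacy analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def splinter(x, n_decoys=6):
    """Toy share-plus-decoy split (illustrative, not the paper's scheme).

    x is split into additive shares s1 + s2 = x, which are hidden among
    random decoy vectors. Only the client records which slots are real.
    """
    d = x.shape[0]
    s1 = rng.standard_normal(d)
    s2 = x - s1
    batch = [s1, s2] + [rng.standard_normal(d) for _ in range(n_decoys)]
    perm = rng.permutation(len(batch))
    batch = [batch[i] for i in perm]
    # client-side secret: positions of the two real shares after shuffling
    real_idx = [int(np.where(perm == j)[0][0]) for j in (0, 1)]
    return batch, real_idx

def server_inner_products(batch, database):
    # server computes <v, y_k> for every submitted vector v and every y_k
    return np.stack([[float(v @ y) for y in database] for v in batch])

def assemble(results, real_idx):
    # client sums the result rows belonging to its two real shares
    return results[real_idx[0]] + results[real_idx[1]]

# usage: recover <x, y_k> for all y_k without sending x in the clear
x = rng.standard_normal(5)
database = [rng.standard_normal(5) for _ in range(3)]
batch, real_idx = splinter(x)
results = server_inner_products(batch, database)
print(np.allclose(assemble(results, real_idx), [x @ y for y in database]))  # True
```

Linearity does the work at assembly time, since $\langle \mathbf{s}_1, \mathbf{y}_k \rangle + \langle \mathbf{s}_2, \mathbf{y}_k \rangle = \langle \mathbf{x}, \mathbf{y}_k \rangle$, while the shuffled decoys keep the server from knowing which submitted vectors sum to $\mathbf{x}$.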
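For the outsourced matrix-inverse setup (ii), one standard way to hide $\mathbf{M}$, shown here as a generic sketch and not necessarily the paper's scheme, is multiplicative masking: the client sends $\mathbf{M}' = \mathbf{A}\mathbf{M}\mathbf{B}$ for secret random invertible $\mathbf{A}, \mathbf{B}$, the server returns $\mathbf{M}'^{-1}$, and the client recovers $\mathbf{M}^{-1} = \mathbf{B}\,\mathbf{M}'^{-1}\mathbf{A}$. All function names below are our own illustrations.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_invertible(n):
    """Sample a random matrix that is invertible with overwhelming probability."""
    while True:
        A = rng.standard_normal((n, n))
        if np.linalg.cond(A) < 1e8:  # reject near-singular draws
            return A

def mask(M):
    # client: hide M as M' = A @ M @ B with secret invertible A, B
    n = M.shape[0]
    A, B = random_invertible(n), random_invertible(n)
    return A @ M @ B, (A, B)

def server_invert(M_masked):
    # server: inverts the masked matrix without seeing M itself
    return np.linalg.inv(M_masked)

def unmask(Minv_masked, secrets):
    # client: (A M B)^{-1} = B^{-1} M^{-1} A^{-1}, so M^{-1} = B @ (...) @ A
    A, B = secrets
    return B @ Minv_masked @ A

M = rng.standard_normal((4, 4))
M_masked, secrets = mask(M)
print(np.allclose(unmask(server_invert(M_masked), secrets), np.linalg.inv(M)))  # True
```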