Differentially Private Model Selection with Penalized and Constrained Likelihood


Bibliographic Details
Published in: arXiv.org
Main Authors: Lei, Jing; Charest, Anne-Sophie; Slavkovic, Aleksandra; Smith, Adam; Fienberg, Stephen
Format: Paper
Language: English
Published: Ithaca: Cornell University Library, arXiv.org, 14.07.2016

Summary: In statistical disclosure control, the goal of data analysis is twofold: The released information must provide accurate and useful statistics about the underlying population of interest, while minimizing the potential for an individual record to be identified. In recent years, the notion of differential privacy has received much attention in theoretical computer science, machine learning, and statistics. It provides a rigorous and strong notion of protection for individuals' sensitive information. A fundamental question is how to incorporate differential privacy into traditional statistical inference procedures. In this paper we study model selection in multivariate linear regression under the constraint of differential privacy. We show that model selection procedures based on penalized least squares or likelihood can be made differentially private by a combination of regularization and randomization, and propose two algorithms to do so. We show that our private procedures are consistent under essentially the same conditions as the corresponding non-private procedures. We also find that under differential privacy, the procedure becomes more sensitive to the tuning parameters. We illustrate and evaluate our method using simulation studies and two real data examples.
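The abstract's core idea, combining a penalized model-selection score with randomization to obtain differential privacy, can be sketched generically. The snippet below is a minimal illustration, not the paper's actual algorithms: it selects a regression support set by applying the exponential mechanism to BIC-style penalized least-squares scores, and it assumes the analyst supplies a valid sensitivity bound for the score (bounding score sensitivity is exactly where regularization and data assumptions enter in practice).

```python
import numpy as np

def dp_select_model(X, y, candidate_supports, epsilon, sensitivity, rng=None):
    """Pick a regression support set via the exponential mechanism.

    Illustrative sketch only. `sensitivity` is an assumed bound on how much
    the penalized score can change when one record changes; deriving such a
    bound is the hard part and is not done here.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    scores = []
    for support in candidate_supports:
        Xs = X[:, support]
        beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta) ** 2)
        # BIC-style penalized least-squares score (lower is better)
        scores.append(n * np.log(rss / n) + len(support) * np.log(n))
    scores = np.array(scores)
    # Exponential mechanism: P(model) proportional to
    # exp(-epsilon * score / (2 * sensitivity)); shift by the minimum
    # score for numerical stability (shifts cancel in the normalization).
    logits = -epsilon * (scores - scores.min()) / (2.0 * sensitivity)
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidate_supports[rng.choice(len(candidate_supports), p=probs)]
```

As epsilon grows the draw concentrates on the best-scoring model (the non-private selector); as epsilon shrinks the choice approaches uniform over the candidates, which is the usual privacy–utility trade-off the abstract alludes to when noting the added sensitivity to tuning parameters.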
ISSN: 2331-8422