Differentially Private Iterative Gradient Hard Thresholding for Sparse Learning
Lingxiao Wang, Quanquan Gu
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence
Main track. Pages 3740-3747.
https://doi.org/10.24963/ijcai.2019/519
We consider the differentially private sparse learning problem, where the goal is to estimate the underlying sparse parameter vector of a statistical model in the high-dimensional regime while preserving the privacy of each training example. We propose a generic differentially private iterative gradient hard thresholding algorithm with a linear convergence rate and a strong utility guarantee. We demonstrate the superiority of our algorithm through two specific applications: sparse linear regression and sparse logistic regression. Specifically, for sparse linear regression, our algorithm achieves the best known utility guarantee without the extra support selection procedure used in previous work [Kifer et al., 2012]. For sparse logistic regression, our algorithm obtains a utility guarantee with only a logarithmic dependence on the problem dimension. Experiments on both synthetic data and real-world datasets verify the effectiveness of our proposed algorithm.
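To make the high-level description concrete, here is a minimal sketch of an iterative gradient hard thresholding loop for sparse linear regression with Gaussian gradient perturbation. This is an illustrative assumption-laden sketch, not the authors' exact algorithm: the function names, the step size `eta`, the iteration count `T`, and the noise scale `sigma` are all hypothetical, and in a real privacy analysis `sigma` would be calibrated to the target (epsilon, delta) budget and the gradient sensitivity.

```python
import numpy as np

def hard_threshold(v, s):
    """Keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def dp_ight_linear_regression(X, y, s, eta=0.1, T=50, sigma=0.5, seed=None):
    """Sketch of differentially private iterative gradient hard thresholding
    for sparse linear regression (least squares). Each iteration takes a
    noisy gradient step, then hard-thresholds to the top-s coordinates.
    NOTE: sigma is a placeholder; a real implementation must set it from
    the (epsilon, delta) budget and the per-example gradient sensitivity."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)
    for _ in range(T):
        grad = X.T @ (X @ theta - y) / n        # least-squares gradient
        noisy_step = theta - eta * (grad + sigma * rng.normal(size=d))
        theta = hard_threshold(noisy_step, s)   # project onto s-sparse vectors
    return theta
```

The hard thresholding step is what keeps every iterate s-sparse, which is also why the resulting utility bounds can depend only logarithmically on the ambient dimension.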
Keywords:
Machine Learning: Feature Selection; Learning Sparse Models
Multidisciplinary Topics and Applications: Security and Privacy