Private Learning and Sanitization: Pure vs. Approximate Differential Privacy
by Amos Beimel, Kobbi Nissim, and Uri Stemmer
Theory of Computing, Volume 12(1), pp. 1–61, 2016
Bibliography with links to cited articles
[1] Martin Anthony and John Shawe-Taylor: A result of Vapnik with applications. Discrete Applied Mathematics, 47(3):207–217, 1993. Erratum in Discrete Applied Mathematics. [doi:10.1016/0166-218X(93)90126-9]
[2] Martin Anthony and Peter L. Bartlett: Neural Network Learning: Theoretical Foundations. Cambridge Univ. Press, 2009.
[3] Amos Beimel, Hai Brenner, Shiva Prasad Kasiviswanathan, and Kobbi Nissim: Bounds on the sample complexity for private learning and private data release. Machine Learning, 94(3):401–437, 2014. Preliminary version in TCC’10. [doi:10.1007/s10994-013-5404-1]
[4] Amos Beimel, Kobbi Nissim, and Uri Stemmer: Characterizing the sample complexity of private learners. In Proc. 4th Innovations in Theoretical Computer Science Conf. (ITCS’13), pp. 97–110. ACM Press, 2013. [doi:10.1145/2422436.2422450, arXiv:1402.2224]
[5] Amos Beimel, Kobbi Nissim, and Uri Stemmer: Private learning and sanitization: Pure vs. approximate differential privacy. In Proc. 17th Internat. Workshop on Randomization and Computation (RANDOM’13), volume 8096 of Lecture Notes in Computer Science, pp. 363–378. Springer, 2013. [doi:10.1007/978-3-642-40328-6_26, arXiv:1407.2674]
[6] Avrim Blum, Cynthia Dwork, Frank McSherry, and Kobbi Nissim: Practical privacy: The SuLQ framework. In Proc. 24th Symp. on Principles of Database Systems (PODS’05), pp. 128–138. ACM Press, 2005. [doi:10.1145/1065167.1065184]
[7] Avrim Blum, Katrina Ligett, and Aaron Roth: A learning theory approach to noninteractive database privacy. J. ACM, 60(2):12, 2013. Preliminary version in STOC’08. [doi:10.1145/2450142.2450148]
[8] Anselm Blumer, Andrzej Ehrenfeucht, David Haussler, and Manfred K. Warmuth: Learnability and the Vapnik-Chervonenkis dimension. J. ACM, 36(4):929–965, 1989. [doi:10.1145/76359.76371]
[9] Kamalika Chaudhuri and Daniel Hsu: Sample complexity bounds for differentially private learning. In Proc. 24th Ann. Conf. on Learning Theory (COLT’11), volume 19, pp. 155–186. JMLR, 2011. Available at JMLR.
[10] Anindya De: Lower bounds in differential privacy. In 9th Theory of Cryptography Conf. (TCC’12), volume 7194 of Lecture Notes in Computer Science, pp. 321–338. Springer, 2012. [doi:10.1007/978-3-642-28914-9_18, arXiv:1107.2183]
[11] Cynthia Dwork, Krishnaram Kenthapadi, Frank McSherry, Ilya Mironov, and Moni Naor: Our data, ourselves: Privacy via distributed noise generation. In Proc. 25th Ann. Internat. Conf. on the Theory and Applications of Cryptographic Techniques (EUROCRYPT’06), volume 4004 of Lecture Notes in Computer Science, pp. 486–503. Springer, 2006. [doi:10.1007/11761679_29]
[12] Cynthia Dwork and Jing Lei: Differential privacy and robust statistics. In Proc. 41st STOC, pp. 371–380. ACM Press, 2009. [doi:10.1145/1536414.1536466]
[13] Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith: Calibrating noise to sensitivity in private data analysis. In 3rd Theory of Cryptography Conf. (TCC’06), volume 3876 of Lecture Notes in Computer Science, pp. 265–284. Springer, 2006. [doi:10.1007/11681878_14]
[14] Cynthia Dwork, Moni Naor, Omer Reingold, Guy N. Rothblum, and Salil P. Vadhan: On the complexity of differentially private data release: efficient algorithms and hardness results. In Proc. 41st STOC, pp. 381–390. ACM Press, 2009. [doi:10.1145/1536414.1536467]
[15] Cynthia Dwork, Guy N. Rothblum, and Salil P. Vadhan: Boosting and differential privacy. In Proc. 51st FOCS, pp. 51–60. IEEE Comp. Soc. Press, 2010. [doi:10.1109/FOCS.2010.12]
[16] Andrzej Ehrenfeucht, David Haussler, Michael J. Kearns, and Leslie G. Valiant: A general lower bound on the number of examples needed for learning. Inf. Comput., 82(3):247–261, 1989. Preliminary version in COLT’88. [doi:10.1016/0890-5401(89)90002-3]
[17] Vitaly Feldman and David Xiao: Sample complexity bounds on differentially private learning via communication complexity. In Proc. 27th Ann. Conf. on Learning Theory (COLT’14), pp. 1000–1019, 2014. Available at JMLR. [arXiv:1402.6278]
[18] Anupam Gupta, Moritz A. W. Hardt, Aaron Roth, and Jonathan Ullman: Privately releasing conjunctions and the statistical query barrier. SIAM J. Comput., 42(4):1494–1520, 2013. Preliminary version in STOC’11. [doi:10.1137/110857714]
[19] Moritz A. W. Hardt: A Study of Privacy and Fairness in Sensitive Data Analysis. Ph. D. thesis, Princeton University, 2011. Available at DataSpace.
[20] Moritz A. W. Hardt and Guy N. Rothblum: A multiplicative weights mechanism for privacy-preserving data analysis. In Proc. 51st FOCS, pp. 61–70. IEEE Comp. Soc. Press, 2010. [doi:10.1109/FOCS.2010.85]
[21] Moritz A. W. Hardt and Kunal Talwar: On the geometry of differential privacy. In Proc. 42nd STOC, pp. 705–714. ACM Press, 2010. [doi:10.1145/1806689.1806786]
[22] Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith: What can we learn privately? SIAM J. Comput., 40(3):793–826, 2011. Preliminary version in FOCS’08. [doi:10.1137/090756090]
[23] Michael J. Kearns: Efficient noise-tolerant learning from statistical queries. J. ACM, 45(6):983–1006, 1998. Preliminary version in STOC’93. [doi:10.1145/293347.293351]
[24] Frank McSherry and Kunal Talwar: Mechanism design via differential privacy. In Proc. 48th FOCS, pp. 94–103. IEEE Comp. Soc. Press, 2007. [doi:10.1109/FOCS.2007.66]
[25] Aaron Roth: Differential privacy and the fat-shattering dimension of linear queries. In Proc. 14th Internat. Workshop on Randomization and Computation (RANDOM’10), volume 6302 of Lecture Notes in Computer Science, pp. 683–695. Springer, 2010. [doi:10.1007/978-3-642-15369-3_51]
[26] Abhradeep Thakurta and Adam Smith: Differentially private feature selection via stability arguments, and the robustness of the lasso. In Proc. 26th Ann. Conf. on Learning Theory (COLT’13), volume 30, pp. 819–850. JMLR, 2013. Available at JMLR.
[27] Jonathan Ullman: Answering n2+o(1) counting queries with differential privacy is hard. In Proc. 45th STOC, pp. 361–370. ACM Press, 2013. [doi:10.1145/2488608.2488653]
[28] Jonathan Ullman and Salil P. Vadhan: PCPs and the hardness of generating private synthetic data. In 8th Theory of Cryptography Conf. (TCC’11), volume 6597 of Lecture Notes in Computer Science, pp. 400–416. Springer, 2011. [doi:10.1007/978-3-642-19571-6_24]
[29] Leslie G. Valiant: A theory of the learnable. Commun. ACM, 27(11):1134–1142, 1984. Preliminary version in STOC’84. [doi:10.1145/1968.1972]
[30] Vladimir N. Vapnik and Alexey Y. Chervonenkis: On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264–280, 1971. [doi:10.1137/1116025]