Published: December 12, 2009
Abstract:
Over the years, a range of positive algorithmic results have been obtained for learning various classes of monotone Boolean functions from uniformly distributed random examples. Prior to our work, however, the only negative result for learning monotone functions in this model was an information-theoretic lower bound showing that certain super-polynomial-size monotone circuits cannot be learned to accuracy $1/2 + \omega(\log n/\sqrt{n})$ (Blum, Burch, and Langford, FOCS ’98). This is in contrast with the situation for non-monotone functions, where a wide range of cryptographic hardness results establish that various “simple” classes of polynomial-size circuits are not learnable by polynomial-time algorithms.
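As a toy illustration of the uniform-distribution learning model discussed here (not the algorithm of Blum et al.), the Python sketch below labels uniformly random examples by the majority function, a stand-in monotone target, and measures the accuracy of the best single-coordinate hypothesis; for a monotone target every coordinate has nonnegative correlation with the label, so this simple hypothesis lands slightly above $1/2$.

import random

n = 101  # number of Boolean variables (odd, so majority has no ties)

def majority(x):
    """An example monotone Boolean function: MAJ_n."""
    return 1 if sum(x) > n // 2 else 0

def draw_examples(f, m):
    """Uniform-distribution examples: x uniform over {0,1}^n, labeled by f(x)."""
    xs = [[random.randint(0, 1) for _ in range(n)] for _ in range(m)]
    return [(x, f(x)) for x in xs]

def best_single_variable(train):
    """Illustrative weak learner: pick the coordinate whose value agrees
    most often with the label on the training sample."""
    best_i, best_agree = 0, -1.0
    for i in range(n):
        agree = sum(1 for x, y in train if x[i] == y) / len(train)
        if agree > best_agree:
            best_i, best_agree = i, agree
    return best_i

def accuracy(i, test):
    """Accuracy of the hypothesis h(x) = x_i under the uniform distribution."""
    return sum(1 for x, y in test if x[i] == y) / len(test)

train = draw_examples(majority, 5000)
test = draw_examples(majority, 5000)
i = best_single_variable(train)
print(f"hypothesis x_{i}: accuracy ~ {accuracy(i, test):.3f}")  # slightly above 1/2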
In this paper we establish cryptographic hardness results for learning various “simple” classes of monotone circuits, thereby giving a computational analogue of the information-theoretic hardness results of Blum et al. mentioned above. Some of our results show the cryptographic hardness of learning polynomial-size monotone circuits to accuracy only slightly greater than $1/2 + 1/\sqrt{n}$; this accuracy bound is close to optimal, by the positive results of Blum et al. Other results show that, under a plausible cryptographic hardness assumption, a class of constant-depth, sub-polynomial-size circuits computing monotone functions is hard to learn; this result is likewise close to optimal in terms of the circuit-size parameter, by a known positive result (Servedio, Information and Computation, 2004). Our main tool is a complexity-theoretic approach to hardness amplification via the noise sensitivity of monotone functions, pioneered by O’Donnell (JCSS, 2004).
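For context on the main tool: the noise sensitivity of a Boolean function $f\colon \{0,1\}^n \to \{0,1\}$ at noise rate $\delta \in (0, 1/2)$ is standardly defined as
\[
\mathrm{NS}_\delta(f) = \Pr_{x,\,y}\bigl[f(x) \neq f(y)\bigr],
\]
where $x$ is uniform over $\{0,1\}^n$ and $y$ is obtained from $x$ by flipping each coordinate independently with probability $\delta$.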