On the asymptotic performance of l_q-regularized least squares
Arian Maleki, Columbia
Abstract:
In many application areas, ranging from bioinformatics to imaging, we are faced with the following question: can we recover a sparse vector x of dimension p from an undersampled set of n noisy observations y = Ax + z? The last decade has witnessed a surge of algorithms that address this question. One of the most popular is l_q-regularized least squares, where 0 < q <= 2. Despite the non-convexity of these optimization problems for 0 < q < 1, they remain appealing because of their closer proximity to the “ideal” l_0-regularized least squares.
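For concreteness, the estimator can be written in the standard form used in this literature (the exact display may differ slightly from the one used in the talk):

    x_hat(lambda, q) = argmin_x  (1/2) ||y - Ax||_2^2 + lambda * sum_{i=1}^{p} |x_i|^q,    0 < q <= 2,

where lambda > 0 is the regularization parameter and the penalty is the q-th power of the l_q quasi-norm of x.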
In this talk, we adopt the asymptotic framework in which p tends to infinity and n/p tends to a constant, and we analyze the properties of the global minimizer of this problem under optimal tuning of the regularization parameter. Our goal is to answer the following questions: (i) Do non-convex regularizers outperform convex regularizers? (ii) Does q = 1 outperform other convex optimization problems when the vector x is sparse?
We discuss both the predictive power and the variable selection accuracy of these algorithms. If time permits, we also discuss algorithms that can provably reach the global minima of the non-convex problems in certain regimes.
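To make the setup concrete, the following is a minimal Python/NumPy sketch of the q = 1 (convex) case, solved with proximal gradient descent (ISTA); the dimensions, sparsity level, noise level, and regularization parameter are illustrative assumptions, not values from the talk.

# Minimal sketch of the setup: y = A x + z with a sparse x, n/p fixed,
# specialized to q = 1 (LASSO) and solved with ISTA (proximal gradient).
# All numerical choices below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

p, delta, eps = 1000, 0.5, 0.1      # ambient dimension, n/p ratio, sparsity fraction
n = int(delta * p)                  # number of observations
k = int(eps * p)                    # number of nonzeros in x

# Sparse signal, Gaussian design with (roughly) unit-norm columns, noisy observations
x_true = np.zeros(p)
x_true[rng.choice(p, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(n, p)) / np.sqrt(n)
z = 0.1 * rng.normal(size=n)
y = A @ x_true + z

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for min_x 0.5*||y - Ax||_2^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - y)), step * lam)
    return x

x_hat = ista(A, y, lam=0.05)
print("relative estimation error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
print("selected support size:", np.count_nonzero(x_hat))

For 0 < q < 1 the objective is non-convex, and a descent scheme of this kind is only guaranteed to reach a local minimum; this is precisely why the analysis of the global minimizer, and of algorithms that provably reach it, is of interest.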
This talk is based on joint work with Haolei Weng, Shuaiwen Wang, and Le Zheng.