Topics in sparse approximation
Sparse approximation problems ask for a good approximation of an input signal as a linear combination of elementary signals, yet they stipulate that the approximation may involve only a few of the elementary signals. This class of problems arises throughout applied mathematics, statistics, and electrical engineering, but little theoretical progress has been made over the last fifty years. This dissertation offers four main contributions to the theory of sparse approximation.

The first two contributions concern the analysis of two types of numerical algorithms for sparse approximation: greedy methods and convex relaxation methods. Greedy methods make a sequence of locally optimal choices in an effort to obtain a globally optimal solution. Convex relaxation methods replace the combinatorial sparse approximation problem with a related convex optimization problem in the hope that the two solutions coincide. This work delineates conditions under which greedy methods and convex relaxation methods actually succeed in solving a well-defined sparse approximation problem, in part or in full. The conditions for the two classes of algorithms are remarkably similar, even though the two analyses differ significantly.

The study of these algorithms yields geometric conditions on the collection of elementary signals which ensure that sparse approximation problems are computationally tractable. One may interpret these conditions as a requirement that the elementary signals form a good packing of points in projective space. The third contribution of this work is an alternating projection algorithm that can produce good packings of points in projective space. The output of this algorithm frequently matches the best recorded solutions of projective packing problems. It can also address many related packing problems that have never been studied numerically.

Finally, the dissertation develops a novel connection between sparse approximation problems and clustering problems.
This perspective shows that many clustering problems from the literature can be viewed as sparse approximation problems where the collection of elementary signals must be learned along with the optimal sparse approximation. This treatment also yields many novel clustering problems, and it leads to a numerical method for solving them.
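To make the greedy strategy concrete, the sketch below implements orthogonal matching pursuit, one standard greedy method of the kind the abstract describes: at each step it makes the locally optimal choice of elementary signal, then re-fits the approximation. This is a minimal illustration, not the dissertation's specific algorithms or analysis; the orthonormal dictionary used in the example is an assumption chosen so that exact recovery is guaranteed.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit (sketch): greedily pick k columns of
    the dictionary D (the elementary signals) to approximate the signal y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Locally optimal choice: the column most correlated with the residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Re-fit by least squares on the chosen columns, update the residual.
        coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coeffs
    x = np.zeros(D.shape[1])
    x[support] = coeffs
    return x

# Example: recover a 2-sparse combination of orthonormal elementary signals.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((16, 16)))  # orthonormal columns
x_true = np.zeros(16)
x_true[[2, 9]] = [1.5, -2.0]
y = Q @ x_true
x_hat = omp(Q, y, k=2)
print(np.allclose(x_hat, x_true))  # → True
```

With an overcomplete dictionary, exact recovery is no longer automatic; it holds precisely under geometric conditions on the dictionary of the kind the abstract mentions, such as low coherence between the elementary signals.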