A Mathematical Introduction to Compressive Sensing (Applied and Numerical Harmonic Analysis)
At the intersection of mathematics, engineering, and computer science sits the thriving field of compressive sensing. Based on the premise that data acquisition and compression can be performed simultaneously, compressive sensing finds applications in imaging, signal processing, and many other domains. In applied mathematics, electrical engineering, and theoretical computer science, an explosion of research activity has followed the theoretical results that first highlighted the efficiency of the basic principles. The elegant ideas behind these principles are also of independent interest to pure mathematicians.
A Mathematical Introduction to Compressive Sensing gives a detailed account, from a mathematical perspective, of the core theory upon which the field is built. With only moderate prerequisites, it is an excellent textbook for graduate courses in mathematics, engineering, and computer science. It also serves as a reliable resource for practitioners and researchers in these disciplines who want a careful understanding of the subject.
Why Read This Book
You should read this book if you want a clear, mathematically rigorous foundation for sparse-signal recovery and compressive sensing algorithms so you can reason about guarantees (stability, robustness, sampling rates) rather than just run black-box code. It connects core theorems (RIP, null-space/coherence conditions) to practical reconstruction methods like l1 minimization and greedy algorithms, enabling you to design or assess sensing systems with provable performance.
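To make the setting concrete before diving in: the prototypical problem is recovering a sparse vector from far fewer linear measurements than its ambient dimension, by solving the basis pursuit program min ||x||_1 subject to Ax = y. The sketch below (not from the book; dimensions, seed, and the LP reformulation in auxiliary variables t with |x_i| <= t_i are illustrative choices) shows the idea with a Gaussian sensing matrix and SciPy's linear-programming solver.

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(A, y):
    """Solve min ||x||_1 s.t. A x = y as a linear program.

    Variables are (x, t) with constraints -t <= x <= t and t >= 0,
    so minimizing sum(t) minimizes the l1 norm of x.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
    I = np.eye(n)
    A_ub = np.block([[I, -I], [-I, -I]])            # encodes |x_i| <= t_i
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # measurement constraint A x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

# Illustrative instance: a 3-sparse signal in dimension 50,
# observed through 25 Gaussian measurements.
rng = np.random.default_rng(1)
m, n, k = 25, 50, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = [1.0, -2.0, 0.5]
y = A @ x
x_hat = basis_pursuit(A, y)
```

With m well above the order-k log(n/k) threshold the book derives, the l1 minimizer coincides with the true sparse signal with high probability.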
Who Will Benefit
Graduate students, researchers, and DSP engineers who need a rigorous theoretical grounding in sparse recovery and compressive sensing to design or analyze sensing systems and algorithms.
Level: Advanced — Prerequisites: Linear algebra (bases, norms, singular values), basic convex optimization (convex sets, linear programming, l1 minimization), undergraduate probability (concentration inequalities helpful), and familiarity with Fourier transforms and basic DSP concepts.
Key Takeaways
- Understand the sparsity model and formalize what it means for a signal to be recoverable from undersampled linear measurements.
- Prove and apply core sufficient/necessary conditions for recovery such as the Restricted Isometry Property (RIP), null-space property, and coherence bounds.
- Analyze random measurement ensembles (Gaussian, subgaussian, partial Fourier) and derive sampling-rate guarantees using concentration results.
- Use and justify convex relaxation methods (basis pursuit / l1 minimization) for exact and stable recovery, including in the presence of noise and compressible signals.
- Compare and analyze greedy and iterative algorithms (e.g., matching pursuit, orthogonal matching pursuit, iterative thresholding) and understand their performance trade-offs.
- Translate theoretical conditions into practical sensing-matrix design and understand limitations and extensions (structured sparsity, model-based CS).
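As a complement to the convex-relaxation route, the greedy algorithms in the takeaways above can be illustrated with orthogonal matching pursuit (OMP), which builds the support one index at a time and re-fits by least squares. This is a minimal sketch under assumed dimensions and a Gaussian sensing matrix, not the book's pseudocode.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily estimate a k-sparse x with A x = y."""
    m, n = A.shape
    residual = y.copy()
    support = []
    for _ in range(k):
        # Select the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the selected support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(n)
    x_hat[support] = coef
    return x_hat

# Illustrative instance: 3-sparse signal, 50 Gaussian measurements in dimension 100.
rng = np.random.default_rng(0)
m, n, k = 50, 100, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
y = A @ x
x_hat = omp(A, y, k)
```

OMP needs only k least-squares solves, which is why the book's performance trade-off chapters compare its (somewhat weaker) guarantees against the stronger ones available for l1 minimization.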
Topics Covered
- 1. Introduction and motivation: sparsity and compressed acquisition
- 2. Sparse vectors, models, and measures of sparsity
- 3. l0 vs l1: convex relaxation and basis pursuit
- 4. Coherence and the null-space property
- 5. The Restricted Isometry Property (RIP) and its consequences
- 6. Random matrices and probabilistic RIP guarantees
- 7. Stable recovery and compressible signals (noisy measurements)
- 8. Greedy algorithms and iterative thresholding methods
- 9. Structured sparsity and extensions (block-sparsity, model-based CS)
- 10. Connections to approximation theory and uncertainty principles
- 11. Practical aspects, sensing-matrix design, and brief applications
- Appendix: useful tools from probability, convex analysis, and linear algebra
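One quantity that threads through several of these chapters is the mutual coherence of a sensing matrix, the largest absolute inner product between distinct normalized columns. A small sketch (the spikes-plus-sinusoids dictionary and the dimension n = 16 are illustrative choices, not the book's example) computes it for the concatenation of the identity and the unitary DFT matrix, where the coherence equals 1/sqrt(n).

```python
import numpy as np

def mutual_coherence(A):
    """Largest |<a_i, a_j>| over distinct column pairs, after normalizing columns."""
    B = A / np.linalg.norm(A, axis=0)
    G = np.abs(B.conj().T @ B)       # Gram matrix of normalized columns
    np.fill_diagonal(G, 0.0)         # ignore the trivial diagonal entries
    return G.max()

n = 16
F = np.fft.fft(np.eye(n)) / np.sqrt(n)   # unitary DFT matrix (orthonormal columns)
A = np.hstack([np.eye(n), F])            # spikes-and-sinusoids dictionary
mu = mutual_coherence(A)                 # equals 1/sqrt(n) = 0.25 here
```

Low coherence is the simplest sufficient condition for sparse recovery covered in the coherence chapter, though the RIP-based results later in the book give much sharper sampling-rate guarantees.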
How It Compares
More focused and theorem-driven than the edited volume 'Compressed Sensing: Theory and Applications' (Eldar & Kutyniok), and more rigorous/theoretical than Elad's 'Sparse and Redundant Representations', which emphasizes dictionaries and practical algorithms.