Compressed Sampling

May 7, 2009

Compressed Sampling (CS) is an interesting technology that I’ve been investigating over the past week. CS is a relatively new sampling paradigm that allows near-perfect signal reconstruction from far fewer samples than the Shannon/Nyquist sampling theorem suggests. This is particularly interesting when the signal is not band-limited, or when signal acquisition is far more costly than the computational effort required to compress it. Mathematically speaking, there isn’t much new happening here vis-à-vis the usual sample-transform-compress approach to signal compression (and reconstruction, with the transform reversed). Algorithmically, however, it is definitely a paradigm shift.

Suppose the sampled/discrete signal has length $n$; that is, suppose $s \in \mathbb{R}^n$ is the (sampled) signal of interest. Then the signal can be represented in terms of an appropriate linear transformation, defined on a suitable basis. Let $\Phi$ be the matrix whose columns are orthonormal basis functions. Then we can express $s$ in terms of $\Phi$ as

$s \,=\, \Phi \mathbf{c}$

where $\mathbf{c}$ is the vector of weighting coefficients. Given a fixed orthonormal basis (though this can be relaxed), the transformation problem is to find the corresponding coefficients. This transform representation can then be used for compression, and even classification, of signals. In the traditional compression model, the signal of interest is sampled at the Nyquist rate (if it is an analog signal), transformed to a different domain (using Fourier basis functions, for instance) by explicitly solving a fairly large linear operation (either a matrix multiplication or an integration), and finally compressed by dropping non-informative basis components, i.e., by thresholding the corresponding basis coefficients. This usually yields a very concise representation with minimal perceptual loss. This approach to signal compression has been quite successful – MP3, JPEG, and JPEG2000 are examples.
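To make the sample-transform-threshold pipeline concrete, here is a minimal numpy sketch (the orthonormal DCT-II basis and the synthetic two-component signal are my own illustrative choices, not from the text): the coefficients are obtained as $\mathbf{c} = \Phi^T s$, all but the largest-magnitude entries are zeroed, and the signal is reconstructed as $\hat{s} = \Phi \mathbf{c}$.

```python
import numpy as np

def dct_basis(n):
    """Orthonormal DCT-II basis: column k is the k-th cosine basis vector."""
    j = np.arange(n)[:, None]          # sample index
    k = np.arange(n)[None, :]          # frequency index
    Phi = np.cos(np.pi * (2 * j + 1) * k / (2 * n))
    Phi[:, 0] *= np.sqrt(1.0 / n)      # scaling that makes the columns orthonormal
    Phi[:, 1:] *= np.sqrt(2.0 / n)
    return Phi

n = 256
Phi = dct_basis(n)

# Synthetic signal built from two basis components, so it is exactly 2-sparse.
s = 3.0 * Phi[:, 10] - 2.0 * Phi[:, 40]

c = Phi.T @ s                          # transform: s = Phi c  =>  c = Phi^T s
keep = np.argsort(np.abs(c))[-2:]      # indices of the 2 largest coefficients
c_thr = np.zeros(n)
c_thr[keep] = c[keep]                  # threshold: drop non-informative components
s_hat = Phi @ c_thr                    # reconstruct from the compressed representation

rel_err = np.linalg.norm(s - s_hat) / np.linalg.norm(s)
```

With a real signal the discarded coefficients are small but nonzero, so thresholding trades a slight reconstruction error for a much shorter description; here the toy signal is exactly sparse, so the reconstruction is essentially perfect.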

The CS approach to signal compression is quite interesting. The important point about the traditional model is that it potentially requires acquiring a large number of samples and then applying the transform operator to them. Real-world signals are usually sparse: the number of significant coefficients in the transform domain is much smaller than the number of samples. The question then arises – since the number of significant components is much lower than the signal length, can an adequate signal representation be obtained with far fewer samples? The answer, surprisingly, turns out to be yes. A number of elegant theorems show that for adequately sparse signals, the sampling rate can be reduced significantly without hampering reconstruction. The idea is to shift the “burden of information” from the sampling side to the transform side. Rather than obtain $n$ samples and then solve an integral or matrix operator equation, we sample a smaller number of data points, but then search through the space of possible coefficient vectors that can adequately weight the chosen basis functions so that the signal of interest can be reconstructed. This is the essence of CS.
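As an illustrative sketch of that search (my own example, not from the post): take $m \ll n$ random linear measurements $y = A x$ of a $k$-sparse coefficient vector $x$, then greedily hunt for the nonzero coefficients with orthogonal matching pursuit – one of several standard recovery algorithms; the CS literature more often poses this as $\ell_1$ minimization. All dimensions and names below are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily build a k-sparse x with A @ x ≈ y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the column most correlated with what is still unexplained.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Re-fit the coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                       # signal length, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
y = A @ x_true                             # only m = 64 "samples" of a length-256 signal

x_hat = omp(A, y, k)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Note that the acquisition step is just $m$ inner products; all the work of finding which basis components matter happens at reconstruction time, which is exactly the shift of burden described above.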

An interesting aspect for me is that the computational burden of computing a linear operation on a high-dimensional signal vector is replaced by a “simulation”-type approach, whereby different candidate solutions to the basis-coefficient search problem are tried and discarded in favor of better ones. This could have quite important implications for the compression of analog signals, and might even be something to look into. It also appears that there might be some deep links to model selection.

I first came across Compressive Sampling (CS) in Emmanuel Candès, Justin Romberg, and Terence Tao’s paper in the IEEE Transactions on Information Theory, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information”. However, my interest has been piqued more recently by some conversations I’ve been having at work and outside. The links below have extensive bibliographies, tutorials, and information about CS –

http://www.acm.caltech.edu/l1magic/index.htm

http://www-dsp.rice.edu/cs

There are a number of really good introductions and tutorials on CS. A couple of easily accessible ones are –

1. Compressive Sensing

2. An introduction to compressive sampling.

Also, Terence Tao has a really good description of CS on his blog. Check it out!