DSPRelated.com
Forums

Floating Point Data Compression?

Started by DigitalSignal November 6, 2008
Hi there, A quick question: Is there any way to compress the single
point floating point data? Apparently most of the research and
development work focuses on fixed point compression.

James
www.go-ci.com
DigitalSignal wrote:
> Hi there, A quick question: Is there any way to compress the single
> point floating point data? Apparently most of the research and
> development work focuses on fixed point compression.
What do you want to achieve? Range compression? Storage reduction?

Jerry
--
Engineering is the art of making what you want from things you can get.
On Nov 6, 5:19 pm, DigitalSignal <digitalsignal...@yahoo.com> wrote:
> Hi there, A quick question: Is there any way to compress the single
> point floating point data? Apparently most of the research and
> development work focuses on fixed point compression.
>
> James
> www.go-ci.com
Isn't this what is called "scalar quantization"?
On Nov 6, 5:19 pm, DigitalSignal <digitalsignal...@yahoo.com> wrote:
> Hi there, A quick question: Is there any way to compress the single
> point floating point data? Apparently most of the research and
> development work focuses on fixed point compression.
>
> James
> www.go-ci.com
James,

Do you mean single "precision" floating-point data? A lot of the work
compares compressed data rates to fixed-point quantized data rates, but
that does not mean that the data to be compressed was not floating point
before it was compressed (or quantized).

Dirk
DigitalSignal wrote:

> Hi there, A quick question: Is there any way to compress the single
> point floating point data? Apparently most of the research and
> development work focuses on fixed point compression.
Fixed point data that doesn't fill up its range compresses well with many
algorithms. Floating point data with noise in the low order bits won't
compress well. The results depend on the statistics of the bits,
independent of their source.

-- glen
Well, to reduce the number of bits you need to store, you could throw away
some mantissa bits.  That's probably the simplest form of compression, but
it is lossy.
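For example, here is a minimal sketch of that idea in Python with NumPy
(not from the original posts; the helper name and the choice of 12 kept
bits are purely illustrative). It views each float32 word as an integer
and masks off the low-order mantissa bits, so they cost nothing once a
general-purpose compressor sees the stream:

import numpy as np

def truncate_mantissa(x, keep_bits=12):
    # Zero the lowest (23 - keep_bits) mantissa bits of float32 samples.
    drop = 23 - keep_bits
    mask = np.uint32(0xFFFFFFFF ^ ((1 << drop) - 1))
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & mask).view(np.float32)

samples = np.random.randn(1000).astype(np.float32)
rounded = truncate_mantissa(samples, keep_bits=12)
# The relative error is bounded by roughly 2**-keep_bits.
print(np.max(np.abs((samples - rounded) / samples)))

The loss is a bounded relative error rather than a bounded absolute error,
which is often acceptable for wide-dynamic-range measurement data.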
On Thu, 06 Nov 2008 14:19:42 -0800, DigitalSignal wrote:

> Hi there, A quick question: Is there any way to compress the single
> point floating point data? Apparently most of the research and
> development work focuses on fixed point compression.
>
> James
> www.go-ci.com
Yes. Set all the values in your vector to zero. Then transmit the number
of samples in your vector.

Clarify your question and maybe you'll get a meaningful answer. Lossy?
Lossless? Any specific type of input data, such as still pictures, video,
generic audio or voice?

There are any number of lossy compression algorithms that are just as
meaningful with floating point data as the source stream as with fixed
point; but if you're talking lossless compression then you're pretty much
down to the algorithms you find in zip, and their aunts, uncles, cousins
and in-laws.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com

Do you need to implement control loops in software?
"Applied Control Theory for Embedded Systems" gives you just what it says.
See details at http://www.wescottdesign.com/actfes/actfes.html
On 6 Nov, 23:19, DigitalSignal <digitalsignal...@yahoo.com> wrote:
> Hi there, A quick question: Is there any way to compress the single
> point floating point data?
It depends entirely on the statistics of the data. Plain text is easy to
compress because there are few cases to consider (256 characters) and the
probability of occurrence is far from uniform over the alphabet. It might
be that the entropy of FP data, when viewed per byte or 16-bit word, is so
high that 'naive' text compression algorithms don't work well.

Rune
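One quick way to check that on your own data is to estimate the per-byte
Shannon entropy of the raw stream. This is only a rough sketch (Python
with NumPy; the helper name is illustrative): values near 8 bits/byte mean
a byte-oriented compressor has very little to work with, and for typical
float32 signals only the sign/exponent byte tends to come out low.

import numpy as np

def byte_entropy(arr):
    # Shannon entropy, in bits per byte, of the raw byte stream.
    counts = np.bincount(np.frombuffer(arr.tobytes(), dtype=np.uint8),
                         minlength=256)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum())

noise = np.random.randn(100000).astype(np.float32)   # noise-like signal
tone = np.sin(np.linspace(0.0, 200.0, 100000)).astype(np.float32)
print(byte_entropy(noise), byte_entropy(tone))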
Sorry, I should make it clearer. We are trying to find a way to compress
single precision floating point data streams losslessly. As a general
case, the data acquisition system stores time domain data up to a few
gigabytes. It is expensive to store the data on the portable device and
slow to transfer it.

James
www.go-ci.com
DigitalSignal wrote:

> Sorry, I should make it clearer. We are trying to find a way to compress
> single precision floating point data streams losslessly. As a general
> case, the data acquisition system stores time domain data up to a few
> gigabytes. It is expensive to store the data on the portable device and
> slow to transfer it.
To compress it you (or a compression program) have to find some pattern
to the data such that it can be coded more efficiently. For fixed point
data that pattern will often be high order zero bits. LZW and related
algorithms will usually find them and compress them out fairly well. If,
for example, you stored 12 bit random data in 16 bit words, LZW would
compress that down pretty close to 12 bits each.

You say lossless, but in most cases there has already been loss in the
conversion/arithmetic operations on floating point data. Reasonably often
there is no useful information in the low bits of a floating point value,
but you and the compression algorithm don't know that.

-- glen
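As a rough illustration of those points (a sketch, not a benchmark; zlib
stands in for the zip-family coders mentioned above, and the sizes and
names are only for demonstration):

import zlib
import numpy as np

rng = np.random.default_rng(0)

# 12-bit random samples stored in 16-bit words: the top four bits are zero.
fixed12 = rng.integers(0, 1 << 12, size=100000).astype(np.uint16)

# float32 samples whose low-order mantissa bits look like noise.
floats = rng.standard_normal(100000).astype(np.float32)

def ratio(raw):
    # Compressed size over original size for a zip-style coder.
    return len(zlib.compress(raw, 9)) / len(raw)

print("12-bit data in 16-bit words:", ratio(fixed12.tobytes()))  # well below 1
print("float32, bytes as stored:  ", ratio(floats.tobytes()))    # near 1

# Regroup the four bytes of every float by position, so the relatively
# predictable sign/exponent bytes sit next to each other.
shuffled = floats.view(np.uint8).reshape(-1, 4).T.copy()
print("float32, bytes regrouped:  ", ratio(shuffled.tobytes()))

Regrouping the bytes by position sometimes helps a general-purpose
compressor, because the sign/exponent bytes of a band-limited signal are
far more predictable than the mantissa bytes; whether it pays off depends
entirely on the statistics of your data, as noted above.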