adpcm, distortion, and dithering
Started by ● March 3, 2005

We're using the Intel/DVI ADPCM algorithm. It works well, but it distorts the audio beyond our specification: greater than 7% THD+N at frequencies above 2 kHz. I am thinking that adding triangular dither to the signal before it is compressed may help (we are working with 16-bit samples). When the least significant bit of each sample is toggled among -1, 0, and +1, is this a "random" operation, or is it periodic?
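For concreteness, here is a minimal sketch of the kind of dithering I have in mind, in plain C. `adpcm_encode_sample` is a hypothetical stand-in for the actual DVI/ADPCM encoder entry point (the real codec packs 4-bit codes and carries predictor state, which is omitted here), and `rand()` stands in for a proper noise source:

    #include <stdint.h>
    #include <stdlib.h>

    /* Hypothetical encoder entry point -- stands in for the real
     * DVI/ADPCM encoder call. */
    extern uint8_t adpcm_encode_sample(int16_t sample);

    /* Triangular (TPDF) dither of +/-1 LSB: the sum of two independent
     * 1-bit uniform draws takes the values -1, 0, +1 with probabilities
     * 1/4, 1/2, 1/4, i.e. a triangular distribution. */
    static int16_t add_tpdf_dither(int16_t sample)
    {
        int d = (rand() & 1) + (rand() & 1) - 1;   /* -1, 0, or +1 */
        int32_t s = (int32_t)sample + d;

        /* Saturate to 16 bits so full-scale samples don't wrap. */
        if (s > INT16_MAX) s = INT16_MAX;
        if (s < INT16_MIN) s = INT16_MIN;
        return (int16_t)s;
    }

    /* Dither each sample immediately before compression. */
    void encode_block(const int16_t *in, uint8_t *out, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            out[i] = adpcm_encode_sample(add_tpdf_dither(in[i]));
    }

On an embedded target, the low bits of rand() could be replaced by a small LFSR or any other cheap noise generator.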
----------------------------------
Zero Crossings, Inc. -- Embedded and Digital Signal Processing Systems
http://www.zerocrossings.com/