Reply by Martin Eisenberg August 7, 2006
terrel_b@yahoo.com wrote:

> I have two 16-bit wav files and am using Microsoft DirectSound
> to play and mix the files. I'm able to "mix" the two audio files
> by simply adding the wav data values together. I have a problem
> however, when attempting to "fade" from one audio file to the
> other. Algorithm is listed below:
>
> Get two bytes of WAV data from the "right file"
> Put second byte into uint (32 bit) and bit-shift 8 bits to the
> left Put first byte into temp uint value
> rightValue (uint) = first uint value | second uint value
Independent of my other reply, the words in a 16-bit wave file are
twos-complement signed!

Martin

--
Quidquid latine scriptum sit, altum viditur.
Reply by Martin Eisenberg August 7, 2006
terrel_b@yahoo.com wrote:

> // this doesn't work no matter how I try to do it
> // faderValue ranges from +100 to -100
> uint (32 bit) addedValue = rightValue * (faderValue * .01) +
> leftValue * (AbsoluteValue(faderValue) * .01)
The weights for the two files should be nonnegative and add up to 1
at any moment. Exactly how to get there depends on what the fader
value means to the user.

Martin

--
Quidquid latine scriptum sit, altum viditur.
Reply by terr...@yahoo.com August 7, 2006
I have two 16-bit wav files and am using Microsoft DirectSound to play
and mix the files. I'm able to "mix" the two audio files by simply
adding the wav data values together. I have a problem, however, when
attempting to "fade" from one audio file to the other. The algorithm is
listed below:

Get two bytes of WAV data from the "right file"
Put second byte into uint (32 bit) and bit-shift 8 bits to the left
Put first byte into temp uint value
rightValue (uint) = first uint value | second uint value

repeat above steps for the "left file"

// this works for the most part, however there's a bit of distortion
// which dithering doesn't seem to fix
add values together

// this doesn't work no matter how I try to do it
// faderValue ranges from +100 to -100
uint (32 bit) addedValue  = rightValue * (faderValue * .01) + leftValue
* (AbsoluteValue(faderValue) * .01)

add random 1-bit dither value to the value

This results in massive distortion; I can barely make out either of the
audio files at all. Additionally, I've tried to normalize the data
values in the hope of getting some of the lower-frequency values out to
avoid the distortion, but to no avail. I realize that the
multiplication, rounding, etc. aren't necessarily good for digital
audio, but I've got to think there's a way to do this with 16-bit
files. Or are 16-bit files just not precise enough for this sort of
thing?

Thanks in advance.

T