FFT convolution giving different results from convolution

Started by Michel Rouzic July 25, 2005
I read the chapter about FFT convolution (chapter 18) on dspguide.com,
and from what I understood, to perform an FFT convolution you perform
an FFT on your filter kernel and on your signal (after you have changed
their size and zero-padded them), you multiply each real and imaginary
part of the two signals, and then you perform an IFFT. With that you're
supposed to get the same result as with a normal convolution. Did I get
it right?

Because the problem is, when I perform an FFT on a delta function (the
first value of the array being 1 and all others being zero), every
sample of the real part equals one as expected, but every sample of the
imaginary part equals zero, and this is where I stop understanding how
an FFT convolution can give the same result as a normal convolution.

If I shift the delta function by one (I mean, if the 1 is on the second
value of the array), performing its FFT shows something wavy, and
performing an FFT convolution with such a kernel gives a result quite
far from what a normal convolution would have given, since that would
have given the same signal as the input, except shifted by one sample.

I'm pretty sure there's something wrong in what I do, but I couldn't
find what it was from reading that chapter on dspguide.com.

Michel Rouzic wrote:
> I read the chapter about FFT convolution (chapter 18) on dspguide.com
> and from what I understood, to perform a FFT convolution, you perform a
> FFT on your filter kernel and on your signal (after you have changed
> their size and zero padded), you multiply each real and imaginary part
> of the two signals,
I suspect here is your error. If H is the FFT of the filter, and X is
the FFT of the input signal, you have to implement a complex
multiplication, i.e.

H[k] X[k] = Re[H[k]] Re[X[k]] - Im[H[k]] Im[X[k]]
            + j (Re[H[k]] Im[X[k]] + Im[H[k]] Re[X[k]])

Regards,
Andor

Michel Rouzic skrev:
> I read the chapter about FFT convolution (chapter 18) on dspguide.com
> and from what I understood, to perform a FFT convolution, you perform a
> FFT on your filter kernel and on your signal (after you have changed
> their size and zero padded), you multiply each real and imaginary part
> of the two signals, and then you perform an IFFT, and with that you're
> supposed to get the same result as with a normal convolution, did I get
> it right?
Almost. It may be a case of imprecise language, but I interpret the
sentence "multiply the real and imaginary parts of the two signals" as

z1 = re1 + j*im1
z2 = re2 + j*im2

z1*z2 = re1*re2 + j*im1*im2.

If this is what you do, it is wrong. You should perform complex
multiplication as follows:

z1*z2 = (re1 + j*im1)(re2 + j*im2)
      = re1*re2 - im1*im2 + j*(re1*im2 + re2*im1)

or something like that (I worked this out in my head, which is >99%
certain to introduce flaws in the formula).
> Because the problem is, when I perform a FFT on a delta function (the
> first value of the array being 1 and all others being zero), every
> sample of the real part equals one as expected, but every sample of the
> imaginary part are equal to zero, and this is where I stop understand
> how a FFT convolution can give the same result as a normal convolution.
If you see this as a multiplication of complex numbers, everything
makes sense: convolution with the Kronecker delta in the time domain
leaves the signal unchanged. Multiplying each coefficient in the
spectrum by 1 leaves the spectrum unchanged.
> If I shift the delta function by one (i mean if the 1 is on the second
> value of the array), performing its FFT shows something wavy,
How do you visualize the FFT result? Plotting the real part? Plotting
the imaginary part? Plotting both? Try to convert the spectrum to polar
form,

z = re + j*im = sqrt(re^2 + im^2) * exp(j*atan(im/re))

and plot the magnitude and phase. Does it look better to you now?
> and
> performing a FFT convolution with such a kernel gives a result quite
> far from what it would have given with a normal convolution, since it
> would have given the same signal as in input, except that it would be
> shifted by one sample.
If you do things correctly, the results should be equal.
> I'm pretty sure there's something wrong in what I do, but I couldn't
> find what was wrong from reading that chapter from dspguide.com
It seems to me that you understand the concepts. I can't find anything
obviously wrong in how you think and what you do; you obviously know
what to expect from each example, but you don't get those results. The
one thing that is slightly "off" in your post is your remark on how you
multiply the two spectra. It may be a matter of imprecise language, but
it might be where the error is. I suggest you make sure you use complex
multiplication in your program, as I explained above. If it still
doesn't work, please post the code you use for the test.

Rune
Oh, I hadn't understood that it was a complex multiplication, so all I
did was multiply each value from, let's say, H with the matching value
in X. Makes sense, even though my problem is that I've never dealt with
complex multiplication before.

> Almost. It may be a case of impresice language, but I interpret
> the sentence "multiply the real and imaginary parts of the
> two signals" as
>
> z1 = re1 + j*im1
> z2 = re2 + j*im2
>
> z1*z2 = re1*re2 + j*im1*im2.
Um... I'm not sure what j is, but yeah, I guess that's what I did.
> H[k] X[k] = Re[H[k]] Re[X[k]] - Im[H[k]] Im[X[k]] + j (Re[H[k]]
> Im[X[k]] + Im[H[k]] Re[X[k]])
Um... if I understand right, I must put Re[H[k]] Re[X[k]] - Im[H[k]]
Im[X[k]] in the real part of the result and Re[H[k]] Im[X[k]] +
Im[H[k]] Re[X[k]] in the imaginary part, right?
Yes.

"j" is equivalent to "i", and both mean the square root of -1,
signifying the imaginary part of a complex number.  Mathemeticians use
"i"; engineers, OTOH, are used to using "i" to signify electrical
current, so they use "j" instead.

Michel Rouzic wrote:
> I read the chapter about FFT convolution (chapter 18) on dspguide.com
> and from what I understood, to perform a FFT convolution, you perform a
> FFT on your filter kernel and on your signal (after you have changed
> their size and zero padded), you multiply each real and imaginary part
> of the two signals, and then you perform an IFFT, and with that you're
> supposed to get the same result as with a normal convolution, did I get
> it right?
Yes (given the correct implementation of multiplying the complex transforms, which previous posters have discussed), as long as you promise that you did not overlook the fact that the result of such frequency-domain convolution is a *circular* convolution, which is different from a "normal convolution". You can implement normal convolution using these circular convolutions, which is described at the end of Ch. 9 as well as in Ch. 18.
I forgot to mention that I realloc both the input signal and the kernel
and zero-pad them so that their length is, for both of them, the length
of the input signal plus the length of the kernel minus one. Thus I
avoid circular convolution, right?

Jerry W. wrote:
> Michel Rouzic wrote:
> > I read the chapter about FFT convolution (chapter 18) on dspguide.com
> > and from what I understood, to perform a FFT convolution, you perform a
> > FFT on your filter kernel and on your signal (after you have changed
> > their size and zero padded), you multiply each real and imaginary part
> > of the two signals, and then you perform an IFFT, and with that you're
> > supposed to get the same result as with a normal convolution, did I get
> > it right?
>
> Yes (given the correct implementation of multiplying the complex
> transforms, which previous posters have discussed), as long as you
> promise that you did not overlook the fact that the result of such
> frequency-domain convolution is a *circular* convolution, which is
> different from a "normal convolution". You can implement normal
> convolution using these circular convolutions, which is described at
> the end of Ch. 9 as well as in Ch. 18.
OK, now I think I did the complex multiplication right, but the result
still doesn't match what I expect. For my test I use a sound which has
its last half set to zero, which helps me find lots of bugs. When
performing that convolution with a delta function as the kernel, the
second half isn't back to zero as it should be, but contains some of
the input signal backwards, so I guess there's something wrong with my
code. Here's the part that deals with the multiplying:

	dftfunction(sound, N, 0, 1);	// 3rd arg. being whether it's an FFT or IFFT

	for (i=0; i<N; i++)
	{
		a=sound[i];
		b=sound[N-i-1];
		c=kernel[i];
		d=kernel[N-i-1];

		if (i<N/2)
			sound[i]=a*c - b*d;
		else
			sound[i]=b*c + a*d;
	}

	dftfunction(sound, N, 1, 1);

Is there something in this bit of code that can explain why my result
isn't what's expected?

Andor wrote:
> Yes.