DSPRelated.com
Forums

why we want to do folding while performing convolution

Started by sk July 18, 2006
hi all
can you explain in detail why there is a need for folding when we
perform the convolution operation, with a (practical) example?

thanks in advance
sk

sk wrote:
> hi all
> can you explain in detail why there is a need for folding when we
> perform the convolution operation, with a (practical) example?
It gets your laundry done faster?

Sorry. I presume you mean "flipping"? It results from the (t - tau) in

  y(t) = \int_{-\infty}^{+\infty} x(tau) * g(t - tau) d tau

You simply flip g(t) about the y-axis, then slide it left or right
according to t.

This is the sort of thing that would be much better explained in person.

--Randy
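In discrete time, Randy's flip-and-slide recipe reads y[n] = sum_k x[k] h[n-k], where the (n - k) index is what "flips" one sequence. A minimal sketch of this (the signal and impulse-response values below are made up for illustration):

```python
import numpy as np

# Discrete flip-and-slide: y[n] = sum_k x[k] * h[n - k].
# The (n - k) index is what "flips" h about the origin.
x = np.array([1.0, 2.0, 3.0])   # illustrative input (made-up values)
h = np.array([0.5, 0.25])       # illustrative impulse response

y = np.zeros(len(x) + len(h) - 1)
for n in range(len(y)):
    for k in range(len(x)):
        if 0 <= n - k < len(h):
            y[n] += x[k] * h[n - k]   # h is indexed backwards relative to x

# Matches the library convolution:
assert np.allclose(y, np.convolve(x, h))
```

The inner loop never stores a flipped copy of h; the reversal happens purely through the (n - k) index, which is the discrete counterpart of the (t - tau) in the integral above.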
thanks for your response...
I know the formula... but I want to know why we want to do folding?
Can't we do it without folding?...


Randy Yates wrote:

> Sorry. I presume you mean "flipping"? It results from the (t - tau) in
>
>   y(t) = \int_{-\infty}^{+\infty} x(tau) * g(t - tau) d tau
> ...
sk wrote:
> I know the formula... but I want to know why we want to do folding?
> Can't we do it without folding?...
sk,
Here is an alternative explanation that may be useful to you.

Below, I have drawn an example of a five-tap FIR filter, being of course
an example of convolution. The drawing will make sense only if you are
using a mono-spaced typeface to display this message.

We have signal samples coming in from the left into a RAM 'delay-line.'
Also, we have a set of coefficients, stored in memory in the order shown.

Below this, for illustration purposes I have drawn a representation of
the sort of values that may have been stored in the coefficient memory.
This is intended to be the impulse response of an example single-pole
low-pass filter.

Each output sample is generated by doing multiply-accumulates between
x(n) and h(1), between x(n-1) and h(2), and so on. In other words,
convolution is being performed.

______________________________________________________________________

signal--->    x(n)    x(n-1)    x(n-2)    x(n-3)    x(n-4)

coefficients  h(1)    h(2)      h(3)      h(4)      h(5)

        *
   ^    * *
   |    *   *        stored impulse-response
   v    *     *
   |    *       *
        *         ***   ***  ---> t
______________________________________________________________________

Now, to answer your question, look at what happens when the signal
consists of one sample of value 1.0, preceded and followed by many
samples of value 0.0. The output of successive convolutions will then
be a replica of the stored impulse response of the example low-pass
filter. So far so good.

Now, look at the time relationships:

Sample x(n) is the MOST RECENT signal sample, and it gets multiplied by
h(1), the LEAST RECENT sample of the impulse response of the low-pass
filter.

Sample x(n-4) is the LEAST RECENT signal sample, and it gets multiplied
by h(5), the MOST RECENT sample of the impulse response of the low-pass
filter.
In other words, if the impulse response of the FIR filter is to be the
same as the example filter, the time relationships between the stored
signal samples and the stored impulse-response samples must be reversed
or 'flipped.'

(If the signal and impulse-response samples were in the SAME order, then
the impulse response of the FIR filter would be the REVERSE of the
impulse response of the original low-pass filter.)

Regards,
John
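John's delay-line picture can be checked numerically. In the sketch below the five coefficient values are made up, standing in for his sampled single-pole low-pass response; the point is that pairing the newest sample x(n) with h(1) makes an impulse replay the stored coefficients in order:

```python
import numpy as np

# Made-up 5-tap coefficients standing in for a sampled single-pole
# low-pass impulse response (decaying values, as in John's sketch).
h = np.array([1.0, 0.6, 0.36, 0.22, 0.13])

x = np.zeros(10)
x[0] = 1.0                        # one sample of value 1.0 amid zeros

delay = np.zeros(5)               # delay[0] holds the MOST RECENT sample x(n)
out = []
for sample in x:
    delay = np.roll(delay, 1)     # shift the RAM 'delay-line' one step
    delay[0] = sample
    out.append(np.dot(delay, h))  # x(n)h(1) + x(n-1)h(2) + ... MAC loop

# The impulse meets h(1) first, then h(2), ... so the output replays
# the stored coefficients in order: the filter's impulse response IS h.
assert np.allclose(out[:5], h)
```

Relative to the signal's arrival order the coefficients are read "backwards", which is exactly the reversal John describes: store them in the same order as the signal and the filter's impulse response comes out time-reversed.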
sk wrote:
> thanks for your response...
> I know the formula... but I want to know why we want to do folding?
> Can't we do it without folding?...
Isn't that kinda like asking, "Why do we have to work? Why can't we just
get money without working?"

I don't know, sk, maybe I'm misunderstanding your question. If you're
looking for a good explanation of why passing a signal through a linear
system is equivalent to convolution, then the best one I've ever come
across is section 3.3, "Continuous-Time LTI Systems: The Convolution
Integral," in Oppenheim et al.'s "Signals and Systems." Maybe you should
go to the library and read this.

--Randy
sk wrote:
> thanks for your response...
> I know the formula... but I want to know why we want to do folding?
> Can't we do it without folding?...
Randy assumed that by "folding", you meant turning end for end. Is that
the case? There's no sense in answering the wrong question.

Jerry
--
Engineering is the art of making what you want from things you can get.
John Monro wrote:
> In other words, if the impulse response of the FIR filter is to be the
> same as the example filter, the time relationships between the stored
> signal samples and the stored impulse-response samples must be reversed
> or 'flipped.'
> ...
The site http://www-es.fernuni-hagen.de/JAVA/DisFaltung/convol.html also
illustrates this example (use x(n) = delta and h(n) = exp(-nT)).

Intuitively, why do you need to flip it? I don't know, but if you don't
flip, you end up with the cross-correlation between the two functions,
an entirely different animal (but it uses the same math, which is also
why you can use an FFT to perform a fast cross-correlation if you
pre-flip one of the inputs).
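Steve's point can be demonstrated directly: skipping the flip gives cross-correlation, and pre-flipping one input turns an FFT convolver into a fast correlator. A small sketch with arbitrary made-up signal values:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # arbitrary example signals
h = np.array([1.0, 0.5, 0.25])

# No flip -> cross-correlation, which is convolution with h reversed:
assert np.allclose(np.correlate(x, h, mode='full'),
                   np.convolve(x, h[::-1]))

# Hence FFT-based fast correlation: pre-flip one input, then multiply
# spectra exactly as in ordinary FFT convolution.
n = len(x) + len(h) - 1              # linear-convolution length
fft_corr = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(h[::-1], n), n)
assert np.allclose(fft_corr, np.correlate(x, h, mode='full'))
```

Same arithmetic either way; the only difference between the two operations is whether one sequence is reversed before the multiply-accumulates.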
steve wrote:

   ...

> intuitively why do you need to flip it? I don't know
Because you want to start with the first sample interacting with the
first coefficient, and then have the two data sets slide along past one
another. If the data are used in acquisition order and the convolution
coefficients aren't reversed, that doesn't happen.

For continuous signals, the dummy variable tau is used, and the
integrand involves t - tau. That minus sign flips the coefficients.

Jerry
--
Engineering is the art of making what you want from things you can get.
steve wrote:

> intuitively why do you need to flip it? I don't know
Because the OLDEST sample of the stored impulse response needs to come
out first.

Regards,
John
steve wrote:
> intuitively why do you need to flip it? I don't know
The folding/flipping/convolving pops straight out as a result of the
intuitive arguments behind, and the derivation of, the convolution sum
formula. The flipping as such may not be "intuitive", but once you have
derived the convolution sum formula using ONLY the linear-systems
properties of superposition and shift invariance, you can easily see how
the "flipping" results and why it occurs, and once you have seen these
two, you don't need an argument as to why one "wants" to do the flipping.

The omission of this derivation of the convolution sum formula is one of
the greatest disservices modern DSP texts do to the profession. The
convolution sum formula is so fundamental that everybody should be able
to derive it in the way indicated above. I sent a sketch of this
derivation to Rick Lyons for his 3rd edition. It remains to be seen if
he incorporates it if and when a 3rd edition is released.

BTW, I only did this derivation a year ago or so, myself. Only then,
after having dabbled with DSP for some 15 years, could I convince myself
that I understood what convolution is all about.

Rune
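Rune's derivation can be sketched in a few lines of code: by shift invariance each input sample x[k] produces x[k] times the impulse response, delayed by k; by superposition the output is the sum of those copies. That sum is exactly the convolution sum (the signal values below are made up):

```python
import numpy as np

h = np.array([1.0, 0.5, 0.25])   # made-up impulse response
x = np.array([2.0, -1.0, 3.0])   # made-up input

# Shift invariance: sample x[k] (an impulse of height x[k] at time k)
# produces x[k] * h, delayed by k samples.
# Superposition: the total output is the sum of those responses.
y = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y[k:k + len(h)] += xk * h

# That sum IS the convolution sum; the "flip" falls out of the
# y[n] = sum_k x[k] h[n-k] indexing automatically.
assert np.allclose(y, np.convolve(x, h))
```

Notice that nothing in the loop is ever flipped explicitly: the reversal only appears when you re-index the same sum as y[n] = sum_k x[k] h[n-k], which is Rune's point that the flip is a consequence of the derivation, not an extra step one chooses to perform.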