Hi, It is known that downsampling introduces aliasing if the signal to be downsampled has high-frequency components. Hence the signal is passed through a low-pass filter before it is downsampled. I want to know whether the filtering and downsampling operations can be interchanged, as they are essentially part of the same process, or whether it is always necessary to filter the signal first and then downsample it. Thanks in advance.
Interchanging of filtering and decimation operations
Started by ●April 26, 2008
Reply by ●April 26, 2008
> Hi,
>
> It is known that downsampling introduces aliasing if the signal to be
> downsampled has high frequency components. Hence it is passed through a low
> pass filter before it is downsampled. I want to know whether the filtering
> and the downsampling operations can be interchanged as they are essentially
> the same, or is it always necessary to filter the signal first and then
> downsample it.
> Thanks in advance.

Downsampling a digital signal is analogous to digitizing an analog signal. The signal must be low-pass filtered prior to digitizing (or downsampling) to avoid aliasing, unless it is already band-limited to fs/2, half the new sampling rate.
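The folding is easy to demonstrate numerically. Here is a short sketch (my own illustration, not from the post; the 400 Hz tone, sample counts, and NumPy usage are all assumed for the example): a tone above the new Nyquist frequency is decimated with no prefilter and shows up at the folded frequency.

```python
import numpy as np

fs = 1000.0                         # original sampling rate, Hz (toy numbers)
t = np.arange(200) / fs
x = np.sin(2 * np.pi * 400.0 * t)   # tone ABOVE the new Nyquist (250 Hz)

y = x[::2]                          # decimate by 2 with no prefilter
fs_new = fs / 2                     # 500 Hz; new Nyquist is 250 Hz

# The strongest bin of the decimated spectrum is no longer at 400 Hz:
spec = np.abs(np.fft.rfft(y))
peak_hz = np.argmax(spec) * fs_new / len(y)
print(peak_hz)                      # 100.0 -- the tone folded to 500 - 400 Hz
```

Once the tone has folded to 100 Hz, no filter applied to `y` can distinguish it from a genuine 100 Hz component, which is the "unscrambling an egg" problem mentioned later in the thread.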
Reply by ●April 26, 2008
Could you explain what you are trying to do? There are many well-developed technologies in this area. James Zhuge www.go-ci.com
Reply by ●April 26, 2008
vasindagi wrote:
> Hi,
>
> It is known that downsampling introduces aliasing if the signal to be
> downsampled has high frequency components. Hence it is passed through a
> low pass filter before it is downsampled. I want to know whether the
> filtering and the downsampling operations can be interchanged as they are
> essentially the same, or is it always necessary to filter the signal first
> and then downsample it.

You must filter first. If you downsample first and then try to filter, you are effectively trying to unscramble an egg.

Erik
--
Erik de Castro Lopo
Reply by ●April 26, 2008
On Apr 26, 7:12 am, "vasindagi" <vish...@gmail.com> wrote:
> Hi,
>
> It is known that downsampling introduces aliasing if the signal to be
> downsampled has high frequency components. Hence it is passed through a low
> pass filter before it is downsampled. I want to know whether the filtering
> and the downsampling operations can be interchanged as they are essentially
> the same, or is it always necessary to filter the signal first and then
> downsample it.
> Thanks in advance.

It is funny how everyone seems to think you have to filter *before* you downsample. One of the cool things about decimation is that you simply toss out samples. If you are tossing out data from a filter, why do you need to calculate the results being tossed? So you can combine the filter and the downsampling by designing your filter to only calculate the samples you will be outputting.

Pretty simple, no?

Rick
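For an FIR filter, this amounts to stepping the convolution by the decimation factor. A minimal sketch of the idea (my own code, with an assumed name `fir_decimate`; not from the post) showing that computing only every M-th output gives exactly the filter-then-discard result:

```python
import numpy as np

def fir_decimate(x, h, M):
    """FIR filter and M:1 decimation merged: evaluate only every M-th output."""
    out = []
    # Step the filter position by M, skipping the outputs that would be tossed
    for n in range(0, len(x) - len(h) + 1, M):
        out.append(np.dot(h, x[n:n + len(h)][::-1]))  # one FIR output sample
    return np.array(out)

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = np.array([0.25, 0.5, 0.25])          # toy lowpass taps
M = 4

full = np.convolve(x, h, mode='valid')   # filter everything at the full rate...
ref = full[::M]                          # ...then throw 3 of every 4 away
print(np.allclose(fir_decimate(x, h, M), ref))   # True
```

The merged loop does 1/M of the multiply-accumulate work of the full-rate filter while producing numerically identical kept samples.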
Reply by ●April 26, 2008
"rickman" <gnuarm@gmail.com> wrote in message news:c4a4ca29-9b14-48a3-8161-b59bbdb5f9e9@f63g2000hsf.googlegroups.com...> > It is funny how everyone seems to think you have to filter > *before* > you down sample. One of the cool things about decimation is > that you > simply toss out samples. If you are tossing out data from a > filter, > why do you need to calculate the results being tossed??? So > you can > combine the filter and down sampling by designing your filter > to only > calculate the samples you will be outputting. > > Pretty simple, no? >Yes. Pretty simple and pretty wrong too. Before you downsample you have to insure that the signal being downsampled has no frequency components above the *new* Nyquist frequency (which is lower than the old Nyquist frequency). You can't do this after you've downsampled because these components will have been aliased into your downsampled signal. As Eric said, it's like trying to unscramble an egg.> Rick
Reply by ●April 27, 2008
I think both Rick and John are correct. Rick is talking about the actual implementation of the (filter + decimation) algorithm. There are many ways to speed it up, including skipping the samples that will be tossed out. This method is widely used with FIR decimation filters. When you combine the FIR decimation filter and the down-sampling process together in one programming loop, there are other tricks as well, such as taking advantage of zeros in the taps, or of the symmetry of the taps. With all the tricks applied, you can achieve 2 to 4 times the efficiency for a 2:1 decimation. With an IIR filter, because the history buffer has to be kept up to date at the full rate, I don't see an easy way to skip the samples to be tossed. John is probably talking about the block diagram, where the decimation filter has to be applied before the downsampling process. James www.go-ci.com
Reply by ●April 27, 2008
On Apr 26, 7:41 pm, "John E. Hadstate" <jh113...@hotmail.com> wrote:
> Yes. Pretty simple and pretty wrong too.
>
> Before you downsample you have to insure that the signal being
> downsampled has no frequency components above the *new* Nyquist
> frequency (which is lower than the old Nyquist frequency).

Only the samples you plan to keep. The samples you plan on throwing away later can contain anything, including aliasing. So you don't need to filter the entire signal before downsampling. You only need to correctly compute the subset of filtered outputs that you keep, which requires less computational effort than low-pass filtering at the full rate. IMHO. YMMV.
Reply by ●April 27, 2008
"John E. Hadstate" <jh113355@hotmail.com> wrote in message news:TOudnSACULdpeo7VnZ2dnUVZ_uKpnZ2d@supernews.com...> > "rickman" <gnuarm@gmail.com> wrote in message > news:c4a4ca29-9b14-48a3-8161-b59bbdb5f9e9@f63g2000hsf.googlegroups.com... >> >> It is funny how everyone seems to think you have to filter >> *before* >> you down sample. One of the cool things about decimation is >> that you >> simply toss out samples. If you are tossing out data from a >> filter, >> why do you need to calculate the results being tossed??? So >> you can >> combine the filter and down sampling by designing your >> filter to only >> calculate the samples you will be outputting. >> >> Pretty simple, no? >> > > Yes. Pretty simple and pretty wrong too. > > Before you downsample you have to insure that the signal > being downsampled has no frequency components above the *new* > Nyquist frequency (which is lower than the old Nyquist > frequency). You can't do this after you've downsampled > because these components will have been aliased into your > downsampled signal. As Eric said, it's like trying to > unscramble an egg. > >> Rick >I am given a signal that is sampled at 10 Ms/s. I know it contains at least three strong narrowband signals, one at 1.0103 MHz.(the signal of interest), one at 2.3230 MHz. and one at 4.3436 MHz. Since I am severely pressed for CPU cycles, I need to decimate by 3 to process the signal. If I can forego the filtering before decimation, I can use the CPU samples to good advantage. Would either of you gentlemen be so good as to explain to me a method by which I can design this decimation without prefiltering and without corrupting the SOI?
Reply by ●April 27, 2008
On Apr 27, 9:41 am, "John E. Hadstate" <jh113...@hotmail.com> wrote:
> "John E. Hadstate" <jh113...@hotmail.com> wrote in message
> news:TOudnSACULdpeo7VnZ2dnUVZ_uKpnZ2d@supernews.com...
> > "rickman" <gnu...@gmail.com> wrote in message
> > news:c4a4ca29-9b14-48a3-8161-b59bbdb5f9e9@f63g2000hsf.googlegroups.com...
> >> It is funny how everyone seems to think you have to filter *before*
> >> you down sample. One of the cool things about decimation is that you
> >> simply toss out samples. If you are tossing out data from a filter,
> >> why do you need to calculate the results being tossed??? So you can
> >> combine the filter and down sampling by designing your filter to only
> >> calculate the samples you will be outputting.
> >>
> >> Pretty simple, no?
> >
> > Yes. Pretty simple and pretty wrong too.
> >
> > Before you downsample you have to insure that the signal being
> > downsampled has no frequency components above the *new* Nyquist
> > frequency (which is lower than the old Nyquist frequency). You can't
> > do this after you've downsampled because these components will have
> > been aliased into your downsampled signal. As Eric said, it's like
> > trying to unscramble an egg.
>
> I am given a signal that is sampled at 10 Ms/s. I know it contains at
> least three strong narrowband signals, one at 1.0103 MHz (the signal of
> interest), one at 2.3230 MHz and one at 4.3436 MHz. Since I am severely
> pressed for CPU cycles, I need to decimate by 3 to process the signal.
> If I can forego the filtering before decimation, I can use the CPU
> cycles to good advantage.
>
> Would either of you gentlemen be so good as to explain to me a method
> by which I can design this decimation without prefiltering and without
> corrupting the SOI?

I think I didn't read the OP very well. The question was: can the filter and decimation steps be interchanged? My answer was, "Yes, you can combine them."
No one is saying you don't have to use a filter, and no one is saying that the filter can come *after* the decimation. But the two can be combined, to the advantage of saving processing steps.

If we decimate by 4, your filter is going to produce the results Y0, Y1, Y2, Y3, Y4, Y5, Y6, ... and you will be decimating to retain Y0, Y4, Y8, ... Why bother to calculate the samples that you are tossing: Y1, Y2, Y3, Y5, Y6, Y7, ...? I am thinking of an FIR filter implementation, of course, where a decimation by 4 will save you 3/4 of the calculations in the filter. But even in an IIR filter you can save work if you are calculating the results on the fly. If you are combining 4 steps of an IIR into one operation, you can save processor work in loading and storing the intermediate results and loading up the registers for computation.

The point is that by combining the decimation with the filtering, you get the *same* computation as the separate filter and decimation, with fewer computation steps. This also works for interpolation filters and for combined decimation/interpolation filters. If this explanation is not clear, try doing a little research on the web; there are any number of resources that explain decimating filters.
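The FIR form of this combined structure is the standard polyphase decimator: split the taps into M subfilters, each running at the low output rate, and sum their outputs. A sketch (my own implementation with assumed names, offered as an illustration rather than a reference design) that checks the polyphase result against filter-then-decimate:

```python
import numpy as np

def polyphase_decimate(x, h, M):
    """M:1 FIR decimation via M polyphase branches, each at the output rate."""
    h = np.asarray(h, dtype=float)
    h = np.concatenate([h, np.zeros((-len(h)) % M)])   # pad taps to a multiple of M
    n_out = -(-(len(x) + len(h) - 1) // M)             # ceil of full-conv length / M
    y = np.zeros(n_out + 1)                            # +1 slack for branch offsets
    for p in range(M):
        hp = h[p::M]                                   # branch taps: h[p], h[p+M], ...
        if p == 0:
            xp, off = x[::M], 0                        # branch input x[0], x[M], ...
        else:
            xp, off = x[M - p::M], 1                   # starts one output sample late
        c = np.convolve(hp, xp)                        # convolution at the LOW rate
        y[off:off + len(c)] += c
    return y[:n_out]

rng = np.random.default_rng(1)
x = rng.standard_normal(50)
h = np.array([0.1, 0.2, 0.4, 0.2, 0.1])    # toy lowpass taps
M = 3

ref = np.convolve(x, h)[::M]               # filter at full rate, then decimate
print(np.allclose(polyphase_decimate(x, h, M)[:len(ref)], ref))   # True
```

Every multiply in the polyphase version produces a sample that is actually kept, which is exactly the "don't compute what you toss" argument made above, in structured form.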