DSPRelated.com
Forums

trigonometric "downsampling" in frequency domain

Started by RobR June 2, 2008
Hi,

Problem description:

I have 12 complex frequency-domain (FD) samples that are a subset of a
time-domain (TD) sampled signal (known sampling rate) that has been
converted to the FD via a fixed-size FFT, so the frequency spacing is
known.

Now I pad those 12 FD samples with zeros and perform a 2048-point IFFT to
obtain an oversampled (trigonometrically interpolated) TD representation.
Then I apply a TD window filter. Next I transform back to the FD via a
2048-point FFT over all TD samples (including those that have been
suppressed by the filter). So I end up with an FD representation with
2048/12 (about 170) times the FD resolution I started with. My
understanding is that if I had frequency spacing x in the beginning, I now
have frequency spacing x/170, i.e. about 170 times finer.
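Roughly, in numpy-style code (a sketch, not my actual system code; the 12
FD samples are just a placeholder vector, since the real ones come out of
the system's fixed-size FFT):

```python
import numpy as np

N, M = 12, 2048

# 12 complex FD samples -- placeholder values for illustration
X12 = np.exp(1j * np.linspace(0.0, np.pi, N))

# zero-pad to 2048 bins and go to the TD (trigonometric interpolation)
Xpad = np.concatenate([X12, np.zeros(M - N, dtype=complex)])
x_td = np.fft.ifft(Xpad)          # 2048 oversampled TD samples

# TD window filter: keep roughly the first 2048/12 = 170 samples
w = np.zeros(M)
w[:M // N] = 1.0
x_win = x_td * w

# back to the FD over all 2048 TD samples
X2048 = np.fft.fft(x_win)
```

Note that in this sketch (padding at the end, no window applied) the round
trip is exact, so the original 12 samples sit unchanged in the first 12
bins of the 2048-point FFT; it is the window that spreads energy into the
other bins.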

Now my question:
What is the correct FD operation to merge these finely resolved FD samples
back to the original frequency spacing?
In other words: I want my 12 complex FD samples back (but now filtered, of
course).
Do I take a bunch of the 170-times-finer samples and add them together, or
do I have to apply an (anti-alias) filter and then pick out every 170th
sample? Or something else?

Very best regards,
Robert
RobR wrote:
> i have 12 complex frequency domain (FD) samples that are a subset of a
> time domain (TD) sampled signal (known sampling rate) that has been FD
> converted via fixed size fft; so the frequency spacing is known.
>
> now i pad zeros to those 12 FD samples and perform a 2048-ifft to achieve
> an oversampled (trigonometrically interpolated) TD representation.
> then i apply a TD window filter.
> next i transform back to FD via 2048-fft over all TD samples (also those
> that have been suppressed by the filter).
OK so far.
> so i end up with an FD representation being 2048/12 (about 170) times the
> FD resolution i started with.
No. You have more points to plot -- you call them trigonometrically
interpolated -- but no more information; therefore, no more resolution.
*No information was created. You have learned nothing new.*
> So my understanding is that when i had frequency spacing x in the
> beginning, now i have frequency spacing x/170, so about 170 times finer.
Yes.
> Now my question:
> what is the correct FD operation to merge these finely resolved FD samples
> to the original frequency spacing?
> In other words: i want my 12 complex FD samples back (but now filtered of
> course).
Why didn't you just filter the time-domain signal?
> Is it take a bunch of 170 finer samples and add them together, or do i
> have to apply an (anti-alias) filter and then pick out every 170th sample?
> Or?
What do you want to accomplish? All filters, no matter how implemented,
trash the beginning and end of the signal (start and end transients). A
sequence of only 12 samples is hard to deal with.

Jerry
--
Engineering is the art of making what you want from things you can get.
Thanks for your answer Jerry.

I know I do not get more information by interpolating.
But after my operation I have those 2048 FD samples carrying the same
information as the 12 from the beginning, but filtered.
And since that information is spread along all those 2048 samples, I want
to know the correct operation to merge it back into the 12.

I do not filter in the TD because that would require a runtime-configurable
bandpass filter.
The initial FFT is done in the system anyway.
Also, I think FFT-IFFT filtering, also called fast convolution, is more
efficient in my case.
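By FFT-IFFT filtering I mean circular fast convolution, roughly like this
(a generic numpy sketch, not the actual system code):

```python
import numpy as np

def fast_conv(x, h):
    """Circular convolution of x with filter h via the frequency domain."""
    n = len(x)
    # multiply the spectra instead of convolving in the time domain
    return np.fft.ifft(np.fft.fft(x) * np.fft.fft(h, n))
```

For real filtering jobs the block edges need overlap-add or overlap-save
handling; this shows only the core FD multiplication.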

My question is not about filters impairing a signal.

Best regards,
Robert


RobR wrote:
> Thanks for your answer Jerry.
>
> I know i do not get more information by interpolating.
> But after my operation i have those 2048 FD samples with the same
> information as the 12 from the beginning, but filtered.
Why create the extra samples only to discard them later?
> And, the information is spread along all those 2048 samples and i want to
> know the correct operation to merge it back to the 12.
>
> I do not filter in TD because this would require a runtime configurable
> bandpass filter.
> The initial fft is done in the system anyway.
> Also I think fft-ifft filtering, also called fast convolution, is more
> efficient for my case.
Fast convolution is more efficient than transversal convolution when the
signal size exceeds some number of samples. Although that number depends
somewhat on the processor, it is never as small as 12. You don't propose
simple fast convolution, however. You also (needlessly, as far as I can
see) interpolate and decimate.
> My question is not about filters impairing a signal.
Indeed it was not. You might benefit, however, by considering how the
inevitable impairment affects your design.

  ...

Jerry
--
Engineering is the art of making what you want from things you can get.

> RobR wrote:
>> I know i do not get more information by interpolating.
>> But after my operation i have those 2048 FD samples with the same
>> information as the 12 from the beginning, but filtered.
>
> Why create the extra samples only to discard them later?
The TD signal may have a certain amount of timing delay/advance.
If I do:
- 12 FD samples (with TD timing error => FD phase increment over frequency)
- 12-point ifft them -> 12 TD samples (it's a transform, so full info is
  kept up till now)
- apply an element-wise window filter (i.e. 110000000000); now info is
  lost!
- convert back to the FD by a 12-point fft
- I come out with quite a high error vector if the timing error was
  somewhere in between two discrete sampling points.

So, by oversampling I want to achieve a better TD resolution, such that
the window filter (which of course is adapted to the higher sampling rate)
does not kill as much information as is the case with the former approach.
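To illustrate, a numpy sketch with an assumed half-sample timing error (a
flat 12-sample FD vector with a linear phase ramp stands in for the real
signal):

```python
import numpy as np

N = 12
k = np.arange(N)
tau = 0.5                                # timing error in TD samples (assumed)

# flat FD magnitude with a phase increment over frequency
# = TD impulse delayed by tau samples
X = np.exp(-2j * np.pi * k * tau / N)

x = np.fft.ifft(X)                       # pulse energy sits *between* samples 0 and 1
w = np.array([1.0, 1.0] + [0.0] * (N - 2))   # element-wise window 1 1 0 ... 0
X_filt = np.fft.fft(x * w)

# relative error vector after the window
err = np.linalg.norm(X_filt - X) / np.linalg.norm(X)
```

For tau = 0 the window keeps the whole impulse and err is 0; for tau = 0.5
the Dirichlet-kernel tails outside the window carry real energy and err
comes out around 0.4 -- that is the loss the oversampled version is meant
to reduce.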
>> And, the information is spread along all those 2048 samples and i want
>> to know the correct operation to merge it back to the 12.
>>
>> I do not filter in TD because this would require a runtime configurable
>> bandpass filter.
>> The initial fft is done in the system anyway.
>> Also I think fft-ifft filtering, also called fast convolution, is more
>> efficient for my case.
>
> Fast convolution is more efficient than transversal convolution when the
> signal size exceeds some number of samples. Although that number depends
> somewhat on the processor, it is never as small as 12. You don't propose
> simple fast convolution, however. You also (needlessly, as far as I can
> see) interpolate and decimate.
I took the 12 samples as a working assumption. For the system I work on,
this value may grow up to 1200, and

1200² >> (2048 log2 2048)*3 + 2048
RobR wrote:
> The TD signal may have a certain amount of timing delay/advance.
> [...]
> So, by oversampling I want to achieve a better TD resolution, such that
> the window filter (which of course is adapted to the higher sampling
> rate) does not kill that much information, as is the case with the
> former approach.
I understand. By upsampling and later choosing the starting point to
downsample, you allow fine time shifts in the signal. I don't understand
why you don't downsample first and then filter, but I didn't work out the
timing. Maybe it's more efficient. It might have helped had you noted that
up front.
> I took the 12 samples as a working assumption. For the system I work on,
> this value may grow up to 1200, and
>
> 1200² >> (2048 log 2048)*3 + 2048
If it looks like trouble at first, account for the filtering transients.

Back to your original question: if you upsampled properly, the new signal
contains all of the information in the original and nothing more.
Therefore, it ought not to change if you filter it appropriately for
downsampling. (That would be a good overall check, but not worthwhile in
the working system.) Decimate in good health. :-)

Jerry
--
Engineering is the art of making what you want from things you can get.
> I understand. By upsampling and later choosing the starting point to
> downsample, you allow fine time shifts in the signal. I don't understand
> why you don't downsample first and then filter, but I didn't work out
> the timing. Maybe it's more efficient. It might have helped had you
> noted that up front.
I start with those 12 samples and want to come back to this amount. If I
would be fine with those ifft'ed 12 samples, window filtered, back to the
FD via a 12-point fft, I would do it.

My wanted information in the TD is more or less a peak of power within the
time frame. The window filter lets this peak pass and attenuates the rest
(actually it completely cancels it out). With the oversampled version, it
seems there is more info concentrated into the peak (especially in case of
fractional timing errors), so the rest can be attenuated with fewer side
effects. That's the only, but significant, reason for oversampling.

As I said, I now end up with the oversampled amount of samples after the
fft in the FD.
> Back to your original question: if you upsampled properly, the new
> signal contains all of the information in the original and nothing more.
> Therefore, it ought not to change if you filter it appropriately for
> downsampling. (That would be a good overall check, but not worthwhile in
> the working system.) Decimate in good health. :-)
So with decimating you mean: take every nth sample? In my case, about
every 170th? Well, unfortunately this does not give the fully correct
result.

Assume I have all ones in the FD over my twelve samples.
- ifft gets me a peak at the first position in the TD, rest 0s.
- Window filter: [1 0 0 0 0 0 0 0 0 0 0 0]
- Back to the FD via fft.
- Result: all ones, as I started: correct.

This is the best-case scenario for the non-oversampling version. It fails
when my original FD signal has a fractional (fractional regarding the
sampling frequency) timing error.

So instead:
- Use a 2048-point ifft; I come out with abs(sinc)-shaped values in the TD
  with the main peak at the first position.
- Apply the expanded window filter
  [11111111...[170]00000000000000000...[2048]] (actually, this filter
  wraps around to the left a bit, but leave that out for now).
- Back to the FD via a 2048-point fft.

Now I want the correct operation that brings me back as near as possible
to my starting 12 all-one samples. The energy is spread along all 2048
samples and has to be focused back to 12. When taking just every nth
sample, maybe the information is generally kept, but energy is lost.

If it's not completely clear, I may provide some MATLAB code that
clarifies.

Best regards,
Robert
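On "take every nth sample": picking every Lth FD bin of the long FFT is
exactly equivalent to first folding (time-aliasing) the long TD signal
down to one short period and then taking the short FFT, so the spread
energy is summed in, not dropped. A numpy sketch (note 12 does not divide
2048, so the sketch uses N = 16, M = 2048, L = 128 to keep the ratio an
integer; the identity itself is general):

```python
import numpy as np

N, M = 16, 2048
L = M // N                                   # decimation factor, here 128

rng = np.random.default_rng(0)
# stand-in for the windowed TD signal (arbitrary complex data)
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)

X_fine = np.fft.fft(x)                       # 2048 FD samples
X_decim = X_fine[::L]                        # "pick every Lth sample" in the FD

x_folded = x.reshape(L, N).sum(axis=0)       # fold the TD signal into one N-sample period
X_coarse = np.fft.fft(x_folded)              # N-point FFT of the folded signal

# X_decim and X_coarse agree to machine precision
```

So for the non-integer 2048/12 ratio an exact per-bin pick does not exist,
but the principle stands: decimation in the FD corresponds to summation
(aliasing) in the TD, not to discarding energy.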

Ahh Jerry, you were right in pointing out the filter-impairment topic I
ignored at first.
My TD window filter transforms to a sinc-shaped filter in the FD.
What I see back in the FD are my ones convolved with the FD filter
representation. This is the "spread" I mentioned.
Probably there is no better way, then, than to make my TD window
sinc-shaped such that it becomes rectangular in the FD.
If so, I will probably end up with my 12 ones out of the 2048 FD samples.
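One way to build such a window (a sketch; the 170-bin width is just the
figure from above): define the desired rectangle in the FD and ifft it,
which makes the TD window a periodic (Dirichlet) sinc by construction:

```python
import numpy as np

M, B = 2048, 170                  # FFT size and desired FD width (assumed)

W_rect = np.zeros(M)
W_rect[:B] = 1.0                  # desired rectangular FD response

w = np.fft.ifft(W_rect)           # TD window: a periodic (Dirichlet) sinc

# Multiplying a TD signal by w is circular convolution of its spectrum
# with W_rect (scaled by 1/M), so the window's own FD spread is confined
# to exactly B bins.
```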

So best thanks Jerry for your hint on this!

Robert

RobR wrote:
> Ahh Jerry you where right with pointing out the filter impairment topic I
> ignored at first.
> [...]
> So best thanks Jerry for your hint on this!
I'm glad I could help. For a while there, it seemed that I was in your
way.

Jerry
--
Engineering is the art of making what you want from things you can get.